Extended consumer latency recorded after partition leadership reverts to preferred #204
Comments
I noticed by examining an RPC trace from the client that during period 5./6. the consumer is fetching from a follower. I can tell this by logging where the client is sending the FetchRequest RPCs. I see all the requests going to broker 0 (fetch blocks for partitions 0 and 2) and broker 1 (fetch block for partition 1), but none to broker 2 during this period. As soon as period 7. starts, I see FetchRequests going to all three brokers again (each with a single fetch block). I also see the following in the logs at the moment the problem resolves.
The reason the client makes this decision is that the metadata has been refreshed; in Sarama this is controlled by the client's metadata refresh interval. It surprises me that consuming from a follower introduces almost a second of latency. The client's interactions with a single broker are single threaded, so I am wondering if this is giving rise to additional latency. The client is sending a single FetchRequest carrying multiple fetch blocks to the same broker.
I am wondering about:
@ppatierno WDYT?
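For reference, the refresh interval in question is Sarama's Config.Metadata.RefreshFrequency, which defaults to 10 minutes. A minimal sketch of how a client could shorten it; the bootstrap address and the 30s value are illustrative, not what the canary actually uses:

```go
package main

import (
	"log"
	"time"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	// Sarama refreshes cluster metadata every 10 minutes by default; a shorter
	// interval (illustrative value) makes the client notice leadership and
	// preferred-read-replica changes sooner.
	config.Metadata.RefreshFrequency = 30 * time.Second

	client, err := sarama.NewClient([]string{"my-cluster-kafka-bootstrap:9092"}, config)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```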
I believe this is a Kafka bug. I think a fetch to the wrong follower will get "NOT_LEADER_OR_FOLLOWER", but the client doesn't fall back to the leader when receiving this error.
OK, I found I was wrong, because the canary uses the Sarama client, not the Java client. So maybe they have similar issues there.
Thanks @showuon. In the case of the canary the client is Sarama and I don't think it is suffering from the same client defect. In this case, the problem is resolved by the refreshing of the metadata (so the client is respecting the preferred read replica). The client logs:
just after the timed metadata refresh (10 mins). That's coming from: https://github.com/Shopify/sarama/blob/v1.35.0/consumer.go#L977 I think fixing the canary to refresh its client metadata more frequently is the correct thing to do. Separately, I am still puzzled why fetching from a follower introduces so much latency in this case (~1000ms). The canary's producer is using RequiredAcks WaitForAll (-1) when publishing. I know that all replicas are in-sync, so this should mean that all brokers have the message. The canary is using a SyncProducer and the code is timing the latency between ProduceRequest and ProduceResponse. I know this is short (~5ms). I cannot account for the remaining ~990ms.
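For context on the produce-side measurement described above, here is a minimal sketch of a Sarama SyncProducer configured with RequiredAcks = WaitForAll and timed around SendMessage; the bootstrap address and topic name are illustrative, not the canary's actual code:

```go
package main

import (
	"log"
	"time"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Producer.RequiredAcks = sarama.WaitForAll // acks=-1: wait for all in-sync replicas
	config.Producer.Return.Successes = true          // required by the SyncProducer

	producer, err := sarama.NewSyncProducer([]string{"my-cluster-kafka-bootstrap:9092"}, config)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	start := time.Now()
	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "canary-topic",
		Value: sarama.StringEncoder("payload"),
	})
	if err != nil {
		log.Fatal(err)
	}
	// This is the ~5ms ProduceRequest/ProduceResponse latency referred to above;
	// it does not include the consumer-side fetch latency.
	log.Printf("partition=%d offset=%d produce latency=%s", partition, offset, time.Since(start))
}
```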
I believe I have answered my own question. What I am seeing is a consequence of how high water mark propagation behaves in partitions with sparse traffic. The leader informs the followers of the new high water mark via its responses to their replica fetch requests, so with sparse traffic a follower (and therefore any consumer fetching from it) can lag behind the leader's high water mark by up to a full fetch wait.
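To make the arithmetic concrete, here is a rough worst-case sketch. It assumes the default fetch wait settings (replica.fetch.wait.max.ms and fetch.max.wait.ms, 500 ms each); these are assumptions, not values confirmed for this cluster:

```go
package main

import "fmt"

func main() {
	// Assumed broker/consumer defaults, not values measured from this cluster:
	// replica.fetch.wait.max.ms: the follower's fetch to the leader can park this
	// long before the response carries the new high water mark.
	const replicaFetchWaitMaxMs = 500
	// fetch.max.wait.ms: the consumer's own fetch to the follower can park this
	// long before the follower returns newly visible records.
	const consumerFetchMaxWaitMs = 500

	// With sparse traffic the two waits can stack back to back before a record
	// becomes visible to a consumer reading from the follower.
	fmt.Printf("worst-case extra latency ~= %d ms\n", replicaFetchWaitMaxMs+consumerFetchMaxWaitMs)
}
```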
@k-wall, nice investigation!
… partition leadership changes. This minimises the time the canary spends consuming from followers after the partition leadership changes on the broker, and so avoids the end-to-end latency measure being skewed. Signed-off-by: kwall <kwall@apache.org>
I am proposing #206 to resolve this defect. Earlier I spoke about refactoring the canary not to use a consumer group. Whilst I think that is the right approach, we intend to rewrite the Canary in Java, so I don't think it is worth putting the time into the Go implementation. I believe #206 is a worthwhile improvement that should reduce the worst of the spurious latency measurements.
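This is not the actual change in #206, just a minimal sketch of the general idea, assuming the canary already observes the current partition leadership elsewhere (for example from its periodic topic reconciliation) and has access to the underlying sarama.Client; the function name and parameters are hypothetical:

```go
package canary

import (
	"log"

	"github.com/Shopify/sarama"
)

// maybeRefreshMetadata compares the leader just observed for a partition with the
// consumer client's cached view, and forces a metadata refresh when they disagree,
// so the consumer moves back to the preferred read replica without waiting for the
// timed refresh. Sketch only; not the implementation in #206.
func maybeRefreshMetadata(client sarama.Client, topic string, partition int32, observedLeaderID int32) error {
	cachedLeader, err := client.Leader(topic, partition)
	if err != nil {
		return err
	}
	if cachedLeader.ID() != observedLeaderID {
		log.Printf("leader of %s-%d is now %d (cached %d): refreshing metadata",
			topic, partition, observedLeaderID, cachedLeader.ID())
		return client.RefreshMetadata(topic)
	}
	return nil
}
```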
@k-wall when you described the problem, this issue I raised in the Sarama community came to my mind: IBM/sarama#1927
@ppatierno I don't think it is a Sarama client issue. When the Sarama client refreshes the metadata (I mean its own timed refresh), the problem goes away immediately. That's the first time the client learns that the leadership has moved. It reacts properly to that signal IMO.
I think this is Kafka working as designed. There's no Kafka mechanism for the client to learn of the leadership change until it refreshes its metadata, right?
@k-wall but isn't the behaviour you are seeing exactly the same as described in the issue I opened in the Sarama repo?
No, this is different to your report: in IBM/sarama#1927 (comment) you said "yes you are right (of course! ;-)) even after a metadata refresh nothing is changed". That's not the case here. The timed metadata refresh is resolving the issue - but we suffer potentially minutes of duff latency measurements until it fires. My proposed change triggers the metadata refresh early (so we don't have to wait for the next timed one and don't suffer the duff data).
Using strimzi-canary 0.5.0 against a three-broker instance, I am noticing that sometimes consumer latencies are unexpectedly high for a period. The problem remains for about 10 minutes until it resolves, without any intervention.
Here's the sequence of events that causes the situation:
(auto.leader.rebalance.enable = true, leader.imbalance.check.interval.seconds = 300, leader.imbalance.per.broker.percentage = 0)

The problem is impactful because it leads to seemingly spurious latency alerts.
Here's what the logs look like:
At 1. (normal operation)
At 2./3. (one broker restarted - normal operation from two brokers)
At 5./6. (partition leadership reverts... produce latencies normal... consume latencies for partition 2 extended)
At 7. (after about 10 minutes, the extended consume latency for partition 2 disappears).
And here's what it looks like in Prometheus: