[Robustness] Etcd v3.4 required more leader elections than expected #17455
Comments
member-0, with a less up-to-date log, became a candidate earlier than the other two nodes in several rounds of elections. It could not win the election, yet it prevented the other two from becoming candidates. Enabling prevote can reduce this problem. @serathius please assign this to me.
Just a note, I didn't treat this as a flake as there are no signs of this being an issue with robustness. We have been running robustness tests on v3.4 for a long time and this is the first time it happened twice in a row. It could point to some regression in v3.4. cc @ahrtr @jmhbnz @siyuanfoundation, are you aware of any changes that could cause regressions in v3.4 in the number of leader elections required?
Right, I haven't yet dived deep into the logs, but I would want to make sure that we know:
Say all 3 members were in term 3, and there was no leader. member-0 had a less up-to-date log than the other two. When member-0 became a candidate first (it got a smaller random timeout), it stepped to term 4 and sent vote requests to the other two (which were still in term 3). Upon receiving the vote messages, member-1 and member-2 became followers of term 4 and their election timeout counters were reset, so they had no chance to become candidates in term 3. member-1 and member-2 both rejected voting for member-0 because they had more up-to-date logs. Now we have 3 members in term 4 with no leader, waiting for another round of election. This happened multiple times in a row until member-0 finally did not become a candidate before the other two.
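For illustration, here is a minimal sketch of enabling pre-vote at the raft level. The import path (the raft package as vendored in etcd v3.4) and the tick values are assumptions, not the actual robustness-test setup. With `PreVote` set, a would-be candidate first runs a pre-election and only increments its term if a quorum indicates it could win, so a member with a stale log cannot keep bumping the cluster term and resetting the other members' election timers.

```go
package main

import (
	"go.etcd.io/etcd/raft" // raft package path as used by etcd v3.4 (assumed)
)

func main() {
	storage := raft.NewMemoryStorage()
	c := &raft.Config{
		ID:              1,
		ElectionTick:    10,
		HeartbeatTick:   1,
		Storage:         storage,
		MaxSizePerMsg:   1024 * 1024,
		MaxInflightMsgs: 256,
		// With PreVote enabled, a node runs a pre-election and only starts a
		// real election (and bumps its term) if a quorum says it could win.
		PreVote: true,
	}
	n := raft.StartNode(c, []raft.Peer{{ID: 1}, {ID: 2}, {ID: 3}})
	defer n.Stop()
}
```

Note that pre-vote only changes when a term bump is allowed; it does not change the election timeouts themselves.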
Right, this seems like a clear case for …
No, there have been no real changes to the 3.4 binary recently. All the changes are related to tests and metrics.
It's expected behaviour when pre-vote isn't enabled. I think we should …
A lower expected QPS for 3.4 may hide other potential issues that impact availability. An improvement in raft sounds good, but is it possible to just cherry-pick this change? I don't see a 3.4 release branch in the raft repo.
I think that instead of forcing prevote in the robustness tests, we should go with #17775.
Done
Bug report criteria
What happened?
We got 2 consecutive robustness test failures on v3.4:
https://github.com/etcd-io/etcd/actions/runs/7928589548/job/21647130490
https://github.com/etcd-io/etcd/actions/runs/7940776245/job/21682419728
The cause was low average traffic: the cluster was unavailable for longer than normal and required multiple leader elections.
What did you expect to happen?
The cluster should recover quickly after a member crash and not require multiple leader elections.
How can we reproduce it (as minimally and precisely as possible)?
No repro yet
Anything else we need to know?
I expect that this might be due to v3.4 not using --pre-vote, which is the default in v3.5.
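For context, a hedged sketch of how a single v3.4 member could be started with pre-vote turned on via the embed package. This assumes embed.Config exposes the --pre-vote flag as a `PreVote` field; the data directory name is hypothetical and the actual test harness may start members differently.

```go
package main

import (
	"log"

	"go.etcd.io/etcd/embed" // etcd v3.4 embed package path (assumed)
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "member-0.etcd" // hypothetical data directory
	// Assumption: PreVote mirrors the --pre-vote flag, which defaults to
	// false in v3.4 and to true in v3.5.
	cfg.PreVote = true

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()

	<-e.Server.ReadyNotify()
	log.Println("etcd member is ready with pre-vote enabled")
}
```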
Etcd version (please run commands below)
Etcd configuration (command line flags or environment variables)
paste your configuration here
Etcd debug information (please run commands below, feel free to obfuscate the IP address or FQDN in the output)
Relevant log output