Skip leadership check if the etcd instance is active processing heartbeats #18428
Conversation
Summary

This PR resolves the performance issue #18069. Results are compared across:

- Main branch
- This PR
- Revert the #16822 (checked out …)
cc @fuweid @jmhbnz @serathius @qixiaoyang0 @ranandfigma @caesarxuchao This PR resolves the performance issue with a minimal change.
Codecov Report: All modified and coverable lines are covered by tests ✅

... and 23 files with indirect coverage changes

@@            Coverage Diff             @@
##             main   #18428      +/-   ##
==========================================
+ Coverage   68.83%   68.85%   +0.02%
==========================================
  Files         420      420
  Lines       35475    35490      +15
==========================================
+ Hits        24420    24438      +18
+ Misses       9631     9621      -10
- Partials     1424     1431       +7
Force-pushed from 1da8809 to b8b0cf8: …beat (Signed-off-by: Benjamin Wang <benjamin.ahrtr@gmail.com>)
Thanks, the implementation looks clean. LGTM.
I have one question: imagine a scenario where leadership is lost because two candidates are fighting over it with similar terms and equal votes. Does that mean that after merging this, the new vote will start with more delay? And possibly the cluster will be down?
I know this scenario is hard to reproduce; I'm asking to find out the behavior.
This PR has nothing to do with the raft protocol itself. It just tries to avoid unnecessary leadership checks.
LGTM. Thanks for the quick fix!
LGTM - Nice work @ahrtr
LGTM. Thanks, Benjamin.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ahrtr, caesarxuchao, ivanvc, jmhbnz. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Thanks all for the review.
@@ -904,10 +904,26 @@ func (s *EtcdServer) revokeExpiredLeases(leases []*lease.Lease) {
	})
}

// isActive checks if the etcd instance is still actively processing the
// heartbeat message (ticks). It returns false if no heartbeat has been
// received within 3 * tickMs.
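Since the capture only shows the new doc comment, here is a minimal, self-contained sketch of the mechanism it describes: record the time of the last processed tick and treat the node as active if that was within 3 * tickMs. The tickTracker type and all names below are illustrative assumptions, not the PR's actual code.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// tickTracker records the wall-clock time of the most recent raft heartbeat
// tick; isActive treats the node as healthy if a tick arrived within
// 3 * tickMs, mirroring the threshold described in the comment above.
type tickTracker struct {
	mu       sync.Mutex
	lastTick time.Time
	tickMs   uint
}

// recordTick is called whenever a heartbeat tick is processed.
func (t *tickTracker) recordTick() {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.lastTick = time.Now()
}

// isActive returns false if no tick has been recorded within 3 * tickMs.
func (t *tickTracker) isActive() bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	return time.Since(t.lastTick) < 3*time.Duration(t.tickMs)*time.Millisecond
}

func main() {
	tr := &tickTracker{tickMs: 100}
	tr.recordTick()
	fmt.Println(tr.isActive()) // true: a tick was just recorded
	time.Sleep(400 * time.Millisecond)
	fmt.Println(tr.isActive()) // false: more than 3 * tickMs (300ms) elapsed
}
```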
Why 3 election ticks? I think the correct value is somewhere between TickMs (default 100ms) and ElectionMs (default 1s), meaning the 3 makes sense for the default config, but I would argue it can be problematic for cases where ElectionMs < 3 * TickMs.
> Why 3 election ticks?

Read #18428 (comment).

> I would argue it can be problematic for cases where ElectionMs < 3 * TickMs

I am not worried about this:
- checking activity should only be related to tickMs.
- also, if ElectionMs < 3 * TickMs, it means it's a very inappropriate election value.
> also if ElectionMs < 3 * TickMs, it means it's a very inappropriate election value.

Then let's codify that it is inappropriate. Either working or failing validation.
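If such a validation rule were added, it could look roughly like the sketch below; validateTickConfig and its error message are hypothetical and not taken from this PR or from etcd's codebase.

```go
package main

import "fmt"

// validateTickConfig rejects configurations where the election timeout is
// shorter than 3 heartbeat ticks, the window the activity check relies on.
// This is an illustrative sketch, not etcd's actual configuration validation.
func validateTickConfig(tickMs, electionMs uint) error {
	if electionMs < 3*tickMs {
		return fmt.Errorf("election timeout [%dms] should be at least 3 times the heartbeat interval [%dms]", electionMs, tickMs)
	}
	return nil
}

func main() {
	fmt.Println(validateTickConfig(100, 1000)) // <nil>: default config passes
	fmt.Println(validateTickConfig(500, 1000)) // error: 1000 < 3 * 500
}
```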
// ensureLeadership checks whether current member is still the leader.
func (s *EtcdServer) ensureLeadership() bool {
	lg := s.Logger()

	if s.isActive() {
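The hunk above is cut off at the new early-return branch. Below is a self-contained sketch of the control flow it introduces, with both function parameters standing in for etcd internals; names, wiring, and the fallback check are assumptions, not the PR's exact code.

```go
package main

import "fmt"

// ensureLeadership sketches the flow added by this PR: when the member is
// still actively processing heartbeat ticks, the more expensive leadership
// confirmation is skipped entirely.
func ensureLeadership(isActive func() bool, confirmLeadership func() bool) bool {
	if isActive() {
		// Ticks are still arriving, so treat the member as healthy and skip
		// the extra leadership check.
		return true
	}
	// Fall back to the original, stricter confirmation path.
	return confirmLeadership()
}

func main() {
	active := func() bool { return true }
	confirm := func() bool {
		fmt.Println("running the full leadership check")
		return true
	}
	// With an active node the full check is never invoked.
	fmt.Println(ensureLeadership(active, confirm)) // true
}
```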
This is an incorrect assumption for 5-node clusters; getting a tick from 1 member is not an effective way to confirm that no other leader was elected. Imagine a 3/2 network split with the leader on the side of 2. With your change the leader can continue to think it's active by receiving ticks from 1 member, while on the other side of the network 3 members have already elected a new leader.
It has nothing to do with how many nodes the cluster has. The intention is to guard against the case where the node gets stuck by itself, e.g. a stalled write.

> Imagine a 3/2 network split with the leader on the side of 2. With your change the leader can continue to think it's active by receiving ticks from 1 member

It's active, but it won't be a leader any more. It will step down automatically in such a case due to losing quorum.
Note that the raft protocol handles the network partition case perfectly.
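Background for the step-down argument: go.etcd.io/raft/v3 exposes a CheckQuorum option that makes a leader step down when it has not heard from a quorum of followers for an election timeout. The configuration below only illustrates that knob; the values and surrounding code are assumptions, not etcd's bootstrap code.

```go
package main

import (
	"fmt"

	"go.etcd.io/raft/v3"
)

func main() {
	// With CheckQuorum set, the raft leader steps down if it has not heard
	// from a quorum of its followers for an election timeout, which is the
	// mechanism behind "it will step down automatically ... due to losing
	// quorum" above. Tick values are illustrative (defaults: heartbeat 100ms,
	// election 1000ms => HeartbeatTick 1, ElectionTick 10).
	cfg := &raft.Config{
		ID:              1,
		ElectionTick:    10,
		HeartbeatTick:   1,
		CheckQuorum:     true,
		MaxSizePerMsg:   1 << 20,
		MaxInflightMsgs: 256,
	}
	fmt.Println("CheckQuorum enabled:", cfg.CheckQuorum)
}
```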
> It's active, but it won't be a leader any more.

We are talking about the 1 second right after a network partition: until health probes fail for ElectionMs, the old leader will still think it's a leader before it resigns.

The proposed solution is not compatible with etcd's configuration options and doesn't work for 5-node clusters. Please consider reverting and redesigning.
Don't agree. Pls see my comments above. The only minor possible improvement is to guard the threshold < electionTimeout, but I am not worried about it, as mentioned in #18428 (comment).
Resolves #18069.

Will post benchmark results later.

Please read https://github.com/etcd-io/etcd/blob/main/CONTRIBUTING.md#contribution-flow.