Do election in order based on failed primary rank to avoid voting conflicts #1018

Open · wants to merge 4 commits into base: unstable

Conversation

enjoy-binbin (Member)

When multiple primary nodes fail simultaneously, the cluster cannot recover
within the default effective time (the data_age limit). The main reason is
that the vote is not ranked among the replicas of the different failed
primaries, which causes too many epoch conflicts.

Therefore, we introduce a ranking based on the failed primary node name. A new
failed_primary_rank variable holds the rank of this instance's failed primary
within the list of all failed primaries. The rank is used during failover: the
failover election packets are sent in rank order, which effectively avoids the
voting conflicts.
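
For readers skimming the change, here is a minimal standalone sketch of the ranking idea. The toy node names and plain strcmp stand in for the code base's memcmp over CLUSTER_NAMELEN-byte names, and the real patch walks server.cluster->nodes instead of an array:

#include <stdio.h>
#include <string.h>

/* Toy model: each replica ranks its own failed primary among all failed
 * primaries by node name, so the replicas of different shards start their
 * elections in a deterministic, staggered order. */
static int failed_primary_rank(const char *my_primary,
                               const char *failed[], int nfailed) {
    int rank = 0;
    for (int i = 0; i < nfailed; i++) {
        /* Every failed primary whose name sorts before ours pushes our
         * election slot back by one. */
        if (strcmp(failed[i], my_primary) < 0) rank++;
    }
    return rank;
}

int main(void) {
    const char *failed[] = {"3f1c...", "a09e...", "c6a7..."};
    for (int i = 0; i < 3; i++)
        printf("replica of %s gets failed_primary_rank %d\n", failed[i],
               failed_primary_rank(failed[i], failed, 3));
    return 0;
}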

@@ -64,3 +64,34 @@ start_cluster 3 4 {tags {external:skip cluster} overrides {cluster-ping-interval
}

} ;# start_cluster

enjoy-binbin (Member Author)

This test may be time-consuming. It basically cannot pass before the patch, but passes locally once the patch is applied.

@enjoy-binbin enjoy-binbin added the run-extra-tests Run extra tests on this PR (Runs all tests from daily except valgrind and RESP) label Sep 11, 2024

codecov bot commented Sep 14, 2024

Codecov Report

Attention: Patch coverage is 96.66667% with 1 line in your changes missing coverage. Please review.

Project coverage is 70.62%. Comparing base (09def3c) to head (c6a71b5).

Files with missing lines   Patch %   Lines
src/cluster_legacy.c       96.66%    1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##           unstable    #1018      +/-   ##
============================================
+ Coverage     70.61%   70.62%   +0.01%     
============================================
  Files           114      114              
  Lines         61664    61694      +30     
============================================
+ Hits          43541    43571      +30     
  Misses        18123    18123              
Files with missing lines   Coverage Δ
src/cluster_legacy.c       86.17% <96.66%> (-0.10%) ⬇️

... and 9 files with indirect coverage changes

@PingXie (Member) left a comment:
LGTM overall. I like this idea. Thanks @enjoy-binbin!

continue;
}

if (memcmp(node->name, myself->replicaof->name, CLUSTER_NAMELEN) < 0) rank++;
PingXie (Member)
Does it make sense to sort by shard_id instead? replicaof is not as reliable or up-to-date as shard_id: there is chain replication, and a replicaof cycle is still possible.

enjoy-binbin (Member Author)
Seems to make sense, I'll think about it later.
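
For illustration, the suggested variant would presumably just compare shard ids instead of node names in the loop above. The shard_id field and its length here are assumptions about clusterNode, not code from this PR:

/* Hypothetical: rank by the failed primary's shard_id rather than its node
 * name. A shard id is fixed for the whole shard, so the ordering is not
 * affected by chained replication or a stale replicaof pointer. */
if (memcmp(node->shard_id, myself->replicaof->shard_id, CLUSTER_NAMELEN) < 0) rank++;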

 * Specifically 0.5 seconds * rank. This way the replicas of those failed
 * primaries will be elected in rank order, which avoids the vote conflicts. */
server.cluster->failover_failed_primary_rank = clusterGetFailedPrimaryRank();
server.cluster->failover_auth_time += server.cluster->failover_failed_primary_rank * 500;
PingXie (Member)
Curious - how did you arrive at 500? Given that CLUSTERMSG_TYPE_FAILOVER_AUTH_REQUEST is broadcast and answered pretty much right away, unless the voter is busy, I would think the network round trip time between any two nodes should be significantly less than 50 ms for all deployments. I wonder if we could tighten it up a bit to like 250 or 200?

enjoy-binbin (Member Author)
The 500 is just an empirical value, taken from the existing delay quoted below. I usually assume that one election round can be completed within 500 ms to 1 s. Yes, I think the number may be adjustable, but I haven't experimented with it.

        server.cluster->failover_auth_time = mstime() +
                                             500 + /* Fixed delay of 500 milliseconds, let FAIL msg propagate. */
                                             random() % 500; /* Random delay between 0 and 500 milliseconds. */
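
To make the arithmetic concrete, here is a toy timeline with assumed values: T is the instant the FAIL state is reached, the jitter is fixed at 250 ms to stand in for random() % 500, and the existing data-age based delay is ignored. It shows how the new rank-based term staggers the elections of three shards that fail at the same time:

#include <stdio.h>

int main(void) {
    long T = 0;        /* millisecond at which FAIL is flagged */
    long jitter = 250; /* stands in for random() % 500 */
    for (int rank = 0; rank < 3; rank++) {
        /* fixed 500 ms delay + jitter + the patch's rank * 500 stagger */
        long auth_time = T + 500 + jitter + rank * 500L;
        printf("failed_primary_rank %d -> vote requests start at T+%ld ms\n",
               rank, auth_time);
    }
    return 0;
}

With ranks 0, 1, and 2 the elections start roughly at T+750, T+1250, and T+1750 ms, so each shard gets a window of about 500 ms, which matches the assumption above that one election round completes within 500 ms to 1 s.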
