
[Enhancement] Try running command again after renewing the redis cluster slot cache #2195

Closed

Conversation

diracccc

Consider the following scenario in a Redis cluster: a master node goes down and a replica is promoted to master, meaning the cluster topology has changed. Requests sent to the previous master will fail and be retried, but in this scenario the retries can never succeed.
Currently, Jedis renews the slots cache only on the last retry, releases the invalid Jedis instance from the pool, and then throws a JedisClusterMaxAttemptsException, so the request fails.
It may be better to give the command a chance to run again after the cluster's slots cache has been renewed, so I added this logic: on the second-to-last attempt, renew the slots cache, then run the last retry. A sketch of the idea follows.
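For illustration, here is a minimal Java sketch of the proposed control flow, not the actual patch: the class, field, and method names (`RetryWithRenewSketch`, `execute`, `maxAttempts`) are hypothetical stand-ins for the `JedisClusterCommand` internals; only the Jedis types are real.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSlotBasedConnectionHandler;
import redis.clients.jedis.exceptions.JedisClusterMaxAttemptsException;
import redis.clients.jedis.exceptions.JedisConnectionException;

// Hedged sketch only: the real logic lives in JedisClusterCommand.runWithRetries;
// the names below are simplified stand-ins, not the actual Jedis internals.
abstract class RetryWithRenewSketch<T> {

  private final JedisSlotBasedConnectionHandler connectionHandler;
  private final int maxAttempts;

  RetryWithRenewSketch(JedisSlotBasedConnectionHandler connectionHandler, int maxAttempts) {
    this.connectionHandler = connectionHandler;
    this.maxAttempts = maxAttempts;
  }

  // The concrete cluster command (e.g. GET, SET) to run on a connection.
  abstract T execute(Jedis connection);

  T runWithRetries(int slot) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try (Jedis connection = connectionHandler.getConnectionFromSlot(slot)) {
        return execute(connection);
      } catch (JedisConnectionException e) {
        if (attempt == maxAttempts - 1) {
          // Proposed change: after the second-to-last failure, assume the
          // topology may have changed (e.g. a replica was promoted) and
          // refresh the slot cache so the final retry targets the new master.
          connectionHandler.renewSlotCache();
        }
      }
    }
    // Every attempt failed, including the one made after the renewed cache.
    throw new JedisClusterMaxAttemptsException("No more cluster attempts left.");
  }
}
```

The key difference from the current behavior is the ordering: the slot cache is renewed before the final attempt rather than during it, so at least one retry runs against the refreshed topology.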

@wangminpku

Dear diracccc (@diracccc) and the Jedis team (@xetorthio, @HeartSaVioR, @sazzad16),

We are working on automatically identifying and clustering related pull requests for code review. We have found that PR#1763 and PR#1728 might be related to this one, because both of them modified src/main/java/redis/clients/jedis/JedisClusterCommand.java for a similar purpose.

We would really appreciate it if you could help us validate whether the information from PR#1763 and PR#1728 is helpful when reviewing the current PR.

Thank you very much for your time!

@sazzad16
Collaborator

sazzad16 commented Apr 6, 2021

The same goal is now achieved by #2358

@sazzad16 sazzad16 closed this Apr 6, 2021
@sazzad16 sazzad16 added this to the 3.6.0 milestone Apr 6, 2021