Persistence - Remove Unowned Keys #568
base: unstable
Conversation
Force-pushed from 2eb2028 to eb85e5f
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           unstable     #568      +/-   ##
============================================
+ Coverage     70.35%   70.59%   +0.23%
============================================
  Files           112      114       +2
  Lines         61467    61703     +236
============================================
+ Hits          43248    43562     +314
+ Misses        18219    18141      -78
We should also implement a test for this. I know it will probably be a bit tricky. I'm thinking of:
- Create an RDB file with data in all slots.
- Start a node with it using a nodes.conf that doesn't own all the slots.
If we move the deletion after
@madolson gentle ping for additional input
@hpatro Thanks for reviewing this PR. I just rebased it to resolve conflicts and applied changes per your comments; please take a look.
Mostly LGTM, thanks @singku!
For cluster, after startup loading, remove keys that shouldn't be served by this server based on the cluster's slot assignment. Signed-off-by: Liang Tang <tangliang@google.com>
@madolson Could you take a pass? LGTM.
Gentle ping.
Co-authored-by: Harkrishn Patro <bunty.hari@gmail.com> Signed-off-by: Ping Xie <pingxie@outlook.com>
For a more generic solution, I think we need to shed unowned keys during loading. The current fix drops the unowned keys after loading is complete, but the node might have already OOM'd before reaching that point.
+1
This change does need to be tested in cluster mode. Have you tried out @madolson's recommendation?
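For illustration, here is a minimal, self-contained C sketch of what shedding unowned keys during load (the recommendation above) could look like. None of this is the PR's or the server's actual code: `owned_slots` and `keep_key_during_load()` are hypothetical stand-ins, and a real loader would hook the check into the RDB key-insertion path. The slot math — CRC16 (XModem variant) over the key, or over its `{hash tag}` if present, modulo 16384 — follows the cluster key-hashing spec.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_SLOTS 16384

/* CRC16 (XModem variant: poly 0x1021, init 0), the checksum cluster
 * mode uses to map keys to hash slots. */
static uint16_t crc16(const char *buf, int len) {
    uint16_t crc = 0;
    for (int i = 0; i < len; i++) {
        crc ^= (uint16_t)((unsigned char)buf[i] << 8);
        for (int j = 0; j < 8; j++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Map a key to its hash slot, honoring non-empty "{tag}" hash tags. */
static unsigned int key_hash_slot(const char *key, int keylen) {
    const char *open = memchr(key, '{', (size_t)keylen);
    if (open) {
        const char *close =
            memchr(open + 1, '}', (size_t)(key + keylen - open - 1));
        if (close && close > open + 1)
            return crc16(open + 1, (int)(close - open - 1)) % NUM_SLOTS;
    }
    return crc16(key, keylen) % NUM_SLOTS;
}

/* Hypothetical bitmap of slots this node owns, one bit per slot. */
static unsigned char owned_slots[NUM_SLOTS / 8];

static int node_owns_slot(unsigned int slot) {
    return owned_slots[slot >> 3] & (1 << (slot & 7));
}

/* Called per key as it is decoded from the RDB stream: returning 0 tells
 * the loader to discard the value instead of inserting it, so unowned
 * keys never occupy memory in the first place. */
static int keep_key_during_load(const char *key, int keylen) {
    return node_owns_slot(key_hash_slot(key, keylen));
}

int main(void) {
    /* Demo assumption: this node owns only the first half of the slots. */
    memset(owned_slots, 0xff, sizeof(owned_slots) / 2);
    const char *keys[] = {"user:1000", "{user:1000}.cart", "foo"};
    for (int i = 0; i < 3; i++) {
        int len = (int)strlen(keys[i]);
        printf("%-18s slot=%5u keep=%d\n", keys[i],
               key_hash_slot(keys[i], len),
               keep_key_during_load(keys[i], len));
    }
    return 0;
}
```

Note that `user:1000` and `{user:1000}.cart` land in the same slot, since only the hash-tag content is hashed for the latter — any filter has to reproduce that rule exactly or it will drop keys the node actually owns.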
Proposing this PR per the discussion in issue #539.

For cluster, after startup loading, remove keys that shouldn't be served by this server based on the cluster's slot assignment. Also added stat fields in the server to count the total removed keys and skipped slots from the last loading.
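As a companion sketch, the post-load sweep the description outlines could look roughly like the following. Again this is a toy illustration under assumptions, not the PR's implementation: `keys_per_slot` stands in for the server's real per-slot key storage, `node_owns_slot()` is the same hypothetical bitmap check as in the earlier sketch, and the stat field names are invented here.

```c
#include <stdio.h>
#include <string.h>

#define NUM_SLOTS 16384

/* Toy stand-ins for server state: how many keys loading left in each
 * slot, plus the hypothetical owned-slot bitmap from the earlier sketch. */
static long long keys_per_slot[NUM_SLOTS];
static unsigned char owned_slots[NUM_SLOTS / 8];

static int node_owns_slot(unsigned int slot) {
    return owned_slots[slot >> 3] & (1 << (slot & 7));
}

/* Stats the PR description mentions; these field names are assumptions. */
static long long stat_loading_removed_keys;
static long long stat_loading_skipped_slots;

/* After loading completes, walk every slot; for slots this node does not
 * own, drop whatever keys landed there and record what was discarded. */
static void remove_unowned_keys_after_load(void) {
    stat_loading_removed_keys = 0;
    stat_loading_skipped_slots = 0;
    for (unsigned int slot = 0; slot < NUM_SLOTS; slot++) {
        if (node_owns_slot(slot) || keys_per_slot[slot] == 0) continue;
        stat_loading_removed_keys += keys_per_slot[slot];
        keys_per_slot[slot] = 0; /* a real server would free the objects */
        stat_loading_skipped_slots++;
    }
    printf("removed %lld keys across %lld unowned slots\n",
           stat_loading_removed_keys, stat_loading_skipped_slots);
}

int main(void) {
    /* Demo assumption: own the first half of the slot space; pretend
     * loading left 10 keys in slot 100 (owned) and 7 in slot 9000 (not). */
    memset(owned_slots, 0xff, sizeof(owned_slots) / 2);
    keys_per_slot[100] = 10;
    keys_per_slot[9000] = 7;
    remove_unowned_keys_after_load();
    return 0;
}
```

The contrast with the previous sketch is the memory profile: here every loaded key stays resident until the sweep runs, which is exactly the OOM window the review comment above points out.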