Snapshot compaction threshold is relative to node count #525
Conversation
Nice! I think this will work. I was thinking it would be cool to add a test hook that we can flip to enable a 1 second sleep on the disk flushes or something, just as a sanity check that we never let the event channel block with a high event rate and a slow disk.
Happy to add that to this PR or after if you like.
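For reference, a minimal sketch of the kind of test hook being suggested, assuming a hypothetical snapshotter type and a testFlushDelay variable; none of these names are part of the actual Serf code:

package snapshot

import (
	"bufio"
	"os"
	"time"
)

// testFlushDelay is a hypothetical hook a test could set to simulate a slow
// disk; it is zero (disabled) in production.
var testFlushDelay time.Duration

type snapshotter struct {
	fh       *os.File
	buffered *bufio.Writer
}

// flush writes any buffered snapshot lines out to disk, sleeping first when
// the test hook is set, so a test can check that a slow disk plus a high
// event rate never blocks the event channel.
func (s *snapshotter) flush() error {
	if testFlushDelay > 0 {
		time.Sleep(testFlushDelay)
	}
	if err := s.buffered.Flush(); err != nil {
		return err
	}
	return s.fh.Sync()
}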
shutdownFlushTimeout = 250 * time.Millisecond

// snapshotBytesPerNode is an estimated bytes per node to snapshot
snapshotBytesPerNode = 128
This is conservative but that seems like a good thing.
For (my own) reference, the format for most entries is:
alive: [node name] [ip address]:[port]
Typically for IPv4 addresses we need 7 + 21 + len(node name) bytes. IPv6 is up to 47 bytes. That leaves 74 bytes for the hostname in the worst case, which is plenty.
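To make that arithmetic concrete, a quick sketch of the record size math; the function and names are illustrative, not the actual Serf code:

package main

import "fmt"

// aliveRecordLen returns the length of one "alive" snapshot line in the
// format described above: the 7-byte "alive: " prefix, the node name, a
// space, the address:port, and a trailing newline.
func aliveRecordLen(name, addr string) int {
	return len(fmt.Sprintf("alive: %s %s\n", name, addr))
}

func main() {
	// IPv4 worst case: 15-byte address + ":" + 5-digit port = 21 bytes.
	fmt.Println(aliveRecordLen("node-1", "192.168.100.200:65535")) // 36
	// IPv6 worst case is around 47 bytes for [address]:port, which still
	// leaves roughly 74 bytes of the 128-byte estimate for the hostname.
	fmt.Println(aliveRecordLen("node-1", "[2001:0db8:85a3:0000:0000:8a2e:0370:7334]:65535"))
}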
serf/snapshot.go
Outdated
// snapshotCompactionThreshold is the threshold we apply to
// the snapshot size estimate (nodes * bytes per node) before compacting.
snapshotCompactionThreshold = 3
This seems kinda high to me? I don't think it's a super big deal, but I just want to sanity check the edge cases here.
In an example with 1000 nodes, our estimate for the size is already probably ~2x reality for typical hostnames and IPv4 (see last comment), which means we can probably write 5-6k log entries, so it would take a really long time to hit compaction on a somewhat stable cluster.
Erring on the cautious side to minimise disk IO is a good call, but I wonder if for really huge clusters we'll end up with snapshots that actually take significant time to read in on serf startup again?
Back of envelope maths: a cluster with 50k nodes would not compact until its snapshot is 50,000 * 128 * 3 = 19.2MB, which in typical cases (60 byte records) would mean something like 320k serf events recorded. I doubt that's a huge deal to read on startup.
It seems a factor of 2 would be plenty to minimize IO, especially given the min size prevents really small clusters from doing it constantly.
This is a bit of a bikeshed - it's fine like this, I'm just trying to think through why it needs to be that high.
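For context, a rough sketch of how a node-count-relative threshold like this plausibly combines with the minimum size; the names and the threshold value (a factor of 2, as suggested above) are assumptions for illustration, not the actual Serf implementation:

package snapshot

const (
	// snapshotBytesPerNode is the estimated bytes per node, as in the diff above.
	snapshotBytesPerNode = 128

	// snapshotCompactionThreshold is the multiplier applied to the size
	// estimate; 2 reflects the "factor of 2" suggestion above.
	snapshotCompactionThreshold = 2

	// minCompactSize keeps very small clusters from compacting constantly.
	minCompactSize = 128 * 1024
)

// shouldCompact reports whether the snapshot file (currently offset bytes)
// has outgrown the per-node estimate times the threshold, with a floor so
// tiny clusters do not compact on every few writes.
func shouldCompact(offset int64, aliveNodes int) bool {
	limit := int64(aliveNodes) * snapshotBytesPerNode * snapshotCompactionThreshold
	if limit < minCompactSize {
		limit = minCompactSize
	}
	return offset > limit
}

With 50k nodes and a factor of 2, the trigger point in this sketch would be 50,000 * 128 * 2 = 12.8MB rather than 19.2MB.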
@banks Made the threshold less conservative. Feel free to add a hook test after merge!
This includes fixes that improve gossip scalability on very large (> 10k node) clusters.

The Serf changes:
- take snapshot disk IO out of the critical path for handling messages (hashicorp/serf#524)
- make snapshot compaction much less aggressive - the old fixed threshold caused snapshots to be constantly compacted (synchronously with request handling) on clusters larger than about 2000 nodes! (hashicorp/serf#525)

The Memberlist changes:
- prioritize handling alive messages over suspect/dead to improve stability, and handle the queue in LIFO order to avoid acting on info that's already stale in the queue by the time we handle it (hashicorp/memberlist#159)
- limit the number of concurrent pushPull requests being handled at once to 128. In one test scenario with 10s of thousands of servers we saw channel and lock blocking cause over 3000 pushPulls at once, which ballooned the memory of the server because each pushPull contained a de-serialised list of all known 10k+ nodes and their tags, for a total of about 60 million objects and 7GB of memory stuck. While the rest of the fixes here should prevent the same root cause from blocking in the same way, this prevents any other bug or source of contention from allowing pushPull messages to stack up and eat resources. (hashicorp/memberlist#158)
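As a generic illustration of the concurrency cap described above, a buffered-channel semaphore is one common way to bound in-flight handlers; the names (maxPushPullRequests, pushPullSem, handlePushPullConn) are illustrative and this is not the actual memberlist code:

package gossip

// maxPushPullRequests caps how many pushPull exchanges may be handled at once.
const maxPushPullRequests = 128

// pushPullSem is a counting semaphore implemented as a buffered channel.
var pushPullSem = make(chan struct{}, maxPushPullRequests)

// handlePushPullConn runs handle only if fewer than maxPushPullRequests
// pushPulls are already in flight; otherwise it rejects the request so a
// flood of them cannot balloon memory.
func handlePushPullConn(handle func()) {
	select {
	case pushPullSem <- struct{}{}:
		defer func() { <-pushPullSem }()
		handle()
	default:
		// Too many concurrent pushPull requests; drop this one.
	}
}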
This PR replaces the static maximum snapshot size with a threshold relative to the number of nodes, to avoid frequent compaction on large clusters.
Depends on #524