Increase inotify default limits #2335
Conversation
@markusboehme and @foersleo - would be great if you could weigh in on this change too. Thanks!
I don't see any downsides to these changes. I have to say I was surprised by the high value for the vm.max_map_count sysctl, though. I wouldn't have thought of a process needing that many VMAs, but e.g. OpenSearch recommends configuring a minimum of 256k. Aside from the typical memory-mapped files, memory allocators will use anonymous maps to request memory from the kernel. To satisfy my curiosity, it'll be interesting to see which of these is the reason for OpenSearch's recommendation.
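As a quick sanity check, something like this shows how many mappings a process currently holds (the PID is just a placeholder; each line of /proc/&lt;pid&gt;/maps is one VMA counted against vm.max_map_count):

```sh
# Count the memory mappings of a process (placeholder PID 1234);
# each line in /proc/<pid>/maps is one VMA, and vm.max_map_count
# caps how many a single process may have.
wc -l /proc/1234/maps
```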
Not blocking approval on it, but as a minor improvement I'd prefer to see the changes split into two commits. The bump in inotify resource limits is independent of the limit on the number of VMAs per process.
Thanks @markusboehme! I've dropped the vm.max_map_count change from this PR.
We have had several reports from users that the inotify limits (fs.inotify.max_user_instances and fs.inotify.max_user_watches) are too low for their workloads, causing them to get errors when deploying pods. Also bumping up vm.max_map_count so all three settings match what is currently used for Amazon Linux, to make sure we have a consistent experience.

The user data settings can be used to raise (or lower) these defaults if an end user needs to fine-tune their settings.

Signed-off-by: Sean McGinnis <stmcg@amazon.com>
This looks good to me and seems to be what kubernetes/kops and other container-focused distros set their limits to.

Although, more generally, I am curious: why these numbers? Simply because they are a sane middle ground, not too high, not too low?
Part middle ground, so it just works for most end users without needing to make adjustments. Part because we want to match what Amazon Linux uses as a default, so the user experience is consistent if they switch between distros.
Issue number:
Closes #1525
Description of changes:
We have had several reports from users that the inotify limits
(fs.inotify.max_user_instances and fs.inotify.max_user_watches) are too
low for their workloads, causing them to get errors when deploying pods.
The user data settings can be used to raise (or lower) these defaults if
an end user needs to fine-tune them.
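For illustration, an override through user data might look roughly like the following; the settings.kernel.sysctl key names are the ones discussed in this change, while the values and file name are only examples:

```sh
# Sketch of a Bottlerocket user data fragment (user data is TOML).
# The values shown are illustrative, not necessarily the new defaults.
cat <<'EOF' > user-data.toml
[settings.kernel.sysctl]
"fs.inotify.max_user_instances" = "8192"
"fs.inotify.max_user_watches" = "524288"
EOF
```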
Related Amazon Linux changes:
Testing done:
Made changes and built an image to make sure there were no errors.
Published an AMI and spun up an EKS cluster. Connected to the console, went to the admin container, and used
sheltie to verify that the values returned match what is expected for these settings.
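For reference, the verification from the admin container looks roughly like this (the exact output depends on the image under test):

```sh
# sheltie drops into a root shell in the host's namespaces.
sudo sheltie
# Run inside the sheltie shell to read the effective values:
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
```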
Terms of contribution:
By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.