Use system fs.file-max as limit instead of hard-coded value #142
Conversation
Generally looks great, but I have a question about whether we should use all of fs/file-max.
```go
maxConns, err := sysctl.New().GetSysctl("fs/file-max")
if err != nil {
	glog.Errorf("unexpected error reading system maximum number of open file descriptors (fs.file-max): %v", err)
	return 0
```
Maybe a comment here that this means "don't render", not that the max will be 0 :-)
done
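Presumably the "done" refers to a comment added above the early return. A plausible sketch of what it might look like (the exact wording is an assumption, not the author's actual diff):

```go
if err != nil {
	glog.Errorf("unexpected error reading system maximum number of open file descriptors (fs.file-max): %v", err)
	// A return value of 0 means "don't render the directive"; it does
	// not mean the maximum number of connections will be 0.
	return 0
}
```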
```go
	return 0
}

return maxConns
```
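Reading the two fragments above together, the helper under review appears to be roughly the following. This is a consolidated sketch for context; the package name, function name, and import paths are assumptions rather than the exact source:

```go
package config

import (
	"github.com/golang/glog"
	"k8s.io/kubernetes/pkg/util/sysctl"
)

// sysctlFSFileMax reads the system-wide limit on the number of open
// file descriptors (fs.file-max). Returning 0 signals that the value
// could not be read and the corresponding nginx directive should not
// be rendered.
func sysctlFSFileMax() int {
	maxConns, err := sysctl.New().GetSysctl("fs/file-max")
	if err != nil {
		glog.Errorf("unexpected error reading system maximum number of open file descriptors (fs.file-max): %v", err)
		return 0
	}
	return maxConns
}
```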
Should we have some "padding", e.g. maxConns - 1024? Also, is worker_rlimit_nofile per worker process? I honestly just don't know - I'm happy to follow your lead here if you say "this is correct"; I'm just not sure I follow the logic of letting a single nginx process use up all the FDs available to the whole system.
Excellent point.
> I honestly just don't know - I'm happy to follow your lead here if you say "this is correct", just not sure I follow the logic of letting a single nginx process use up all the FDs available to the whole system?
Actually, this process can only use the FDs that are available in the system. If nginx tries to use more than what is available, the log will just show "Too many open files", which means:
- the current fs.file-max value is low (the node is small)
- there is a process using/leaking descriptors

This setting tries to use the available resources (if needed) so that we don't have to scale the pod just because we are using the default values and cannot get more than a couple of thousand connections.
@justinsb I think this is an improvement. We can review this after a release.
OK, SGTM!
fixes kubernetes-retired/contrib#1966
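If the "padding" idea from the review were adopted, one way it might look is sketched below. This is a minimal, hypothetical follow-up, not the merged code: fdPadding and workerRlimitNofile are invented names, and 1024 comes from the reviewer's "maxConns - 1024" suggestion.

```go
// fdPadding leaves headroom so a single nginx process cannot consume
// every descriptor available to the whole node (1024 is the value
// suggested in the review, not a tuned number).
const fdPadding = 1024

// workerRlimitNofile computes a value for nginx's worker_rlimit_nofile
// directive from the system-wide limit, minus some padding.
func workerRlimitNofile() int {
	maxFiles := sysctlFSFileMax() // helper sketched earlier; 0 on error
	if maxFiles <= fdPadding {
		return 0 // 0 still means "don't render the directive"
	}
	return maxFiles - fdPadding
}
```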