
Need to run ulimit -n 3000000 before running m3dbnode #137

Closed
brndnmtthws opened this issue May 11, 2019 · 4 comments · Fixed by #147
Comments

@brndnmtthws

Currently the container starts m3dbnode regardless of the ulimit settings for the current user, and m3dbnode continuously logs a warning about the limits:

{"level":"warn","ts":1557587437.1545765,"msg":"invalid configuration found, refer to linked documentation for more information","url":"https://m3db.github.io/m3/operational_guide/kernel_configuration","error":"current value for RLIMIT_NOFILE(1048576) is below recommended threshold(3000000)\nmax value for RLIMIT_NOFILE(1048576) is below recommended threshold(3000000)"}
...

It's possible for m3dbnode to use the setrlimit system call to change the limit, but that's a separate issue.
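To illustrate the distinction between limits here: a process may raise its own soft RLIMIT_NOFILE up to the hard limit without any privileges (the same thing an in-process setrlimit call could do), while raising the hard limit requires CAP_SYS_RESOURCE. A minimal shell sketch — the `can_meet_threshold` helper is hypothetical, not part of m3dbnode:

```shell
# can_meet_threshold THRESHOLD: succeeds if the hard RLIMIT_NOFILE already
# allows raising the soft limit to THRESHOLD without extra privileges.
# (Hypothetical helper, not part of m3dbnode.)
can_meet_threshold() {
  hard="$(ulimit -Hn)"
  [ "$hard" = "unlimited" ] && return 0
  [ "$hard" -ge "$1" ]
}

if can_meet_threshold 3000000; then
  # Raising the soft limit below the hard limit needs no capabilities;
  # an in-process setrlimit call could do the same.
  ulimit -n 3000000
  echo "soft RLIMIT_NOFILE is now $(ulimit -Sn)"
else
  echo "hard limit $(ulimit -Hn) < 3000000; raising it needs CAP_SYS_RESOURCE"
fi
```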

A workaround is to patch each statefulset like this:

$ kubectl patch statefulset m3db-rep0 -p '{"spec":{"template":{"spec":{"containers":[{"name":"m3db-rep0","args":["-c","ulimit -n 3000000 && m3dbnode -f /etc/m3db/m3.yml"],"command":["/bin/sh"]}]}}}}'
statefulset.apps/m3db-rep0 patched
$ kubectl patch statefulset m3db-rep1 -p '{"spec":{"template":{"spec":{"containers":[{"name":"m3db-rep1","args":["-c","ulimit -n 3000000 && m3dbnode -f /etc/m3db/m3.yml"],"command":["/bin/sh"]}]}}}}'
statefulset.apps/m3db-rep1 patched
$ kubectl patch statefulset m3db-rep2 -p '{"spec":{"template":{"spec":{"containers":[{"name":"m3db-rep2","args":["-c","ulimit -n 3000000 && m3dbnode -f /etc/m3db/m3.yml"],"command":["/bin/sh"]}]}}}}'
statefulset.apps/m3db-rep2 patched
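The three near-identical patches above can also be generated in a small loop. The `build_patch` helper below is hypothetical and simply reproduces the JSON from the commands above:

```shell
# build_patch N: print the JSON patch for StatefulSet m3db-repN, wrapping
# m3dbnode in a shell so ulimit runs before it starts (same patch as above).
build_patch() {
  printf '{"spec":{"template":{"spec":{"containers":[{"name":"m3db-rep%s","command":["/bin/sh"],"args":["-c","ulimit -n 3000000 && m3dbnode -f /etc/m3db/m3.yml"]}]}}}}' "$1"
}

for i in 0 1 2; do
  # Guarded so the loop is a no-op where kubectl is unavailable.
  if command -v kubectl >/dev/null; then
    kubectl patch statefulset "m3db-rep${i}" -p "$(build_patch "$i")"
  fi
done
```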
@schallert
Collaborator

Hmm, this is a bit tougher now that our changes (#107) to lock down pod privileges are on a tagged release. I'll see if we can do something like an init container if the user sets a security context allowing setrlimit.
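One shape such a fix could take, assuming m3dbnode calls setrlimit itself once it has the capability — a sketch only, not the operator's actual change; the container name and surrounding nesting are illustrative:

```yaml
# Sketch: grant the m3dbnode container CAP_SYS_RESOURCE so an in-process
# setrlimit can raise RLIMIT_NOFILE. Field names follow the core/v1
# SecurityContext API; the container name is illustrative.
spec:
  template:
    spec:
      containers:
        - name: m3db
          securityContext:
            capabilities:
              add: ["SYS_RESOURCE"]
```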

@schallert
Collaborator

Ref m3db/m3#1666

@cirego

cirego commented Aug 16, 2019

I don't suppose there's any update on this? I'm currently working through deploying an M3 cluster on AWS and, even though the limits are set correctly within the VMs, I'm still seeing the "invalid configuration found" error messages in the logs.

I tried using podSecurityContext to enable sysctls but that didn't appear to work.

@cirego

cirego commented Aug 16, 2019

I have tried the following configurations and I'm still getting the errors listed above:

Trying workarounds from m3db/m3#1800.

Adding just the "SYS_RESOURCE" capability does not work:

  securityContext:
    capabilities:
      add: ["SYS_RESOURCE"]

Adding the "privileged" flag does not work:

  securityContext:
    privileged: true

Adding both "privileged" and "SYS_ADMIN" (inspiration from https://godoc.org/k8s.io/api/core/v1#SecurityContext comment on field AllowPrivilegeEscalation) does not work either:

  securityContext:
    privileged: true
    capabilities:
      add: ["SYS_ADMIN"]

Are the securityContext and podSecurityContext parameters actually applied, or just accepted? When I describe my pods, I don't see anything on the pod or StatefulSet indicating these values are in effect.
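One way to check what actually landed on the API objects: `kubectl describe` omits most securityContext fields, so querying the raw spec is more reliable. The helper name and pod name below are made up:

```shell
# show_security_context POD: print each container's securityContext as
# stored on the API server. (Hypothetical helper; kubectl describe does
# not render these fields.)
show_security_context() {
  kubectl get pod "$1" \
    -o jsonpath='{range .spec.containers[*]}{.name}: {.securityContext}{"\n"}{end}'
}

# Usage (pod name illustrative):
# show_security_context m3db-rep0-0
```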
