Latest AMI version lowers ulimit and breaks Elasticsearch #193
This fixes awslabs#193. This change retains the explicit limits via the docker daemon config file, but increases them to 65536.
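For context, a minimal sketch of what the relevant setting in /etc/sysconfig/docker could look like after this change, assuming the limit is passed to dockerd through its --default-ulimit flag (the real file in the AMI may contain other options):

```sh
# /etc/sysconfig/docker (sketch, not the exact file shipped in the AMI)
# Raise the default per-container open-file limit to 65536 (soft:hard).
OPTIONS="--default-ulimit nofile=65536:65536"
```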
I did some extensive testing, and it looks like the docker version packaged with the previous AMI didn't actually apply the configuration from /etc/sysconfig/docker. If you look at the docker process, it does not have the default limits applied to the process (from the previous AMI):
Here it is on the new AMI:
I think the change assumed that it was being applied, which, if true, would have increased the limit. However, since it wasn't, it reduced it.
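For anyone wanting to repeat that check, one way to see which limits are actually applied to the running docker daemon is to read its /proc limits file (a sketch; it assumes the daemon process is named dockerd):

```sh
# Soft/hard "Max open files" limit of the running docker daemon process.
grep "Max open files" /proc/"$(pgrep -x dockerd)"/limits

# Command line dockerd was started with, to check whether --default-ulimit is present.
ps -o args= -p "$(pgrep -x dockerd)"
```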
Ran into the same issue with the amazon-eks-node-1.11-v20190220 image in eu-central-1. Switching back to amazon-eks-node-1.11-v20190211 seems to fix this.
Ran into the same issue and tried using v20190211 per @jgoeres, but the nodes didn't join and were giving authentication errors from Kubernetes, whereas the latest AMI works. I realised that I don't need to run Elasticsearch on EKS yet, so I will try again later and hopefully the EKS AMI will be fixed by then.
This is broken again. What was the last version that worked?
@gnydick there are now k8s 1.11.9 and 1.12.7 worker AMIs available; you could test them and see if they have the ulimit revert patch. https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html
They don't. It regressed.
The problem that originally reduced ulimits was introduced by #186 on 14 Feb and reverted by #206 on 26 Feb. If you are having problems with builds before/after that date range, it is probably a different issue. The patch to reverse the ulimit reduction was merged well before the 1.11.9 and 1.12.7 worker images were built, and @tabern commented in #206 that the revert would be in these new 1.11 images. You might need to start a new issue and investigate the cause if you still have problems @gnydick.
Solved with the latest AMI from: https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html
I've upgraded to ami-0f6f3929a9d7a418e (I have Kubernetes 1.10) - the issue still exists, any help?
What happened:
Upgrading to the latest AMI reduced the ulimit to 8192, which broke Elasticsearch deployments. Elasticsearch requires a ulimit of 65536.

What you expected to happen:
Upgrading AMIs shouldn't reduce the ulimit (which is the opposite effect of what PR #186 implies).

How to reproduce it (as minimally and precisely as possible):
Use the latest AMI amazon-eks-node-1.11-v20190220 - containers will have a max ulimit of 8192, not unlimited or 65536.

Anything else we need to know?:
This was broken by #186 - that PR reduced the default limit, rather than increasing it.
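To confirm the reduced limit described above, a quick check from inside a container could look like this (a sketch; the pod name and busybox image are only illustrative, any image with a shell works):

```sh
# Print the container's open-file ulimit; on the affected AMI this comes back
# well below the 65536 that Elasticsearch needs.
kubectl run ulimit-check --rm -it --restart=Never --image=busybox -- sh -c 'ulimit -n'

# Or directly on a worker node, bypassing Kubernetes:
docker run --rm busybox sh -c 'ulimit -n'
```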
Environment:
- AWS Region: us-east-1
- Instance type: m5.xlarge
- AMI: amazon-eks-node-1.11-v20190220