Worker nodes are not joining the cluster using version 2.3.0 #310
Comments
That's strange. I think it's related to #302. What is your AMI release? Perhaps in your AMI version the bootstrap.sh script doesn't support the --enable-docker-bridge option yet?
I am working with the AMI amazon-eks-node-1.11-v20190109, and indeed it doesn't have the --enable-docker-bridge option in bootstrap.sh.
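For context, a rough sketch of the kind of user data involved here — the flag usage, cluster name, and kubelet args below are placeholders, not copied from the module's actual template:

```bash
#!/bin/bash -xe
# Rough sketch only: approximately what module 2.3.0 asks the node to run at
# boot. The exact template differs; "my-cluster" and the kubelet args are
# placeholders.
/etc/eks/bootstrap.sh \
  --enable-docker-bridge 'true' \
  --kubelet-extra-args '--node-labels=env=test' \
  'my-cluster'
# On an AMI whose bootstrap.sh predates the --enable-docker-bridge option, the
# unknown flag can be picked up as the positional cluster-name argument, which
# is how "-i --enable-docker-bridge" ends up in the kubelet kubeconfig.
```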
I see. Maybe we just replace the
FYI @michaelmccord
@armandorvila v1.11-v20190211 doesn't have the ulimit bug. The AMI ID for us-east-1 is
@max-rocket-internet Huge +1 on this. It opens up a lot more options and does some good future-proofing for stuff like this too.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have issues
While creating a new cluster using version 2.3.0 of the module and the AMI ami-01e08d22b9439c15a for the worker nodes, not a single worker node joins the cluster.
I'm submitting a...
What is the current behaviour?
The EKS cluster, the autoscaling group, the launch configuration and the nodes are all created. However, when running kubectl get nodes there are no nodes at all.
Running cat /var/lib/kubelet/kubeconfig on any of the worker nodes, we can see a misconfiguration:
Instead of the cluster name, after the -i option we are getting --enable-docker-bridge.
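Roughly what that looks like on an affected node — an abridged, illustrative view rather than an exact dump:

```bash
# Abridged, illustrative view of the broken kubeconfig on a worker node.
cat /var/lib/kubelet/kubeconfig
# ...
# users:
# - name: kubelet
#   user:
#     exec:
#       command: /usr/bin/aws-iam-authenticator
#       args:
#         - "token"
#         - "-i"
#         - "--enable-docker-bridge"   # should be the EKS cluster name
```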
If this is a bug, how to reproduce? Please include a code sample if relevant.
Create a new cluster using the 2.3.0 version of the module and the AMI ami-01e08d22b9439c15a and run the commands sketched below: check the nodes, then SSH into one of the worker nodes and inspect the kubelet kubeconfig.
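A minimal sketch of those steps, assuming standard Terraform and kubectl tooling; the worker node address is a placeholder:

```bash
# Sketch of the reproduction steps; the worker IP below is a placeholder.

# 1. Apply the cluster with the module pinned to 2.3.0 and the workers using
#    ami-01e08d22b9439c15a:
terraform init
terraform apply

# 2. From a machine with kubectl configured against the new cluster — with
#    version 2.3.0 this returns no nodes:
kubectl get nodes

# 3. SSH into one of the worker nodes and inspect the kubelet kubeconfig:
ssh ec2-user@<worker-node-ip>
cat /var/lib/kubelet/kubeconfig
```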
What's the expected behaviour?
Worker nodes joining the cluster properly, just like they do with version 2.2.1 of the module.
Are you able to fix this problem and submit a PR? Link here if you have already.
Environment details
Any other relevant info