[EKS] [request]: VPC endpoint support for EKS API #298
Comments
Is there any news on this?
Any updates on this issue?
If you use EKS Managed Nodes, the bootstrapping process avoids the EKS describe-cluster call, and you can launch workers into a private subnet without outbound internet access as long as you setup the other required PrivateLink endpoints correctly.
Thanks Mike. Unfortunately managed nodes are not an option because they cannot be scaled to 0. We run some machine learning workloads that require scaling up ASGs with expensive VMs (x1.32xlarge) and we need to be able to scale them back to 0 once the workloads have completed.
Thanks for the feedback. Can you open a separate GH issue with that feature request for Managed Node Groups? Will keep this issue open as it's something we are researching.
@mikestef9 I'm interested in the managed nodes solution. What do you mean by "you can launch workers into a private subnet without outbound internet access as long as you setup the other required PrivateLink endpoints correctly"? Which PrivateLink endpoints are you referring to? Just the other service endpoints such as SQS and SNS that the applications running on the cluster may happen to use? Or do you mean that there are particular PrivateLink endpoints required to run EKS in private subnets with no internet gateway?
Hi @dsw88, in order for the worker node to join the cluster, you will need to configure VPC endpoints for ECR, EC2, and S3. See this GH repo https://github.com/jpbarto/private-eks-cluster created by an AWS Solutions Architect for a reference implementation. Note that only 1.13 and above EKS clusters have a kubelet version that is compatible with the ECR VPC endpoint.
@mikestef9 Thanks so much for the info, and thanks for the pointer to the private EKS cluster reference repository! I have one final question that I'm having a hard time figuring out how to deal with: How can I configure other hosts in this same private VPC to be able to talk to the cluster? Knowing the private DNS name isn't a huge deal, because I can just hard-code it into whatever needs to talk to the cluster. A bigger problem, however, is how a host in the private VPC can authenticate with the cluster. Currently when I use the AWS API to set up a kubeconfig with EKS, it includes the following snippet in the generated kubeconfig file:
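The snippet itself isn't reproduced above; as a rough illustration (with a placeholder cluster name and region), the kubeconfig generated by aws eks update-kubeconfig contains an exec credential plugin entry that makes kubectl run a command along these lines to fetch a token:

```sh
# Illustrative sketch only: this is roughly the command the exec plugin entry in the
# generated kubeconfig invokes; "my-cluster" and "us-east-1" are placeholders.
aws eks get-token --cluster-name my-cluster --region us-east-1
```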
As you can see, it called the EKS API to get a token that authenticates it with the cluster. That definitely presents a problem since my hosts in the private VPC also don't have access to the EKS API. Is there another way that I can authenticate to the cluster without EKS API access?
It seems that this repo uses unmanaged nodes though. I tried deploying it and it brought up a cluster without any nodes listed under the EKS web console. Is this correct?
@mikestef9 Thank you very much for this clue. Now I have a working setup with managed worker groups and no access to the Internet 🎉 I was not sure it was feasible, given what the documentation says.
Well, apparently it is. If someone needs working Terraform recipes, ping me stepan@vrany.dev.
@vranystepan great to hear you have this working. As part of our fix for #607 we will make sure to get our documentation updated.
This is still a real issue. I need to actually create and delete new clusters from private subnets with no NAT or egress gateways. I can create private endpoints for apparently every AWS service but EKS. This is a deep pain for some customers, as we have to build complicated workarounds to have traffic routed towards the EKS service, whereas every other AWS service is easily exposed with a private endpoint.
I agree with @duckie, this issue should not be closed yet. EKS support is laughable.
I agree that VPC endpoints are still very important, and this issue should be kept open. It is possible to run EKS clusters in private subnets with no internet egress, but it is not possible to manage those clusters from within that private VPC. We are limited in the tooling we can develop around EKS for lifecycle actions such as creating, updating, and deleting clusters because we can't perform those actions inside our private VPC. Please consider implementing a VPC endpoint for EKS! Thanks!
Hi,
Is there any status on this issue? This is a real problem for vendors that only use bootstrap.sh to perform automated EKS deployments, because our environments are private. I would like to know if anyone is working on this EKS private endpoint? Thanks
We have the problem too. We've built a private cluster in a private VPC with CDK (the VPC is connected to a Transit Gateway). CDK makes use of a custom resource Lambda for doing the kubeconfig update. When the cluster endpointAccess is private (or public and private), this Lambda is associated with the VPC (via ENIs). The Lambda function calls "aws eks update-kubeconfig" from "inside" the VPC, but is unable to access the cluster endpoint and fails with a timeout. All necessary VPC endpoints (according to the official EKS docs) are set (ecr.api, ecr.dkr, s3, ...).
+1
+1 The result would be
When I create a cluster with no internet access, I get the error below. Is there any update on VPC endpoint support for the EKS API?
Command used to create cluster:
Error message:
I need this as well. Is there a solution or a current workaround yet?
Commenting as well. An EKS VPC Endpoint would be a huge help. Have there been any updates recently?
Mike, what are the "other required endpoints"? Is there a list somewhere that says, "here are all of the endpoints that a managed node requires"?
@deitch imho the following VPC endpoints are required: the ECR API (ecr.api) and ECR Docker (ecr.dkr) interface endpoints, an EC2 interface endpoint, and an S3 gateway endpoint.
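For anyone setting this up by hand, a minimal sketch of creating those endpoints with the AWS CLI; the VPC, subnet, security group, and route table IDs are placeholders, and the service names assume us-east-1:

```sh
# Interface endpoints for ECR (API and Docker registry) and EC2; all IDs are placeholders.
for svc in ecr.api ecr.dkr ec2; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name "com.amazonaws.us-east-1.${svc}" \
    --subnet-ids subnet-0aaa subnet-0bbb \
    --security-group-ids sg-0ccc \
    --private-dns-enabled
done

# Gateway endpoint for S3, since ECR stores image layers in S3.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0ddd
```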
Cool thanks. Are the ECR endpoints needed only if you use containers from ECR, or are they a general requirement? This should be documented formally somewhere in AWS.
If you're using EKS, then ECR is required to bootstrap nodes. And because ECR stores images on S3 under the hood, you also have to get access to S3.
Much appreciated.
Are there any updates on this, team?
Cluster autoscaler, when running in a private EKS cluster, also experiences that problem:
After reading https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html |
Amazon EKS now supports AWS PrivateLink for the EKS management APIs.
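For reference, a minimal sketch of creating the EKS management-API endpoint with the AWS CLI; all IDs are placeholders and the service name assumes us-east-1. Note that this covers calls like aws eks describe-cluster, not traffic to the Kubernetes API server, which is governed by the cluster's private endpoint access setting.

```sh
# Interface endpoint for the EKS management APIs; all IDs below are placeholders.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.eks \
  --subnet-ids subnet-0aaa subnet-0bbb \
  --security-group-ids sg-0ccc \
  --private-dns-enabled
```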
Hello @mikestef9, maybe this doc should be updated so that EKS Pod Identity works with limited internet access.
Yes, good call, will get that updated.
@mikestef9 well, it's a good call, but it needs to be described in a bit more detail :)
Tell us about your request
VPC endpoint support for EKS, so that worker nodes can register with an EKS-managed cluster without requiring outbound internet access.
Which service(s) is this request for?
EKS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
Worker nodes based on the EKS AMI run bootstrap.sh to connect themselves to the cluster. As part of this process, aws eks describe-cluster is called, which currently requires outbound internet access. I'd love to be able to turn off outbound internet access but still easily bootstrap worker nodes without providing additional configuration.
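For context, this is roughly the lookup the bootstrap flow depends on when the endpoint and CA aren't supplied up front: it fetches the API server endpoint and the cluster CA data from the EKS API (the cluster name is a placeholder):

```sh
# Roughly the call bootstrap.sh relies on when endpoint/CA aren't passed in;
# "my-cluster" is a placeholder.
aws eks describe-cluster --name my-cluster \
  --query 'cluster.{endpoint: endpoint, certificateAuthorityData: certificateAuthority.data}'
```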
Are you currently working around this issue?
Yes, by providing additional configuration to bootstrap.sh.
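A rough sketch of that kind of workaround, using the flags exposed by the EKS-optimized AMI's bootstrap.sh so it can skip the describe-cluster call; all values are placeholders:

```sh
# Supplying the endpoint and CA data directly lets bootstrap.sh avoid calling
# "aws eks describe-cluster"; the cluster name, endpoint, and CA file are placeholders.
/etc/eks/bootstrap.sh my-cluster \
  --apiserver-endpoint https://EXAMPLE1234567890.gr7.us-east-1.eks.amazonaws.com \
  --b64-cluster-ca "$(cat /tmp/cluster-ca.b64)"
```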
Additional context