EKS Private Endpoint Support #22
Could you clarify if this is for the EKS API or the Kubernetes API, or both? If it's the Kubernetes API, that is great. Currently, having to reach out over the internet from worker nodes in private subnets to the EKS masters is what's stopping us from using EKS.
When you create a VPC endpoint, a record resolving to a VPC IP address is generated in the private DNS. If the system were designed this way, the internal and external endpoints wouldn't need to know about each other and could coexist. Isn't this how all other VPC endpoints work?
This roadmap item is specifically for the Kubernetes cluster API endpoint, not the EKS API.
Great news, that's what we need. Thanks for the update.
What is the estimated timeline for this? It seems this would be a major security improvement.
This would be super helpful, as we have to limit worker exposure to the internet for some clusters.
I cannot use EKS at all if the API is exposed to the internet; it's a non-starter. This feature is a must for me.
Same as above. To us, this is a complete showstopper.
Doesn't this blog article already suggest it exists? I don't see a way to forward your PrivateLink NLB to the EKS API endpoint, though, or the actual endpoint listed under AWS services as with ECS.
It seems this is partially supported? I saw there are two ENIs created in my EKS VPC. If I modify a worker node manually (changing the hosts file to point the EKS API server DNS name to the private IP of an ENI, or changing the kubeconfig file to use the private IP as the server address), it can connect to the master through the private IP address successfully. It's just that the EKS master's DNS currently resolves to public IP addresses.
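(For anyone wanting to reproduce the observation above, a rough sketch using the AWS CLI; the cluster name is a placeholder, and the ENI description format is an observed convention rather than a documented contract.)

```bash
# List the private IPs of the ENIs that EKS attaches in the cluster VPC.
# The description filter matches what these ENIs are observed to carry today;
# this is not a documented contract and may change.
aws ec2 describe-network-interfaces \
  --filters "Name=description,Values=Amazon EKS my-cluster" \
  --query 'NetworkInterfaces[*].PrivateIpAddress' \
  --output text
```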
@pwdebruin1 that blog is incorrect, we'll get that corrected. @kzwang the approach you suggest does not transit the internet, but those IPs are subject to change as API servers are upgraded or undergo routine change.
Just to clarify (k8s newbie here), the docs state that:
Do those docs actually mean the Kubernetes cluster API? I'm assuming worker nodes would not need to access the EKS API for any standard sort of deployment (I'm just working out whether I want the roadmap item specified by this issue, or separate PrivateLink support for the EKS API as well).
https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh#L86
By default, the worker node bootstrap script calls the Amazon EKS API to get the API server endpoint and the cluster's certificate authority data. If you launch your node group by following the steps here, you can bypass this call by specifying the API server endpoint and cluster certificate authority data as BootstrapArguments when you launch the node group:
https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh#L20
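(To make that bypass concrete, a minimal sketch; the endpoint URL and CA blob are placeholders, and the flag names come from the linked bootstrap.sh.)

```bash
# Skip the bootstrap script's call to the EKS API by passing the API server
# endpoint and the base64-encoded cluster CA directly (placeholder values).
/etc/eks/bootstrap.sh my-cluster \
  --apiserver-endpoint 'https://ABCD1234EXAMPLE.yl4.us-east-1.eks.amazonaws.com' \
  --b64-cluster-ca 'LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...'
```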
This feature is now live and available for all EKS customers! |
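(For readers arriving later, a minimal sketch of enabling it from the AWS CLI; the cluster name and region are placeholders.)

```bash
# Turn on private access to the Kubernetes API endpoint and turn off public
# access. The update is asynchronous; wait for the cluster to return to ACTIVE.
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=false
```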
Perfect, right on time. Thank you guys.
@tabern Thanks. Just tried this. Why is the private zone hidden? Ideally, we would have disabled public access, enabled private access, and used Route 53 resolvers to forward to the private zone from on-premises (over Direct Connect). Now we either have to set up bastion hosts in the VPC to forward traffic or keep public access enabled :(
@spasam have you tried with Direct Connect yet? The private zone is not available in the Route 53 console, but this does not mean you cannot use Direct Connect. You connect kubectl to the control plane using the provisioned endpoint in the same way regardless of the access control setting. The difference is that the connection will not work if you are outside the VPC and do not have Direct Connect, a transit gateway, etc. enabled for your client to connect to the API server within the VPC. See the info in our docs.
Great work. Any idea if CloudFormation support is coming to enable this feature via IaC?
Worth noting you also need to bring up a VPC endpoint for the EC2 API (PrivateLink) for the CNI functions to work correctly, as well as endpoints for anything your containers need, such as ECR.
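(A sketch of what that looks like with interface endpoints; all IDs are placeholders, and the exact set of services your workloads need may differ.)

```bash
# Create interface endpoints so nodes in private subnets can reach EC2 and
# ECR without internet egress (VPC/subnet/SG IDs are placeholders). Note that
# ECR image layers are served from S3, which needs a separate gateway endpoint.
for svc in ec2 ecr.api ecr.dkr; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name "com.amazonaws.us-east-1.${svc}" \
    --subnet-ids subnet-0aaa1111 subnet-0bbb2222 \
    --security-group-ids sg-0ccc3333 \
    --private-dns-enabled
done
```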
@tabern thanks for your response. We do use Direct Connect. The problem is with DNS resolution when only private access is enabled.
We also have the same problem re: DNS resolution. We'd like to access the private endpoint from a VPN in a different VPC from the EKS cluster. So far I've tried VPC peering (with DNS resolution & hostnames enabled in both VPCs, as well as requester & accepter DNS resolution), and I've also tried using a transit gateway (with DNS support enabled). Interestingly, I can resolve an instance IP in the other VPC using the Amazon-provided instance DNS name, but not the private endpoint DNS name.
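(To make the symptom concrete, roughly what the test from the peered VPC looks like; both hostnames are placeholders.)

```bash
# Run from an instance in the VPN/peered VPC:
dig +short ip-10-1-2-3.us-east-1.compute.internal           # resolves via peering
dig +short ABCD1234EXAMPLE.yl4.us-east-1.eks.amazonaws.com  # no private answer
```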
Adding another voice to the chorus. Same problem as @antonosmond |
Agreed with the other comments that the DNS resolution doesn't seem ideal. A not-so-nice workaround is to deploy a web proxy on an EC2 node in the VPC, then set the HTTPS_PROXY environment variable on your client before running kubectl. The proxy in the VPC performs the DNS resolution, so your client network doesn't need access to Route 53.
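(A minimal sketch of that workaround, assuming a forward proxy such as Squid is already listening in the VPC; the address and port are placeholders.)

```bash
# Send kubectl's API traffic through the in-VPC proxy so DNS resolution for
# the endpoint happens inside the VPC (proxy address is a placeholder).
export HTTPS_PROXY=http://10.0.1.50:3128
kubectl get nodes
```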
We are using custom DNS servers in our VPCs and therefore can't even use the private endpoint on our EC2 instances.
Theoretically speaking: wouldn't it work to put an A record for the known IP (which can be retrieved from within the cluster) in some private Route 53 zone? Not a super nice solution, I mean, but it should work?
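(Roughly what that would look like, heavily hedged: as pointed out just below, the control-plane IPs can change without notice. Zone ID, record name, and IP are placeholders.)

```bash
# Get the current API server IPs from inside the cluster...
kubectl get endpoints kubernetes -o jsonpath='{.subsets[*].addresses[*].ip}'

# ...then upsert an A record in a private Route 53 zone (placeholders shown).
# Fragile: the IPs can change whenever AWS replaces control-plane instances.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"ABCD1234EXAMPLE.yl4.us-east-1.eks.amazonaws.com","Type":"A","TTL":60,"ResourceRecords":[{"Value":"10.0.12.34"}]}}]}'
```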
@devkid wrote:
> We are using custom DNS servers in our VPCs
So are we. The web proxy I used let me configure a custom host resolver plugin, and in the plugin I used a DNS client library to get the IP address from Route 53 instead of using the host's DNS client. I only got this working with kubectl. I'm not sure whether the nodes themselves can be configured to connect to the API through a web proxy. |
@Alien2150 because there is no guarantee whatsoever regarding the permanence of those IPs. (What if AWS decides to replace a faulty node of the control plane and the new instance gets a new IP?) |
Thanks for the feedback, keep it coming. We're currently exploring options to allow DNS routing with private endpoints and will update here as we have more information. |
@tabern Feedback: in case you were to provide multiple endpoints (one private, one public), it would be nice to have an option in
If one were to disable public access, what would be the ramifications?
@devkid If I understand correctly, the API server endpoint DNS name is the same for both the public and private endpoints; it's just that AWS adds a private DNS record (pointing to an internal load balancer?) to the cluster VPC if the private endpoint is enabled.
Yes to both |
Yeah, come on guys. Great feature, but it's not helpful if you run your EKS clusters in their own VPCs, separate from where you terminate your VPN connections.
I'm aware of the current state; I made suggestions for changes.
We also have the same DNS problem. We have a split-brain DNS server in another VPC that we use to resolve both Route 53 and corporate addresses. If we could see the Route 53 zone in the console, we could associate it with the other VPC to solve the resolution problem, but without that, this is a showstopper :(
@wilbur4321 can you point your split-brain DNS server to forward to the VPC .2 address as a conditional forwarder for the Route 53 domain? That way, even though you can't see it in Route 53, it's resolvable from your corp network.
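(For a BIND-based split-brain server, the conditional forward might look roughly like this; the zone name and forwarder IP are placeholders for your region and VPC CIDR.)

```bash
# Append a conditional forward zone on the corporate DNS server (BIND syntax).
# 10.0.0.2 stands in for the VPC resolver, i.e. the VPC CIDR base plus two.
cat >> /etc/named.conf <<'EOF'
zone "us-east-1.eks.amazonaws.com" {
    type forward;
    forward only;
    forwarders { 10.0.0.2; };
};
EOF
```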
@cdenneen our problem is that the split-brain DNS server is in a different VPC (in a different account, no less), and we've got our subnet's DHCP pointing to it for everything there as well, so that our AWS systems/containers can talk back into corporate. Normally this works great, but it gives us the problem that EKS worker hosts won't be able to talk to the cluster because they can't resolve its name.
Hi all, we're now tracking the feature request for DNS resolution to the private hosted zone to allow access to the private Kubernetes API server endpoint as a new item on the roadmap. Please feel free to leave any feedback here: #221 |
I've tried using this feature and it broke the EKS cluster. The state shows: "Endpoint-access-update | FAILED" with "Errors (0)".
@mycroftmih is your cluster still there? Can you open a support ticket so we can investigate? |
I still have the cluster; it's in the active state but inaccessible. It's a testing account with Basic support, so I don't think I'll be able to open a support ticket from there. There was an automatic case opened by AWS, but I didn't see it in time, so it's closed (5920666461).
Hello @tabern, I created an EC2 instance with Rancher server installed on it inside VPC A; in VPC B, I deployed an EKS cluster through the Rancher UI. By default it creates the API server endpoint as public, so I went to the EKS console and changed the API server endpoint to private. After that, I created a transit gateway so I could route traffic between the two VPCs; however, I still can't see my EKS cluster in the Rancher console. The Rancher server console shows me an error that it can't establish a connection between my server and my EKS cluster. Any suggestions? Could this feature help me with this?
Provide customers with private endpoint access to EKS.