[aws-eks] Support choosing public, private or CIDR whitelist endpoints #5220
Comments
Requested from the CloudFormation team as well: aws-cloudformation/cloudformation-coverage-roadmap#118
I thought I could work around this by creating the EKS cluster using CDK and then manually changing the public endpoint to a private endpoint in the AWS web console. That step works, of course, but now I'm running into the next issue: I can't access the cluster properly through the CDK.
But this results in kubectl timeouts in the underlying Lambda function. When I switch back to a public endpoint for my EKS cluster the code works. This should probably be fixed in the security group associated with the EKS cluster, but it is hard to figure out how to extend the security group in such a way that a private EKS cluster is accessible to the CDK Lambdas...
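For context, here is a minimal sketch of the kind of security-group wiring the comment is describing, assuming the kubectl handler Lambda were attached to the cluster's VPC (the construct did not do this at the time) and assuming you have a handle on the control-plane security group; all references here are illustrative assumptions, not part of the aws-eks API:

```ts
import * as ec2 from '@aws-cdk/aws-ec2';

// Illustrative only: these references are assumptions about how the pieces
// could be wired up, not what the aws-eks construct did at the time.
declare const controlPlaneSecurityGroup: ec2.ISecurityGroup;   // SG on the private endpoint ENIs
declare const kubectlHandlerSecurityGroup: ec2.ISecurityGroup; // SG attached to the kubectl handler Lambda

// The private K8S API endpoint is reachable only from inside the VPC, and only
// from sources the control-plane security group allows on port 443.
controlPlaneSecurityGroup.addIngressRule(
  kubectlHandlerSecurityGroup,
  ec2.Port.tcp(443),
  'Allow the CDK kubectl handler Lambda to reach the private EKS API endpoint',
);
```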
Hi EKS service team, Ramesh (https://github.com/rameshmimit) and I are also blocked by this issue. After some digging, we have come up with a workaround and scripted it as well, as elaborated below. Could you please confirm this workaround looks OK? Another question is when we can expect official CDK support for the EKS private K8S API endpoint. Thx a lot in advance.

Root Cause Analysis: After some digging, we figured out the root cause behind the scenes, as elaborated below:

i. To manage EKS cluster resources and K8S resources, the CDK aws-eks module uses CFN Custom Resources, which are underpinned by a CR Provider Framework Lambda and Handler Lambdas behind the scenes.

ii. For managing EKS cluster resources, e.g. creating and upgrading the EKS cluster itself, CDK aws-eks models them as Custom::AWSCDK-EKS-Cluster and Custom::AWSCDK-EKS-FargateProfile. Behind the Framework Lambdas, aws-cdk uses two Handler Lambda functions to provision those resources. Both Handler Lambdas talk to the AWS EKS service API, not the EKS K8S API.

iii. For managing K8S resources, e.g. creating and updating K8S objects (deployments, Helm charts, services, etc.), CDK aws-eks models them as Custom::AWSCDK-EKS-KubernetesResource, Custom::AWSCDK-EKS-HelmChart, and Custom::AWSCDK-EKS-KubernetesPatch. aws-cdk uses one Handler Lambda to provision them behind the scenes. This Handler Lambda talks to the EKS K8S API endpoint via the helm & kubectl binaries shipped in the Lambda Layer from https://github.com/aws-samples/aws-lambda-layer-kubectl.

iv. So when we change a CDK-created EKS cluster from a public K8S API endpoint to private only, it does not affect the two Handler Lambdas managing cluster resources, because they only talk to the AWS EKS service API, not the EKS K8S API. But the change does break the Handler Lambda managing K8S resources. The reason is that this Handler Lambda needs to talk to the EKS K8S API but cannot reach the IPs of the private EKS K8S API endpoint, because by default it is not associated with the VPC used by EKS. In fact, it is not associated with any VPC and hence can only talk to the public EKS K8S API endpoint.

Workaround: Based on the root cause analysis above, as long as we can add a VPC association to the Custom Resource Handler Lambda that manages K8S resources, it will be able to talk to the private EKS K8S API endpoint and the issue is resolved. Along that line of thinking, we took the following steps and confirmed that the workaround indeed works:

i. First run the CDK scripts to create an EKS cluster with a public K8S API endpoint, with the required capacity or a managed worker node group.

ii. Then use the AWS CLI to change the EKS K8S API endpoint from public to private only, i.e. "aws eks update-cluster-config --name devEKSCluster --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true".

iii. Next, as inspired by aws-samples/aws-lambda-layer-kubectl#32, we've done the following actions:
v. Now run "cdk deploy" to deploy Helm & K8S resources via CDK, and it all works.

vi. Then we also verified the whole CDK lifecycle, i.e. adding/updating resources via CDK code and also destroying & re-creating the CDK stacks; it all works fine.

To automate the workaround, we also scripted it as the attached enable_eks_private_k8s_api_endpoint.sh. enable_eks_private_k8s_api_endpoint.zip

We would appreciate it if you could validate whether this workaround is logically sound and can be used until official CDK support arrives.
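The attached script is not reproduced here, but as a rough sketch of the two API calls the workaround hinges on (switching the endpoint to private only, then attaching the kubectl Handler Lambda to the cluster VPC), here is a TypeScript equivalent using the AWS SDK for JavaScript v3; the cluster name, function name, subnet IDs and security group ID are placeholders, not values from the original script:

```ts
import { EKSClient, UpdateClusterConfigCommand } from '@aws-sdk/client-eks';
import { LambdaClient, UpdateFunctionConfigurationCommand } from '@aws-sdk/client-lambda';

// Placeholder identifiers -- substitute the real cluster name, the physical
// name of the CDK kubectl handler Lambda, and the cluster's subnets/SG.
const CLUSTER_NAME = 'devEKSCluster';
const KUBECTL_HANDLER_FUNCTION = 'my-stack-kubectl-handler';   // assumption
const CLUSTER_SUBNET_IDS = ['subnet-aaaa', 'subnet-bbbb'];     // assumption
const CLUSTER_SECURITY_GROUP_IDS = ['sg-cccc'];                // assumption

async function applyWorkaround(): Promise<void> {
  const eks = new EKSClient({});
  const lambda = new LambdaClient({});

  // Step 1: switch the K8S API endpoint from public to private only
  // (equivalent to the `aws eks update-cluster-config` command above).
  // Note: the update is asynchronous; a real script should wait for the
  // cluster update to finish before relying on the private endpoint.
  await eks.send(new UpdateClusterConfigCommand({
    name: CLUSTER_NAME,
    resourcesVpcConfig: { endpointPublicAccess: false, endpointPrivateAccess: true },
  }));

  // Step 2: attach the kubectl handler Lambda to the cluster VPC so it can
  // resolve and reach the private endpoint ENIs.
  await lambda.send(new UpdateFunctionConfigurationCommand({
    FunctionName: KUBECTL_HANDLER_FUNCTION,
    VpcConfig: { SubnetIds: CLUSTER_SUBNET_IDS, SecurityGroupIds: CLUSTER_SECURITY_GROUP_IDS },
  }));
}

applyWorkaround().catch(console.error);
```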
Hi @JasperW01 - Amazing analysis, it's spot on! I'm actually actively working on adding this support; you can have a look at this PR to get some visibility into the work. I expect it won't take long until it's merged and released :) Thanks!
Hi @iliapolo - Thx a lot for the confirmation and the PR link. Had a quick look at the PR. Looks cool. Just one point: maybe we need to open up the choice to allow users to specify the subnets as well. I have left a comment in the PR to explain why. Also thx a lot for actively working on this point, which is very important and time-critical for us. Much appreciated!
Sure thing. I'll take a look at the comment. Thanks
Add an option to configure endpoint access to the cluster control plane. Resolves #5220. In addition, there is now a way to pass environment variables into the kubectl handler. This is necessary to allow private VPCs (with no internet access) to use an organizational proxy when installing Helm charts and, in general, when access to the internet is needed. See #9095 (comment).

BREAKING CHANGE: endpoint access is configured to private and public by default instead of just public

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
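For readers landing here later, a minimal sketch of how the option added by this PR reads in CDK code, with eks.EndpointAccess selecting private-only access and kubectlEnvironment injecting proxy variables into the kubectl handler; the proxy address is a placeholder and the exact property names should be checked against the released aws-eks module:

```ts
import * as cdk from '@aws-cdk/core';
import * as eks from '@aws-cdk/aws-eks';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'PrivateEksStack');

// Private-only control plane endpoint; the construct wires up the kubectl
// handler so Custom::AWSCDK-EKS-KubernetesResource et al. keep working.
new eks.Cluster(stack, 'Cluster', {
  version: eks.KubernetesVersion.V1_17,
  endpointAccess: eks.EndpointAccess.PRIVATE,
  // For VPCs without internet access, route the handler's outbound calls
  // (e.g. pulling Helm charts) through an organizational proxy.
  kubectlEnvironment: {
    http_proxy: 'http://proxy.internal:3128',   // placeholder proxy address
    https_proxy: 'http://proxy.internal:3128',  // placeholder proxy address
  },
});
```

The same option also covers the CIDR-whitelist case from the issue title, e.g. restricting the public endpoint with something like eks.EndpointAccess.PUBLIC_AND_PRIVATE.onlyFrom('203.0.113.0/24').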
Enable choosing public or private endpoints at the time of cluster creation.
Use Case
EKS supports choosing public / private endpoints at the time of cluster creation - https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
Proposed Solution
Add the properties 'publicEndpointEnabled: boolean' and 'privateEndpointEnabled: boolean' to @aws-cdk/aws-eks.ClusterProps
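A hypothetical sketch of what the proposed additions to ClusterProps could look like; these property names come from the proposal above and are not part of the shipped API (the feature ultimately landed with a different shape, see the PR above):

```ts
// Hypothetical sketch of the proposed flags -- names follow the proposal
// above and would be merged into ClusterProps; they do not reflect the API
// that was eventually released.
export interface ProposedEndpointProps {
  /**
   * Whether the public Kubernetes API server endpoint is enabled.
   * @default true
   */
  readonly publicEndpointEnabled?: boolean;

  /**
   * Whether the private (in-VPC) Kubernetes API server endpoint is enabled.
   * @default false
   */
  readonly privateEndpointEnabled?: boolean;
}
```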