
[aws-eks] Support choosing public, private or CIDR whitelist endpoints #5220

Closed · Tracked by #6491 · Fixed by #9095

abhishekjawali opened this issue Nov 27, 2019 · 6 comments
@abhishekjawali

Enable choosing public or private endpoints at the time of cluster creation.

Use Case

EKS supports choosing public / private endpoints at the time of cluster creation - https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html

Proposed Solution

Add the properties 'publicEndpointEnabled: boolean' and 'privateEndpointEnabled: boolean' to @aws-cdk/aws-eks.ClusterProps.
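
A minimal sketch of how the proposed props might look in use (the prop names come from this proposal and are not a shipped API; the stack is illustrative):

```ts
import * as eks from '@aws-cdk/aws-eks';
import { Stack } from '@aws-cdk/core';

declare const stack: Stack;

// Hypothetical usage of the proposed ClusterProps additions:
new eks.Cluster(stack, 'Cluster', {
  publicEndpointEnabled: false,  // proposed prop (not in today's API)
  privateEndpointEnabled: true,  // proposed prop (not in today's API)
});
```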

@abhishekjawali added the feature-request and needs-triage labels on Nov 27, 2019
@SomayaB added the @aws-cdk/aws-eks label on Nov 27, 2019
@dnascimento

Requested from the CloudFormation team as well: aws-cloudformation/cloudformation-coverage-roadmap#118

@eladb added the effort/small label on Jan 23, 2020
@SomayaB removed the needs-triage label on Mar 5, 2020
@eladb added the p1 label on Mar 9, 2020
@eladb changed the title from "[EKS] Support choosing public or private endpoints" to "[EKS] Support choosing public, private or CIDR whitelist endpoints" on Mar 9, 2020
@ccfife mentioned this issue on Apr 8, 2020
@arjanschaaf (Contributor)

I thought I could work around it by creating the EKS cluster with CDK and then manually changing the public endpoint to a private endpoint in the AWS web console. That step works, of course, but now I'm running into the next issue: I can't access the cluster properly through CDK.
For example, I extended my CDK stack by adding an additional Kubernetes resource (in this example, a cert-manager ClusterIssuer object):

```ts
cluster.addResource('production-cluster-issuer', {
  apiVersion: 'cert-manager.io/v1alpha2',
  kind: 'ClusterIssuer',
  metadata: {
    name: 'letsencrypt-production',
    namespace: 'cert-manager',
  },
  spec: {
    acme: {
      email: 'info@example.com',
      server: 'https://acme-v02.api.letsencrypt.org/directory',
      privateKeySecretRef: {
        name: 'letsencrypt-production',
      },
      solvers: [{
        http01: {
          ingress: {
            class: 'nginx',
          },
        },
      }],
    },
  },
});
```

But this results in kubectl timeouts in the underlying Lambda function. When I switch my EKS cluster back to a public endpoint, the code works. This should probably be fixed in the security group associated with the EKS cluster, but it is hard to figure out how to extend the security group so that a private EKS cluster is accessible to the CDK Lambdas...
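
Conceptually, the missing piece is an ingress rule on the cluster's control-plane security group allowing HTTPS from wherever the kubectl handler runs. A rough sketch (handlerSecurityGroup is hypothetical — CDK does not expose the handler's network configuration at this point, which is exactly the gap this issue describes):

```ts
import * as ec2 from '@aws-cdk/aws-ec2';

declare const clusterSecurityGroup: ec2.ISecurityGroup; // the EKS control-plane SG
declare const handlerSecurityGroup: ec2.ISecurityGroup; // hypothetical: SG of the kubectl handler Lambda

// Allow the kubectl handler to reach the private API endpoint on 443.
clusterSecurityGroup.addIngressRule(
  handlerSecurityGroup,
  ec2.Port.tcp(443),
  'kubectl handler -> EKS private API endpoint',
);
```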

@eladb added this to the EKS Developer Preview milestone on Jun 24, 2020
@eladb changed the title from "[EKS] Support choosing public, private or CIDR whitelist endpoints" to "[EKS Feature] Support choosing public, private or CIDR whitelist endpoints" on Jun 24, 2020
@eladb removed this from the EKS Developer Preview milestone on Jun 24, 2020
@JasperW01 commented on Jul 16, 2020

Hi EKS service team,

Ramesh (https://github.com/rameshmimit) and I are also blocked by this issue. After some digging, we have come up with a workaround and scripted it as well, as elaborated below. Could you please confirm this workaround looks OK? Also, when can we expect official CDK support for a private EKS K8S API endpoint? Thx a lot in advance.

Root Cause Analysis:

After some digging, we figured out the root cause, as elaborated below:

i. To manage EKS cluster resources and K8S resources, the CDK aws-eks module uses CFN Custom Resources, which are underpinned by CR Provider Framework Lambdas and Handler Lambdas behind the scenes.

ii. For managing EKS cluster resources, e.g. creating and upgrading the EKS cluster itself, CDK aws-eks models them as Custom::AWSCDK-EKS-Cluster and Custom::AWSCDK-EKS-FargateProfile. Behind the Framework Lambdas, aws-cdk uses two Handler Lambda functions to provision those resources. Both Handler Lambdas talk to the AWS EKS service API, not the EKS K8S API.

iii. For managing K8S resources, e.g. creating and updating K8S objects – deployments, Helm charts, services, etc. – CDK aws-eks models them as Custom::AWSCDK-EKS-KubernetesResource, Custom::AWSCDK-EKS-HelmChart, and Custom::AWSCDK-EKS-KubernetesPatch. aws-cdk uses one Handler Lambda to provision them behind the scenes. This Handler Lambda talks to the EKS K8S API endpoint via the helm & kubectl binaries shipped in the Lambda Layer of https://github.com/aws-samples/aws-lambda-layer-kubectl.

iv. So when we change a CDK-created EKS cluster from a public K8S API endpoint to private only, it does not impact the two Handler Lambdas that manage cluster resources, because they only talk to the AWS EKS service API, not the EKS K8S API. But the change does break the Handler Lambda that manages K8S resources. That Handler Lambda needs to talk to the EKS K8S API, but it cannot reach the IPs of the private EKS K8S API endpoint, because by default it is not associated with the VPC used by EKS. In fact, it is not associated with any VPC, and hence can only talk to a public EKS K8S API endpoint.

Workaround:

So based on the root cause analysis above, as long as we can add a VPC association to the Custom Resource Handler Lambda that manages K8S resources, it will be able to talk to the private EKS K8S API endpoint and the issue is resolved.

Along that line of thinking, we performed the following steps and confirmed that the workaround does indeed work:

i. First, run the CDK scripts to create an EKS cluster with a public K8S API endpoint and the required capacity or a managed worker node group.

ii. Then use the AWS CLI to change the EKS K8S API endpoint from public to private only: aws eks update-cluster-config --name devEKSCluster --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true

iii. Next, as inspired by aws-samples/aws-lambda-layer-kubectl#32, we took the following actions:

a. Find the Custom Resource Handler Lambda that manages K8S resources; it has the description "onEvent handler for EKS kubectl resource provider".
b. Add EC2 network-interface permissions to the Lambda role's inline IAM policy, so that a VPC can be associated with the Lambda in the next step.
c. Associate the VPC & subnets used by the EKS cluster with this Handler Lambda.
d. Also associate the EKS cluster security group with this Handler Lambda. The security group can be found in the EKS console.

iv. Now run "cdk deploy" to deploy Helm & K8S resources via CDK, and it all works.

v. We also verified the whole CDK lifecycle, i.e. adding/updating resources in CDK code and destroying & re-creating the CDK stacks; it all works fine.

To automate the workaround, we also scripted it as the attached enable_eks_private_k8s_api_endpoint.sh (enable_eks_private_k8s_api_endpoint.zip). A sketch of its core follows.
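
For readers without the attachment, the core of the script can be sketched with the AWS SDK (TypeScript here rather than shell; the function name, subnet IDs, and security group ID are placeholders you would look up as described in steps a–d):

```ts
import * as AWS from 'aws-sdk';

const lambda = new AWS.Lambda();

async function attachKubectlHandlerToVpc(): Promise<void> {
  // Placeholder values -- look these up as described in steps a-d above.
  const handlerName = 'kubectl-handler-placeholder'; // Lambda described as "onEvent handler for EKS kubectl resource provider"
  const subnetIds = ['subnet-aaa', 'subnet-bbb'];    // subnets used by the EKS cluster
  const clusterSgId = 'sg-0123456789abcdef0';        // the EKS cluster security group

  // Steps c & d: attach the handler Lambda to the cluster's VPC, subnets and security group.
  // Step b (granting the Lambda role ec2:CreateNetworkInterface etc.) must be done first.
  await lambda.updateFunctionConfiguration({
    FunctionName: handlerName,
    VpcConfig: { SubnetIds: subnetIds, SecurityGroupIds: [clusterSgId] },
  }).promise();
}
```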

We'd appreciate it if you could validate whether this workaround is logically sound and can be used until official CDK support lands.

@eladb assigned iliapolo and unassigned eladb on Jul 16, 2020
@iliapolo (Contributor)

Hi @JasperW01 - Amazing analysis, it's spot on!

I'm actually actively working on adding this support, you can have a look at this PR to get some visibility into the work.

I expect it won't take long until it's merged and released :)

Thanks!

@iliapolo added the in-progress label on Jul 16, 2020
@JasperW01

Hi @iliapolo - Thx a lot for the confirmation and the PR link.

Had a quick look at the PR. Looks cool. Just one point: we may need to let users specify the subnets as well. I've left a comment on the PR explaining why.

Also thx a lot for actively working on this point which is very important and time-critical to us. Much appreciated!

@iliapolo (Contributor)

Sure thing. I'll take a look at the comment.

Thanks

@eladb added this to the EKS Dev Preview milestone on Jul 22, 2020
The mergify bot closed this as completed in #9095 on Aug 5, 2020
mergify bot pushed a commit that referenced this issue on Aug 5, 2020:
Add an option to configure endpoint access to the cluster control plane.

Resolves #5220

In addition, there is now a way to pass environment variables into the kubectl handler. This is necessary to allow private VPCs (with no internet access) to use an organizational proxy when installing Helm charts and, in general, when internet access is needed. See #9095 (comment).

BREAKING CHANGE: endpoint access is configured to private and public by default instead of just public

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
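
For reference, usage of the shipped API looks roughly like this (the Kubernetes version and proxy URL are illustrative; see the aws-eks module README for the exact signatures):

```ts
import * as eks from '@aws-cdk/aws-eks';
import { Stack } from '@aws-cdk/core';

declare const stack: Stack;

new eks.Cluster(stack, 'Cluster', {
  version: eks.KubernetesVersion.V1_17, // illustrative
  // PRIVATE, PUBLIC, or PUBLIC_AND_PRIVATE (the new default);
  // PUBLIC.onlyFrom('1.2.3.4/32') restricts the public endpoint to a CIDR whitelist.
  endpointAccess: eks.EndpointAccess.PRIVATE,
  // Environment variables for the kubectl handler, e.g. an organizational proxy:
  kubectlEnvironment: {
    http_proxy: 'http://proxy.example.com:8080', // hypothetical proxy URL
  },
});
```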
eladb pushed a commit that referenced this issue on Aug 10, 2020 (same commit message as above)
curtiseppel pushed a commit to curtiseppel/aws-cdk that referenced this issue on Aug 11, 2020 (same commit message as above)
@iliapolo removed the in-progress label on Aug 16, 2020
@iliapolo changed the title from "[EKS Feature] Support choosing public, private or CIDR whitelist endpoints" to "[aws-eks] Support choosing public, private or CIDR whitelist endpoints" on Aug 16, 2020