feat(eks): programmatic definition of kubernetes resources #3510

Merged
merged 27 commits into from
Aug 7, 2019
Changes from 6 commits
27 commits
4b128bb
chore: update package-lock.json
Jul 29, 2019
2a9dd48
feat(eks): define kubernetes resources
Jul 31, 2019
51d2fea
nice!
Aug 1, 2019
a944536
update readme
Aug 1, 2019
386c3d6
Merge remote-tracking branch 'origin/master' into benisrae/eks-kubectl
Aug 1, 2019
91b4305
Update README.md
Aug 1, 2019
7fdb502
feat(events): ability to add cross-account targets (#3323)
skinny85 Aug 1, 2019
d9598a4
chore(ci): add mergify config file (#3502)
NGL321 Aug 1, 2019
27ff73f
chore: update jsii to 0.14.3 (#3513)
shivlaks Aug 1, 2019
a5432e3
fix(iam): correctly limit the default PolicyName to 128 characters (#…
skinny85 Aug 2, 2019
15b806b
v1.3.0 (#3516)
shivlaks Aug 2, 2019
c223827
fix: typo in restapi.ts (#3530)
hoegertn Aug 5, 2019
1ae4c13
feat(ecs): container dependencies (#3032)
Don-CA Aug 5, 2019
297d91a
feat(s3-deployment): CloudFront invalidation (#3213)
hoegertn Aug 5, 2019
75d06fd
docs(core): findChild gets direct child only (#3512)
mirskiy Aug 5, 2019
fd02fc0
doc(iam): update references to addManagedPolicy (#3511)
mirskiy Aug 5, 2019
d2153d9
fix(sqs): do not emit grants to the AWS-managed encryption key (#3169)
RomainMuller Aug 5, 2019
535ab91
fix(lambda): allow ArnPrincipal in grantInvoke (#3501)
IainCole Aug 5, 2019
0200ec8
chore(contrib): remove API stabilization disclaimer
Aug 5, 2019
ede2f8e
fix(ssm): add GetParameters action to grantRead() (#3546)
jogold Aug 6, 2019
c0b74a1
misc
Aug 6, 2019
54bda92
Merge remote-tracking branch 'origin/master' into benisrae/eks-kubectl
Aug 6, 2019
1c1c886
addManifest => addResource
Aug 6, 2019
1f877ea
update test expectations
Aug 6, 2019
0f42503
add unit test for customresrouce.ref
Aug 7, 2019
121ce0f
fix sample link
Aug 7, 2019
a7c86c2
Merge branch 'master' into benisrae/eks-kubectl
mergify[bot] Aug 7, 2019
4 changes: 4 additions & 0 deletions packages/@aws-cdk/aws-cloudformation/lib/custom-resource.ts
@@ -100,6 +100,10 @@ export class CustomResource extends Resource {
    this.resource.applyRemovalPolicy(props.removalPolicy, { default: RemovalPolicy.DESTROY });
  }

  public get ref() {
    return this.resource.ref;
  }

  public getAtt(attributeName: string) {
    return this.resource.getAtt(attributeName);
  }
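
For context, the new getter exposes the underlying resource's CloudFormation `Ref`. A minimal sketch of how a consumer might use it; the SNS-topic-backed provider and the construct names are illustrative assumptions, not part of this diff:

```ts
import cfn = require('@aws-cdk/aws-cloudformation');
import sns = require('@aws-cdk/aws-sns');
import { Construct, Stack } from '@aws-cdk/core';

class MyStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // A custom resource backed by an SNS topic provider (one of several options).
    const topic = new sns.Topic(this, 'ProviderTopic');
    const resource = new cfn.CustomResource(this, 'MyResource', {
      provider: cfn.CustomResourceProvider.topic(topic),
    });

    // The new getter returns the resource's physical ID (its `Ref`).
    const physicalId = resource.ref;
  }
}
```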
236 changes: 225 additions & 11 deletions packages/@aws-cdk/aws-eks/README.md
@@ -15,20 +15,173 @@
---
<!--END STABILITY BANNER-->

This construct library allows you to define [Amazon Elastic Container Service
for Kubernetes (EKS)](https://aws.amazon.com/eks/) clusters programmatically.

This library also supports programmatically defining Kubernetes resource
manifests within EKS clusters.

This example defines an Amazon EKS cluster with a single pod:

```ts
const cluster = new eks.Cluster(this, 'hello-eks', { vpc });

cluster.addCapacity('default', {
  instanceType: new ec2.InstanceType('t2.medium'),
  desiredCapacity: 10,
});

cluster.addManifest('mypod', {
  apiVersion: 'v1',
  kind: 'Pod',
  metadata: { name: 'mypod' },
  spec: {
    containers: [
      {
        name: 'hello',
        image: 'paulbouwer/hello-kubernetes:1.5',
        ports: [ { containerPort: 8080 } ]
      }
    ]
  }
});
```

**NOTE**: in order to determine the default AMI for Amazon EKS instances the
`eks.Cluster` resource must be defined within a stack that is configured with an
explicit `env.region`. See [Environments](https://docs.aws.amazon.com/cdk/latest/guide/environments.html)
in the AWS CDK Developer Guide for more details.
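
For illustration, here is a minimal sketch of a stack wired up with an explicit region; the stack class and the region value are placeholders, not part of the library:

```ts
import cdk = require('@aws-cdk/core');

class EksStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // ... define the eks.Cluster and its capacity here ...
  }
}

const app = new cdk.App();

// The explicit env.region allows the default EKS-optimized AMI to be determined.
new EksStack(app, 'EksStack', { env: { region: 'us-east-1' } });
app.synth();
```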

Here is a [complete sample](./test/integ.eks-kubectl.lit.ts).

### Interacting with Your Cluster

The Amazon EKS construct library allows you to specify an IAM role that will be
granted `system:masters` privileges on your cluster.

Without specifying a `mastersRole`, you will not be able to interact manually
with the cluster.

The following example defines an IAM role that can be assumed by all users
in the account and shows how to use the `mastersRole` property to map this
role to the Kubernetes `system:masters` group:

```ts
// first define the role
const clusterAdmin = new iam.Role(this, 'AdminRole', {
  assumedBy: new iam.AccountRootPrincipal()
});

// now define the cluster and map the role to the "masters" group
new eks.Cluster(this, 'Cluster', {
  vpc: vpc,
  mastersRole: clusterAdmin
});
```

Now, given AWS credentials for a user that is trusted by the masters role, you
should be able to interact with your cluster like this:

```console
$ aws eks update-kubeconfig --name CLUSTER-NAME
$ kubectl get all -n kube-system
...
```

**NOTE**: if the cluster is configured with `kubectlEnabled: false`, it
will be created with the role/user that created the AWS CloudFormation
stack. See [Kubectl Support](#kubectl-support) for details.

### Defining Kubernetes Resources

The `cluster.addManifest()` method can be used to apply Kubernetes resource
manifests to this cluster.

The following example deploys the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes)
service on the cluster:

```ts
const appLabel = { app: "hello-kubernetes" };

const deployment = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: "hello-kubernetes" },
  spec: {
    replicas: 3,
    selector: { matchLabels: appLabel },
    template: {
      metadata: { labels: appLabel },
      spec: {
        containers: [
          {
            name: "hello-kubernetes",
            image: "paulbouwer/hello-kubernetes:1.5",
            ports: [ { containerPort: 8080 } ]
          }
        ]
      }
    }
  }
};

const service = {
  apiVersion: "v1",
  kind: "Service",
  metadata: { name: "hello-kubernetes" },
  spec: {
    type: "LoadBalancer",
    ports: [ { port: 80, targetPort: 8080 } ],
    selector: appLabel
  }
};

cluster.addManifest('hello-kub', service, deployment);
```

Manifests are modeled as CloudFormation resources. This means that if the
`addManifest` statement is deleted from your code, the next `cdk deploy` will
issue a `kubectl delete` command and the Kubernetes resources will be deleted.

You can also use the `KubernetesManifest` construct directly (note that the
construct needs a reference to the target `cluster`):

```ts
new KubernetesManifest(this, 'service', {
  cluster,
  manifest: [ {
    apiVersion: "v1",
    kind: "Service",
    metadata: { name: "hello-kubernetes" },
    spec: {
      type: "LoadBalancer",
      ports: [ { port: 80, targetPort: 8080 } ],
      selector: appLabel
    }
  } ]
});
```

### AWS IAM Mapping

As described in the [Amazon EKS User Guide](https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html),
you can map AWS IAM users and roles to Kubernetes RBAC configuration.

The Amazon EKS construct manages the **aws-auth ConfigMap** Kubernetes resource
on your behalf and includes APIs for adding user and role mappings.

For example, let's say you want to grant an IAM user administrative
privileges on your cluster:

```ts
const adminUser = new iam.User(this, 'Admin');
cluster.addUserMapping(adminUser, { groups: [ 'system:masters' ]});
```
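
Role mappings work the same way. For example, here is a sketch that maps an IAM
role to a custom Kubernetes user name and group; the role and the names shown
are illustrative:

```ts
const devRole = new iam.Role(this, 'DevRole', {
  assumedBy: new iam.AccountRootPrincipal()
});

cluster.addRoleMapping(devRole, {
  username: 'dev-user',            // optional; defaults to the role ARN
  groups: [ 'system:basic-user' ]  // Kubernetes RBAC groups to map to
});
```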

Furthermore, when auto-scaling capacity is added to the cluster (through
`cluster.addCapacity` or `cluster.addAutoScalingGroup`), the IAM instance role
of the auto-scaling group is automatically mapped to RBAC so that nodes can
connect to the cluster; no manual mapping is required.
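
As a sketch, bringing your own auto-scaling group might look like the
following. The exact options accepted by `addAutoScalingGroup` may differ, and
`EksOptimizedAmi` and the values shown are assumptions for illustration:

```ts
import autoscaling = require('@aws-cdk/aws-autoscaling');

const asg = new autoscaling.AutoScalingGroup(this, 'Nodes', {
  vpc,
  instanceType: new ec2.InstanceType('t3.large'),
  machineImage: new eks.EksOptimizedAmi(),
  desiredCapacity: 3,
});

// The instance role of the group is mapped to RBAC automatically.
cluster.addAutoScalingGroup(asg, {});
```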

### Node SSH Access

If you want to be able to SSH into your worker nodes, you must already
have an SSH key in the region you're connecting to and pass it, and you must

@@ -41,7 +194,68 @@

If you want to SSH into nodes in a private subnet, you should set up a
bastion host in a public subnet. That setup is recommended, but is
unfortunately beyond the scope of this documentation.

### kubectl Support

When you create an Amazon EKS cluster, the IAM entity (user or role) that
creates the cluster, such as a
[federated user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html),
is automatically granted `system:masters` permissions in the cluster's RBAC
configuration.

In order to allow programmatically defining **Kubernetes resources** in your AWS
CDK app and provisioning them through AWS CloudFormation, we will need to assume
this "masters" role every time we want to issue `kubectl` operations against your
cluster.

At the moment, the [AWS::EKS::Cluster](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html)
AWS CloudFormation resource does not support this behavior, so in order to
support "programmatic kubectl", such as applying manifests
and mapping IAM roles from within your CDK application, the Amazon EKS
construct library uses a custom resource for provisioning the cluster.
This custom resource is executed with an IAM role that we can then use
to issue `kubectl` commands.

The default behavior of this library is to use this custom resource in order
to retain programmatic control over the cluster. In other words: to allow
you to define Kubernetes resources in your CDK code instead of having to
manage your Kubernetes applications through a separate system.

One of the implications of this design is that, by default, the user who
provisioned the AWS CloudFormation stack (executed `cdk deploy`) will
not have administrative privileges on the EKS cluster. Furthermore:

1. Additional resources will be synthesized into your template (the AWS Lambda
   function, the role and policy).
2. As described in [Interacting with Your Cluster](#interacting-with-your-cluster),
   if you wish to be able to manually interact with your cluster, you will need
   to map an IAM role or user to the `system:masters` group. This can be done
   either by specifying a `mastersRole` when the cluster is defined, by calling
   `cluster.addMastersRole` (see the sketch below), or by explicitly mapping an
   IAM role or IAM user to the relevant Kubernetes RBAC groups using
   `cluster.addRoleMapping` and/or `cluster.addUserMapping`.
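
For instance, a minimal sketch of the `addMastersRole` option, assuming it
accepts any `iam.IRole` as its usage above suggests (the role ARN is a
placeholder):

```ts
// Import an existing role and grant it system:masters on the cluster.
const opsRole = iam.Role.fromRoleArn(this, 'OpsRole', 'arn:aws:iam::111122223333:role/Ops');
cluster.addMastersRole(opsRole);
```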

If you wish to disable the programmatic kubectl behavior and use the standard
AWS::EKS::Cluster resource, you can specify `kubectlEnabled: false` when you define
the cluster:

```ts
new eks.Cluster(this, 'cluster', {
  vpc: vpc,
  kubectlEnabled: false
});
```

**Take care**: a change in this property will cause the cluster to be destroyed
and a new cluster to be created.

When kubectl is disabled, you should be aware of the following:

1. When you log in to your cluster, you don't need to specify `--role-arn` as
   long as you are using the same user that created the cluster.
2. As described in the Amazon EKS User Guide, you will need to manually edit
   the [aws-auth ConfigMap](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html)
   when you add capacity in order to map the IAM instance role to RBAC and
   allow nodes to join the cluster (see the sketch after this list for the
   shape of this ConfigMap).
3. Any `eks.Cluster` APIs that depend on programmatic kubectl support will fail
   with an error: `addManifest`, `addRoleMapping`, `addUserMapping`,
   `addMastersRole` and `props.mastersRole`.
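
For reference, the mapping you would maintain manually has roughly the shape
below, expressed here as the equivalent manifest object (the node instance role
ARN is a placeholder). This mirrors the mapping described in the EKS user guide:

```ts
const awsAuth = {
  apiVersion: 'v1',
  kind: 'ConfigMap',
  metadata: { name: 'aws-auth', namespace: 'kube-system' },
  data: {
    mapRoles: JSON.stringify([ {
      // Instance role of the worker nodes' auto-scaling group (placeholder).
      rolearn: 'arn:aws:iam::111122223333:role/NodeInstanceRole',
      username: 'system:node:{{EC2PrivateDNSName}}',
      groups: [ 'system:bootstrappers', 'system:nodes' ]
    } ])
  }
};
```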

### Roadmap

- [ ] Add ability to start tasks on clusters using CDK (similar to ECS's "`Service`" concept).
- [ ] Describe how to set up AutoScaling (how to combine EC2 and Kubernetes scaling)
16 changes: 16 additions & 0 deletions packages/@aws-cdk/aws-eks/lib/aws-auth-mapping.ts
@@ -0,0 +1,16 @@

export interface Mapping {
  /**
   * The user name within Kubernetes to map to the IAM role.
   *
   * @default - By default, the user name is the ARN of the IAM role.
   */
  readonly username?: string;

  /**
   * A list of groups within Kubernetes to which the role is mapped.
   *
   * @see https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings
   */
  readonly groups: string[];
}
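
For illustration, a `Mapping` literal might look like this (the values are
examples only):

```ts
const mapping: Mapping = {
  username: 'admin',              // optional; defaults to the role ARN
  groups: [ 'system:masters' ]
};
```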
78 changes: 78 additions & 0 deletions packages/@aws-cdk/aws-eks/lib/aws-auth.ts
@@ -0,0 +1,78 @@
import iam = require('@aws-cdk/aws-iam');
import { Construct, Lazy, Stack } from '@aws-cdk/core';
import { Mapping } from './aws-auth-mapping';
import { Cluster } from './cluster';
import { KubernetesManifest } from './manifest-resource';

export interface AwsAuthProps {
  readonly cluster: Cluster;
}

/**
 * Manages mapping between IAM users and roles to Kubernetes RBAC configuration.
 *
 * @see https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html
 */
export class AwsAuth extends Construct {
  private readonly stack: Stack;
  private readonly roleMappings = new Array<{ role: iam.IRole, mapping: Mapping }>();
  private readonly userMappings = new Array<{ user: iam.IUser, mapping: Mapping }>();

  constructor(scope: Construct, id: string, props: AwsAuthProps) {
    super(scope, id);

    this.stack = Stack.of(this);

    // `Lazy` defers rendering of the role/user mappings until synthesis time,
    // so mappings added after construction are still included in the manifest.
    new KubernetesManifest(this, 'manifest', {
      cluster: props.cluster,
      resources: [ {
        apiVersion: "v1",
        kind: "ConfigMap",
        metadata: {
          name: "aws-auth",
          namespace: "kube-system"
        },
        data: {
          mapRoles: Lazy.anyValue({ produce: () => this.synthesizeMapRoles() }),
          mapUsers: Lazy.anyValue({ produce: () => this.synthesizeMapUsers() }),
        }
      } ]
    });
  }

  /**
   * Adds a mapping between an IAM role and a Kubernetes user and groups.
   *
   * @param role The IAM role to map
   * @param mapping Mapping to k8s user name and groups
   */
  public addRoleMapping(role: iam.IRole, mapping: Mapping) {
    this.roleMappings.push({ role, mapping });
  }

  /**
   * Adds a mapping between an IAM user and a Kubernetes user and groups.
   *
   * @param user The IAM user to map
   * @param mapping Mapping to k8s user name and groups
   */
  public addUserMapping(user: iam.IUser, mapping: Mapping) {
    this.userMappings.push({ user, mapping });
  }

  private synthesizeMapRoles() {
    return this.stack.toJsonString(this.roleMappings.map(m => ({
      rolearn: m.role.roleArn,
      username: m.mapping.username,
      groups: m.mapping.groups
    })));
  }

  private synthesizeMapUsers() {
    return this.stack.toJsonString(this.userMappings.map(m => ({
      userarn: this.stack.formatArn({ service: 'iam', resource: 'user', resourceName: m.user.userName }),
      username: m.mapping.username,
      groups: m.mapping.groups
    })));
  }
}
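
For context, here is a sketch of how this construct could be wired up directly;
normally it is reached through the `Cluster` APIs shown in the README, and the
scope and identifiers below are illustrative:

```ts
import iam = require('@aws-cdk/aws-iam');
import { Construct } from '@aws-cdk/core';
import { AwsAuth } from './aws-auth';
import { Cluster } from './cluster';

declare const scope: Construct;   // some construct scope (illustrative)
declare const cluster: Cluster;   // an existing eks.Cluster

const awsAuth = new AwsAuth(scope, 'AwsAuth', { cluster });

// Map an IAM user to the Kubernetes "system:masters" group.
awsAuth.addUserMapping(new iam.User(scope, 'Admin'), {
  groups: [ 'system:masters' ]
});
```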