This repository has been archived by the owner on Sep 4, 2021. It is now read-only.

Ability to use existing route tables for controller and workers #716

Closed

Conversation


@cknowles commented Oct 7, 2016

We want to deploy our k8s clusters into an existing VPC so that we can share that part of our infrastructure, run some legacy services alongside them, and connect to RDS privately. We would also like to put our workers in a private subnet behind a NAT Gateway.

Old solution - reuse existing subnets

I started off by allowing the use of existing subnets in #671, but after using that for a while I've found that running more than one cluster inside the same VPC, alongside various other services in those same subnets, is problematic, or at least cumbersome.

New solution - reuse existing route tables

As a consequence, I looked for an alternative that would alleviate these issues and settled on allowing the use of existing route tables. This avoids the subnet problems because the subnets are still created by kube-aws and remain k8s-specific, and CloudFormation automatically checks for conflicts with any existing subnet CIDR.

Using existing route tables turned out to be quite easy to add, and it means nodes can be placed either privately or publicly depending on the requirements; it even allows some worker nodes to sit in publicly accessible subnets and others not, if someone really wished. I also added the ability for kube-aws to generate a separate set of controller subnets, so the controller's route table can be controlled independently as well. The main use case for that is simple access to the dashboard, although some may prefer to place the controller in a private subnet and use a bastion host and/or VPN. I'm not 100% familiar with how dashboard connectivity works yet, so there may be a better way to do that part, but having control over the subnet certainly helps some use cases.
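To make this concrete, a hypothetical cluster.yaml excerpt for this setup might look like the sketch below. The per-subnet routeTableId and the controllerSubnets key follow the spirit of this change but are illustrative rather than the exact merged syntax, and all IDs and CIDRs are placeholders.

```yaml
# Hypothetical cluster.yaml excerpt: kube-aws still creates the subnets,
# but each one is associated with a pre-existing route table instead of a
# freshly created one.
vpcId: vpc-0123456789abcdef0        # existing shared VPC

# Worker subnets: private, attached to existing route tables whose
# default route points at a NAT Gateway.
subnets:
  - availabilityZone: us-east-1a
    instanceCIDR: 10.0.10.0/24
    routeTableId: rtb-private-aaaa
  - availabilityZone: us-east-1b
    instanceCIDR: 10.0.11.0/24
    routeTableId: rtb-private-bbbb

# Controller subnets: generated separately so their route table (and hence
# public/private placement) can differ from the workers'.
controllerSubnets:
  - availabilityZone: us-east-1a
    instanceCIDR: 10.0.20.0/24
    routeTableId: rtb-public-cccc   # existing route table with an IGW route
```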

Workers in private subnet

One slight kink I found is that if we configure the workers with private subnets and the controller with a public subnet, then k8s-controlled ELBs only get enabled in a single AZ, which is obviously not what we want. This is because Kubernetes chooses ELB subnets based on subnet tags and IGW attachment, which I believe is also being worked on.

I know there's additional work coming for this in #340, but in the meantime I've worked around it by allowing multiple controller subnets. In this use case they would be linked to public route tables and hence allow multi-AZ ELBs; the controller subnets in AZs other than the controller's would simply be empty. Perhaps this calls for renaming subnets and controllerSubnets to public/private and then marking them as enabled for workers/controllers. There is some similar information on this use case in kubernetes/kops#428.
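A sketch of that workaround, again with illustrative key names and placeholder values: a controller subnet is declared in every AZ and attached to a public route table, even though the controller instance only runs in one of them, so Kubernetes can find a public subnet in each AZ when it provisions an ELB.

```yaml
# Hypothetical multi-AZ controller subnet configuration.
controllerSubnets:
  - availabilityZone: us-east-1a
    instanceCIDR: 10.0.20.0/24
    routeTableId: rtb-public-cccc   # hosts the single controller instance
  - availabilityZone: us-east-1b
    instanceCIDR: 10.0.21.0/24
    routeTableId: rtb-public-dddd   # stays empty; exists so ELBs can span this AZ
```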

As a consequence of this, also allow a controller subnet to be
generated separately from the worker subnets. Then we can control its
route table separately, as we may wish to place the controller in a
public subnet to access the dashboard.
Chris Knowles added 5 commits October 7, 2016 18:39
By allowing the definition of multiple controller subnets, we simplify
config.go and also allow this scenario:
- workers in private subnets
- controller in public subnet
- ELBs created by k8s annotations will still be multi-AZ enabled
`{{.ClusterName}}` does not work inside loops
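The last commit message refers to a Go template pitfall: inside a `{{range}}` block the dot is rebound to the loop element, so `{{.ClusterName}}` no longer resolves. The commit doesn't show the change itself; the usual idiom is to reach the root context via `$`, sketched below with hypothetical field names (`Subnets`, `InstanceCIDR`) rather than the actual kube-aws stack template.

```yaml
{{/* Illustrative fragment only. Inside range, the dot refers to the loop
     element, so plain .ClusterName no longer resolves there; $ still
     refers to the root template context. */}}
{{range $i, $subnet := .Subnets}}
Subnet{{$i}}:
  Type: AWS::EC2::Subnet
  Properties:
    CidrBlock: "{{$subnet.InstanceCIDR}}"
    Tags:
      - Key: KubernetesCluster
        Value: "{{$.ClusterName}}"  # root context reached via $
{{end}}
```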
@aaronlevy
Contributor

Hi @c-knowles, sorry for the super-delayed response.

The kube-aws work has moved to its own top-level repository here: https://github.com/coreos/kube-aws

If this is still something you want to have merged, please re-open this PR under the new repo.
