Ability to use existing route tables for controller and workers #716
We want to be able to deploy our k8s clusters into an existing VPC so that we can share that part of our infrastructure, deploy some legacy services into it, and connect to RDS privately. We would also like to put our workers in a private subnet behind a NAT Gateway.
Old solution - reuse existing subnets
I started off by allowing the use of existing subnets in #671, but after using that for a while I've found that managing more than one cluster inside the same VPC, alongside various other processes in those same subnets, is a bit problematic, or at least cumbersome.
New solution - reuse existing route tables
As a result, I looked for an alternative that would alleviate these issues. I decided to allow the use of existing route tables instead, which solves the subnet problems: the subnets are still created by kube-aws and remain k8s-specific, and CloudFormation will automatically check for conflicts with any existing subnet CIDR.
Reusing existing route tables turned out to be quite easy to add, and it means nodes can be placed either privately or publicly depending on the requirements - it would even allow some worker nodes to sit in publicly accessible subnets while others do not, if someone really wished. I also added the ability for kube-aws to generate a separate set of controller subnets, so the controller's route table can be controlled independently as well. The main use case for that is simple access to the dashboard, although some may prefer to place the controller in a private subnet and use a bastion host and/or VPN. I'm not 100% familiar with how dashboard connectivity works yet, so there may be a better way to handle that part, but having control over the subnet certainly helps some use cases.
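To make this concrete, here's roughly the shape of cluster.yaml I have in mind. The top-level vpcId key already exists for reusing a VPC; the per-subnet routeTableId and the controllerSubnets section are only a sketch of what this change adds, so treat the exact key names and values as illustrative rather than final:

```yaml
# Reuse an existing VPC; kube-aws still creates the k8s-specific subnets.
vpcId: vpc-xxxxxxxx

# Worker subnets, each attached to an existing route table
# (per-subnet routeTableId is illustrative of this change).
subnets:
  - availabilityZone: eu-west-1a
    instanceCIDR: "10.0.10.0/24"
    routeTableId: rtb-aaaaaaaa
  - availabilityZone: eu-west-1b
    instanceCIDR: "10.0.11.0/24"
    routeTableId: rtb-bbbbbbbb

# Separate controller subnets so the controller's routing can be
# managed independently of the workers'.
controllerSubnets:
  - availabilityZone: eu-west-1a
    instanceCIDR: "10.0.20.0/24"
    routeTableId: rtb-cccccccc
```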
Workers in private subnet
One slight kink I found is that if we configure the workers with private subnets and the controller with a public subnet, then k8s-controlled ELBs only get enabled in a single AZ, which is obviously not what we want. This is because k8s picks the subnets for ELBs based on subnet tags and IGW attachment, which I believe is also being worked on.
I know there's additional work coming for this in #340, but in the meantime I've worked around it by allowing multiple controller subnets. In this use case they would be linked to public route tables and hence allow multi-AZ ELBs; the controller subnets in the AZs without the single controller would simply stay empty. Perhaps this calls for a rename of `subnets` and `controllerSubnets` to public/private, and then marking each as enabled for workers/controllers. There is some similar information on this use case in kubernetes/kops#428.
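As a rough sketch of that workaround (again, the key names are illustrative): the workers sit behind NAT route tables, while a controller subnet exists in every AZ and is attached to a public route table so k8s can create multi-AZ ELBs - only one of those subnets actually hosts the controller, the rest stay empty:

```yaml
# Workers in private subnets, routed through a NAT gateway.
subnets:
  - availabilityZone: eu-west-1a
    instanceCIDR: "10.0.10.0/24"
    routeTableId: rtb-11111111   # 0.0.0.0/0 -> NAT gateway
  - availabilityZone: eu-west-1b
    instanceCIDR: "10.0.11.0/24"
    routeTableId: rtb-22222222   # 0.0.0.0/0 -> NAT gateway

# One controller subnet per AZ, all attached to a public (IGW) route table
# so k8s-managed ELBs can be enabled in every AZ. Only the first subnet
# hosts the controller; the others remain empty.
controllerSubnets:
  - availabilityZone: eu-west-1a
    instanceCIDR: "10.0.20.0/24"
    routeTableId: rtb-33333333   # 0.0.0.0/0 -> internet gateway
  - availabilityZone: eu-west-1b
    instanceCIDR: "10.0.21.0/24"
    routeTableId: rtb-33333333
```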