Create etcd and workers in private subnets, controllers in public subnet #169
Conversation
Current coverage is 67.86% (diff: 85.10%)
@neoandroid Thanks for the PR! Could I help you with anything towards #152 (comment)?
I'm still working on it. I plan to add the proposed changes before the end of this week.
Force-pushed from f654e6d to 3b0562a.
Force-pushed from 3b0562a to e8fab45.
I finally merged the work from @icereval in this PR. I liked the approach and made a couple of modifications to have the same capabilities that I had with the previous PoC.
The rest of the changes I've added are small fixes: a typo in cluster.yml and a wrong reference to the Internet Gateway in the cfn template.
@neoandroid I also prefer to be able to create workers on a private subnet that are not in a node pool.
@neoandroid Could you please post a working cluster.yaml? I have tried the
But it does not create any private subnets. Probably I am missing something?
Here you have a full example:
At first glance I see that you have repeated the same CIDRs for the worker, controller and etcd subnets. That is probably why it's failing.
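For illustration only (not the elided example above), a minimal sketch with each subnet group given its own non-overlapping CIDR inside the VPC range, using the `privateSubnets` layout that appears later in this thread:

```yaml
vpcCIDR: 10.0.0.0/16
subnets:                      # public subnets
  - availabilityZone: us-east-1a
    instanceCIDR: 10.0.0.0/24
controller:
  privateSubnets:             # hypothetical, non-overlapping ranges per group
    - availabilityZone: us-east-1a
      instanceCIDR: 10.0.1.0/24
etcd:
  privateSubnets:
    - availabilityZone: us-east-1a
      instanceCIDR: 10.0.2.0/24
worker:
  privateSubnets:
    - availabilityZone: us-east-1a
      instanceCIDR: 10.0.3.0/24
```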
Tried differing the subnets too; it does not help:
It creates three public subnets, but no private subnets. No time to dig into the sources now, maybe in the new year :-)
Can you elaborate on the use case for needing more than three public subnets?
Also, over at the KOPS project a contributor created an awesome diagram of their desired network (mostly matching) that might make it easier to discuss.
For us, we require only 3 private and 3 public subnets (1 private + 1 public per AZ). Therefore we would prefer to spread etcd, workers and controllers over 3 private subnets.
@icereval What I meant is that three public subnets were created as expected, but no private subnets. I have updated the comment to make it more obvious.
Hmmm, I just checked out the latest code and I'm seeing the correct results. Let me get my config together and share it.
I'm running an environment setup script via CloudFormation which generates the base VPC/DNS/Bastion/VPC Peering and so on, but other than that it's a typical setup. With this configuration running:

cluster.yaml

```yaml
clusterName: cluster-1
externalDNSName: cluster-1.dev.example.internal
amiId: ami-69cbda7e # copy of coreos stable for encrypted root volume
createRecordSet: true
hostedZoneId: ZWB2UKRTYXHY0 # private hosted zone dev.example.internal
region: us-east-1
keyName: ...
kmsKeyArn: ...
vpcId: vpc-76c11e10
internetGatewayId: igw-adce55ca
vpcCIDR: 10.0.0.0/16
tlsCADurationDays: 3650
tlsCertDurationDays: 1825 # modified (5 years)
useCalico: true
# limited public subnet cidr needed since the bastion will grant
# access into this space via vpc peering
subnets:
  - availabilityZone: us-east-1a
    instanceCIDR: 10.0.0.0/26
  - availabilityZone: us-east-1b
    instanceCIDR: 10.0.0.64/26
  - availabilityZone: us-east-1d
    instanceCIDR: 10.0.0.128/26
controller:
  autoScalingGroup:
    minSize: 3
    maxSize: 4
    rollingUpdateMinInstancesInService: 1
  privateSubnets:
    - availabilityZone: us-east-1a
      instanceCIDR: 10.0.1.0/26
    - availabilityZone: us-east-1b
      instanceCIDR: 10.0.1.64/26
    - availabilityZone: us-east-1d
      instanceCIDR: 10.0.1.128/26
etcdCount: 3
etcd:
  privateSubnets:
    - availabilityZone: us-east-1a
      instanceCIDR: 10.0.2.0/26
    - availabilityZone: us-east-1b
      instanceCIDR: 10.0.2.64/26
    - availabilityZone: us-east-1d
      instanceCIDR: 10.0.2.128/26
worker:
  autoScalingGroup:
    minSize: 3
    maxSize: 4
    rollingUpdateMinInstancesInService: 1
  privateSubnets:
    - availabilityZone: us-east-1a
      instanceCIDR: 10.0.3.0/24
    - availabilityZone: us-east-1b
      instanceCIDR: 10.0.4.0/24
    - availabilityZone: us-east-1d
      instanceCIDR: 10.0.5.0/24
```

[Screenshots attached: Subnets, Route Tables, NAT Gateways, EC2 Instances]
```
}

{{if (or .WorkerTopologyPrivate .Etcd.TopologyPrivate .Controller.TopologyPrivate)}}
```
Should also add .Worker.TopologyPrivate for the case where workers are private yet somehow etcd & controllers are public and not using node pools.
Done. Nice catch! 👍
Hmmm, not sure what is going on. I have more-or-less copied your cluster.yaml now (except that I don't create the VPC before running
No NATs are created, no private subnets.
@mumoshu Trying to think of a use case, perhaps using one of the controllers as the bastion host to cut down on instance cost? Not that I especially want to support that case! We could write the minimum cluster instance count into the docs we agreed on above. Do you want me to write up some of that?

I'm good with whatever you decide :) I'd prefer the work to be merged rather than not. How about we stick with what is here but agree we may later decide to simplify (and we may have to drop some compatibility to keep it simple)?
@c-knowles I'll decide to go with the simpler one: Even with that:
could both be achieved. Am I correct?
@mumoshu I think the second point is less achievable that way. It's not a huge issue, more a logical one of how we structure things in general. Sorry I hijacked this PR a little bit! Happy to chat outside of here too. For example, if sets of both private and public subnets are specified, which "wins"? Currently I believe it's the private subnets. We can document it, but ultimately we will still have multiple places where one piece of infrastructure can be specified (in this case the worker subnets).
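To make the ambiguity concrete, a hypothetical fragment where worker subnets could be read from two places; per the comment above, the role-level private subnets are assumed to take precedence:

```yaml
subnets:                      # top-level public subnets
  - availabilityZone: us-east-1a
    instanceCIDR: 10.0.0.0/24
worker:
  privateSubnets:             # assumed to "win" for worker placement
    - availabilityZone: us-east-1a
      instanceCIDR: 10.0.3.0/24
```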
My assumption is that:
Also note that:
Did I cover all the general concerns you might have considered? @c-knowles Are you in the kubernetes slack? 😃
Hi @neoandroid, sorry for the long silence, but I'm still wanting this to be merged! In addition to #169 (comment),
@mumoshu great, yes, I agree we can simplify slightly later, especially if we can leave behind some legacy prior to v1. I am indeed on the k8s slack, username without the hyphen.
```go
ControllerCount               int    `yaml:"controllerCount,omitempty"`
ControllerCreateTimeout       string `yaml:"controllerCreateTimeout,omitempty"`
ControllerInstanceType        string `yaml:"controllerInstanceType,omitempty"`
ControllerLoadBalancerPrivate string `yaml:"controllerLoadBalancerPrivate,omitempty"`
```
Should this be a bool?
You're right. Good catch 👍
Hi @neoandroid, I'm going to merge several PRs from now on.
rename privateSubnets property to subnets in cluster.yml file
merge public and private subnet variables into one generic variable
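A rough before/after sketch of the rename described in these commits; the subnet entries themselves are assumed unchanged, and the exact final schema is whatever the merged code defines:

```yaml
# before: role-specific private subnets
worker:
  privateSubnets:
    - availabilityZone: us-east-1a
      instanceCIDR: 10.0.3.0/24

# after the rename: the same entries under the generic `subnets` key
worker:
  subnets:
    - availabilityZone: us-east-1a
      instanceCIDR: 10.0.3.0/24
```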
Finally I've applied all the changes discussed in the comments; I hope you like them. I've done some e2e tests in my environment and haven't found bugs using the different supported scenarios, but I encourage you to make your own verifications. Since the parameter in
@neoandroid Thanks for all the hard work! (1) Node pools fail while creating when the topology isn't private, because of the missing import:
(2) and the missing subnets for the worker ASG:
They were not that hard to fix. Please give me a day or two until it gets merged 👍
Merged 😄
This looks fab @neoandroid! But does it remove the ability to deploy to private subnets without creating a NAT? We already provide non-AWS NAT for our VPCs, and I was using the add-security-group and add-route-table
No, that scenario is not possible. I didn't expect that some users were not using AWS NAT gateways. If you need that, you'll probably have to create your own subnets and NATs and use the ability to reuse preexisting infrastructure (#227).
Well, it was possible before, so this is a substantial breaking change for me if I can't work around it 😞. I would be moving from the fully automated cluster deployment
I guess the workaround is to patch out the AWS NATs from the
The other possible problem with mandating that people create AWS NATs is that AWS imposes a limit of only 5 NAT gateways per AZ. If every cluster deployed with
Anyway, I don't want to rain on your parade 🎈 🎉 I'll do my best to make this new approach work for me too. There's always a way!
I guess it's still possible for kube-aws to create the subnets but link them to existing route tables? That's how I'm using kube-aws, at least on an older version. This does remind me a little of @redbaron's comments; perhaps we could discuss as a group how we can allow extensions so that not every customisation is embedded deeply in kube-aws.
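For context, a sketch of the kind of reuse @c-knowles describes, assuming the pre-existing top-level `routeTableId` option (the IDs are placeholders; per-subnet route tables are what #284 discusses):

```yaml
vpcId: vpc-xxxxxxxx          # existing VPC
routeTableId: rtb-xxxxxxxx   # existing route table, e.g. routing through your own NAT
subnets:
  - availabilityZone: us-east-1a
    instanceCIDR: 10.0.0.0/24
```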
@c-knowles I believe I've already implemented support for an existing route table per subnet in #284 (comment).
Let me also add that, as it isn't a general-purpose AWS provisioning tool,
Also, I'll try to support your existing use-case(s) as much as possible, so please don't hesitate to leave your comments in #284!
@whereisaaron For your information, you can request a limit increase for NAT gateways; 5 is the default limit, not a hard limit.
This PR belongs to issue #152.
The issue describes the following use-cases supported by the PR: