Add support for customization of network topologies #284
Conversation
…rker/controller/etcd nodes and a controller loadbalancer
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #284      +/-   ##
==========================================
- Coverage   57.53%   55.08%     -2.45%
==========================================
  Files           6        6
  Lines        1288     1387       +99
==========================================
+ Hits          741      764       +23
- Misses        449      505       +56
- Partials       98      118       +20
```

Continue to review full report at Codecov.
May I propose a slightly different structure, which matches the underlying CF structure more closely?
Therefore the cluster yaml would look something like this:
@redbaron thanks as always for your comments!
```yaml
subnets:
- availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.1.0/24"
  private: true
- availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.2.0/24"
  private: true
- availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.3.0/24"
- availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.4.0/24"
controller:
  private: true
  loadBalancer:
    private: false
etcd:
  private: true
worker:
  private: false # i.e. we can omit this
```
That is what I am proposing to change. You always refer to a subnet by name, therefore it is explicit what is created where, and it also perfectly matches what happens in CF, so it simplifies the kube-aws code: it doesn't need to translate one representation into another. By listing subnets explicitly you can have controller and etcd in the following setup pick subnets which might even be different from each other but both private. Or some subnets may already exist and some not.
If we mandate that every subnet, whether it is created by kube-aws or already exists and is referenced by its ID, then the rest of
Thanks again! Regarding the latter part of your comment, I guess I already know the convenience and efficiency of utilizing cfn parameters after reading through @icereval's great work, and I have already introduced a few essences of his work via this PR. I'm now open to a future PR to achieve what you suggested, hence I've suggested opening another issue dedicated to that 👍 I'll try to sort out all the issues eventually, but it is definitely impossible to manage them alone. Merging everyone's work and desires so they fit together, while adding tests and refactoring to keep it maintainable (at least for me), has already taken several days!
Yes, that is why I made it possible to reference subnets by name.
My concern about introducing names and not making them keys in a hashmap is that it can naturally lead to duplicates, and you then need yet another piece of Go code to validate and report an error if there are dupes. If you make all subnet names keys in a hashmap, you get uniqueness for free; the data structure itself enforces the desired properties. Apart from that it all looks fine by me given the current state of the code. In the background I am preparing a kube-aws overhaul which takes a different approach to the way it translates cluster.yaml into a CF template. I'll present it as a separate branch for discussion and collaboration once it achieves feature parity.
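For illustration only, a map-keyed variant of the subnet syntax (hypothetical; not what this PR implements) might look like this, with subnet names as keys so duplicates are impossible by construction:

```yaml
# Hypothetical sketch: subnets as a map keyed by name, so the data structure
# itself guarantees name uniqueness without extra validation code.
subnets:
  private1:
    availabilityZone: ap-northeast-1a
    instanceCIDR: "10.0.1.0/24"
    private: true
  public1:
    availabilityZone: ap-northeast-1a
    instanceCIDR: "10.0.3.0/24"
controller:
  subnets: [private1]
```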
Certainly.
Definitely. Hmm, this is a bit of a hard decision for me to make at this stage alone. Please let me leave comments requesting confirmation in the related github issues. The

```yaml
controller:
  private: true
etcd:
  private: true
```

syntax here is for demo purposes; we can safely switch to the way of utilizing hashes as you've suggested. Btw,
Thanks for the kind words! I'm also looking forward to seeing your overhaul work!
I've updated the description of this PR to cover the overview of all the changes and improvements made.
This all looks cool and flexible @mumoshu, thanks for your work! For my use cases it is considerably more flexibility than I need. The one capability here I'd definitely find useful, though, and will use if this goes ahead, is the ability to deploy private clusters with only the API and Ingress/Service load balancers public.
@whereisaaron you can already achieve that scenario since #169 was merged.
Hi @whereisaaron, thanks for the request!

```yaml
subnets:
- name: private1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.1.0/24"
  private: true
- name: private2
  availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.2.0/24"
  private: true
- name: public1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.3.0/24"
- name: public2
  availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.4.0/24"
controller:
  subnets:
  - name: private1
  - name: private2
  loadBalancer:
    private: false
    # Setting `loadBalancer.private` to false leads kube-aws to make it an `internet-facing` lb while choosing public subnets for the lb like
    # subnets:
    # - name: public1
    # - name: public2
etcd:
  subnets:
  - name: private1
  - name: private2
worker:
  subnets:
  - name: private1
  - name: private2
```

As can be seen in the above example, explicitly choosing private subnets for all the
@whereisaaron Regarding your comment #169 (comment), would it be enough for your use-case to add an option to disable creation of a NAT gateway and a route to it like:

```yaml
- name: private1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.1.0/24"
  private: true
  natGateway:
    create: false
- name: private2
  availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.2.0/24"
  private: true
  natGateway:
    create: false
```

?
@whereisaaron Ah, or you'll even need to do something like:

```yaml
- name: private1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.1.0/24"
  private: true
  natGateway:
    create: false
  # this, in combination with `natGateway.create=false`, implies that the route table already has a route to an existing NAT gateway
  routeTable:
    id: <id of your route table mentioned in https://github.com/coreos/kube-aws/pull/169#issuecomment-275895730>
- name: private2
  availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.2.0/24"
  private: true
  natGateway:
    create: false
  # this, in combination with `natGateway.create=false`, implies that the route table already has a route to an existing NAT gateway
  routeTable:
    id: <id of your route table mentioned in https://github.com/coreos/kube-aws/pull/169#issuecomment-275895730>
# public subnets used for the external elb for api
- name: public1
  # snip
- name: public2
  # snip
```

Update: Instead of:

```yaml
natGateway:
  create: false
```

I made it:

```yaml
natGateway:
  preconfigured: true
```

so that the zero-value (=false) can be the default value for the setting (which is now
Replying to my own comment above.
This should not be allowed because then we don't have a clear way to differentiate public/private route tables, as implied by what @whereisaaron mentioned in #284 (comment).
@mumoshu as per your comment, I think the above looks fine. My understanding of this change is that it adds more flexibility to subnet customisation. Currently, using a slightly older 0.9.2 release, it is possible to change the route table associated with subnets generated by kube-aws and therefore possible to make them private/public. I'm not sure if #169 removed that and it's re-added here, or whether this just adds more flexibility; either way is fine. I'm still a bit concerned about struggling to ensure all these different options are well tested, but I don't know how we can deal with that other than cutting the options and perhaps providing some network CF examples out of the box.
Thanks for looking into this @mumoshu. It is no problem to specify the route table per subnet, so if
So I think the only missing piece to retain the previous capabilities is to have some way to not have any NAT instances created, since they serve no purpose if you have specified
One question about
I think the proposed
@c-knowles creating private subnets with a specified route table is still possible in v0.9.3-rc.5 and it looks like this new way of doing things still retains or re-adds that capability via
@c-knowles as soon as there is an alpha/beta/rc release I'll get on with testing private subnet deployments and report back.
@c-knowles @whereisaaron Thanks again! The intention of this PR is to provide enough flexibility to e.g. create worker/etcd/controller nodes in private/public subnet(s) and create an api load balancer in private/public subnet(s), plus reusing existing AWS resources including:

where necessary, but I began to believe that I had unintentionally broken your use-cases (= everything in private subnets with a smaller config?), not in functionality but in configuration syntax. You had been using a configuration like the below to put all the nodes and lbs into private subnets, i.e. nothing, including nodes and lbs, in public subnets, right?

```yaml
routeTableId: rtb-for-privatesubnet-withpreconfigurednat
mapPublicIPs: false
controllerLoadBalancerPrivate: true
```

If so, I'm considering improving this PR so that the above is translated to something like:

```yaml
subnets:
- private: true
  natGateway:
    preconfigured: true
  routeTable:
    id: rtb-for-privatesubnet-withpreconfigurednat
controller:
  loadBalancer:
    private: true
```

// However, to be honest, such a translation could soon be deprecated and removed if it turns out to be too hard to maintain. Sorry! Does it make sense to you two?
…preconfigured NAT gateway See kubernetes-retired#284 (comment) for more context
@whereisaaron I've implemented
Yes, I think so too and it is addressed in my last commit 242783d, which adds
@c-knowles @whereisaaron I've updated my comment #284 (comment) several times. Please re-read it if you came from email notifications from github 😃
My assumption is that one or more subnet(s) may rely on an NGW created by kube-aws and others may rely on an NGW preconfigured for existing subnet(s). Providing the flexibility to customize the ngw per subnet also supports it. However, I believe it is true in some aspects that a setting like
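A sketch of that assumption using the syntax from this PR (subnet names and the route table id are illustrative):

```yaml
subnets:
- name: private1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.1.0/24"
  private: true
  # no natGateway settings: kube-aws creates a NAT gateway for this subnet
- name: private2
  availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.2.0/24"
  private: true
  natGateway:
    preconfigured: true  # reuse a NAT gateway already routed from the existing route table
  routeTable:
    id: rtb-0123abcd     # illustrative route table id
```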
@whereisaaron @mumoshu thanks, it seems my use case is still accommodated. Specifically, the use case is detailed in #44. Summary:
So in turn we end up with the solution for kube-aws to create the subnets and link to existing route tables which are easily defined in a shared VPC (some private with NAT and some public with IGW). YAML setup looks a bit like this:

```yaml
subnets:
- availabilityZone: eu-west-1a
  instanceCIDR: "10.0.1.0/24"
  # private route table to NAT AZ a
  routeTableId: rtb-ID1
- availabilityZone: eu-west-1b
  instanceCIDR: "10.0.2.0/24"
  # private route table to NAT AZ b
  routeTableId: rtb-ID2
- availabilityZone: eu-west-1c
  instanceCIDR: "10.0.3.0/24"
  # private route table to NAT AZ c
  routeTableId: rtb-ID3
controllerIP: 10.0.4.50
controllerSubnets:
- availabilityZone: eu-west-1a
  instanceCIDR: "10.0.4.0/24"
  # public route table
  routeTableId: rtb-ID4
- availabilityZone: eu-west-1b
  instanceCIDR: "10.0.5.0/24"
  # public route table
  routeTableId: rtb-ID4
- availabilityZone: eu-west-1c
  instanceCIDR: "10.0.6.0/24"
  # public route table
  routeTableId: rtb-ID4
```

We have a similar setup with node pools as well; the config is essentially the same. @mumoshu I'm happy for you to break config compatibility to keep kube-aws simple, it's trivial to move config around and we could even write a small version migration script.
@c-knowles @whereisaaron To sync up, let me confirm that you've never tried to do something like:

```yaml
routeTableId: rtb-for-privatesubnet-withpreconfigurednat
mapPublicIPs: false
# !!!!!
controllerLoadBalancerPrivate: false
```

which is IMHO not recommended because it shouldn't work if you configured your existing AWS resources properly. My reasoning here is that:

Anyways, fyi, this use-case is intended to be newly supported via this PR with configuration like explained in #284 (comment).
@c-knowles Thanks as always!
@c-knowles @whereisaaron And thanks for the kind words about following breaking changes in configuration syntax! It will definitely accelerate the development of kube-aws.
@mumoshu for your first comment, I'm not using private controller ELBs at all right now, although if that's the preferred way once I add in a bastion host then I will use it (happy to follow best practice). For your second comment, I believe there was previously a bug in kube-aws which meant
Wow, thanks for the quick work to add

This is my current successful use case under 0.9.3. Same settings for the cluster and all node-pools.

I deploy everything with private subnets and then get k8s to create public ELBs for Services and Ingress Controllers that should be exposed. Kubernetes specifically supports private subnet clusters with public ELBs with the

When Kubernetes is creating a public ELB, it collects all the public subnets associated with the cluster (tagged

I keep the controller API ELB private, but you can certainly have controllers in private subnets but with a public API ELB. You (or

For
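For reference, a sketch of a standard Kubernetes Service manifest that requests an ELB (names and ports are illustrative); on AWS, a `type: LoadBalancer` Service without an internal-ELB annotation gets an internet-facing ELB placed in the cluster's tagged public subnets:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller   # illustrative name
spec:
  type: LoadBalancer         # Kubernetes provisions an internet-facing ELB by default on AWS
  selector:
    app: ingress-controller  # illustrative selector
  ports:
  - port: 443
    targetPort: 8443
```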
…ublicIPs, routeTableId to let kube-aws create all the nodes and the api load-balancer inside either private or public subnets
@whereisaaron @c-knowles To make your migration path extra clear, I've added an additional commit with more validations and backward-compatibility with the older config syntax, based on my idea in #284 (comment). Specifically, I guess your (potential) use-case is covered by this test: 80885cb#diff-4fd4a4a9a3755708c6909f7f824f5754R207 // Please forgive me if I'm being too nervous here, but I really don't like to break existing use-cases of yours!
Thanks @mumoshu, as I read it that does cover the two use cases and seems pretty clean.
Thank you very much for keeping these use cases in play. In the use case where the user has no VPC and kube-aws is creating one, then creating the NAT instances for
Thanks as always @whereisaaron 🙇 For example, I'm wondering if we could deprecate
Once again,
I don't intend to break that use-case!
Oh yeah, I totally misunderstood! Sorry @mumoshu! 💩
….preconfigured completely and induce these from other settings like before
@whereisaaron FYI, for simplicity of cluster.yaml, I've dropped the
Although important parts are covered by
This is an implementation of kubernetes-retired#238 from @redbaron, especially what I've described in my comment there kubernetes-retired#238 (comment), and an answer to the request "**3. Node pools should be more tightly integrated**" of kubernetes-retired#271 from @Sasso . I believe this also achieves what was requested by @andrejvanderzee in kubernetes-retired#176 (comment).

After applying this change:

1. All the `kube-aws node-pools` sub-commands are dropped
2. You can now bring up a main cluster and one or more node pools at once with `kube-aws up`
3. You can now update all the sub-clusters including a main cluster and node pool(s) by running `kube-aws update`
4. You can now destroy all the AWS resources spanning main and node pools at once with `kube-aws destroy`
5. You can configure node pools by defining a `worker.nodePools` array in `cluster.yaml`
6. `workerCount` is dropped. Please migrate to `worker.nodePools[].count`
7. `node-pools/` and hence `node-pools/<node pool name>` directories, `cluster.yaml`, `stack-template.json`, `user-data/cloud-config-worker` for each node pool are dropped.
8. A typical local file tree would now look like:
   - `cluster.yaml`
   - `stack-templates/` (generated on `kube-aws render`)
     - `root.json.tmpl`
     - `control-plane.json.tmpl`
     - `node-pool.json.tmpl`
   - `userdata/`
     - `cloud-config-worker`
     - `cloud-config-controller`
     - `cloud-config-etcd`
   - `credentials/`
     - `*.pem` (generated on `kube-aws render`)
     - `*.pem.enc` (generated on `kube-aws validate` or `kube-aws up`)
   - `exported/` (generated on `kube-aws up --export --s3-uri <s3uri>`)
     - `stacks/`
       - `control-plane/`
         - `stack.json`
         - `user-data-controller`
       - `<node pool name = stack name>/`
         - `stack.json`
         - `user-data-worker`
9. A typical object tree in S3 would now look like:
   - `<bucket and directory from s3URI>`/
     - `kube-aws/`
       - `clusters/`
         - `<cluster name>`/
           - `exported/`
             - `stacks/`
               - `control-plane/`
                 - `stack.json`
                 - `cloud-config-controller`
               - `<node pool name = stack name>`/
                 - `stack.json`

Implementation details:

Under the hood, kube-aws utilizes CloudFormation nested stacks to delegate management of multiple stacks as a whole. kube-aws now creates 1 root stack and nested stacks including 1 main (currently named "control plane") stack and 0 or more node pool stacks. kube-aws operates on S3 to upload all the assets required by all the stacks (root, main, node pools) and then on CloudFormation to create/update/destroy a root stack.
An example `cluster.yaml` I've been using to test this looks like:

```yaml
clusterName: <your cluster name>
externalDNSName: <your external dns name>
hostedZoneId: <your hosted zone id>
keyName: <your key name>
kmsKeyArn: <your kms key arn>
region: ap-northeast-1
createRecordSet: true
experimental:
  waitSignal:
    enabled: true
subnets:
- name: private1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.1.0/24"
  private: true
- name: private2
  availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.2.0/24"
  private: true
- name: public1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.3.0/24"
- name: public2
  availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.4.0/24"
controller:
  subnets:
  - name: public1
  - name: public2
  loadBalancer:
    private: false
etcd:
  subnets:
  - name: public1
  - name: public2
worker:
  nodePools:
  - name: pool1
    subnets:
    - name: asgPublic1a
  - name: pool2
    subnets: # former `worker.subnets` introduced in v0.9.4-rc.1 via kubernetes-retired#284
    - name: asgPublic1c
    instanceType: "c4.large" # former `workerInstanceType` in the top-level
    count: 2 # former `workerCount` in the top-level
    rootVolumeSize: ...
    rootVolumeType: ...
    rootVolumeIOPs: ...
    autoScalingGroup:
      minSize: 0
      maxSize: 10
    waitSignal:
      enabled: true
      maxBatchSize: 2
  - name: spotFleetPublic1a
    subnets:
    - name: public1
    spotFleet:
      targetCapacity: 1
      unitRootVolumeSize: 50
      unitRootvolumeIOPs: 100
      rootVolumeType: gp2
      spotPrice: 0.06
      launchSpecifications:
      - spotPrice: 0.12
        weightedCapacity: 2
        instanceType: m4.xlarge
        rootVolumeType: io1
        rootVolumeIOPs: 200
        rootVolumeSize: 100
```
…rk-topology Add support for customization of network topologies
This change allows us to define private and public subnets in the top-level of `cluster.yaml` to be chosen for worker/controller/etcd nodes and a controller loadbalancer. Thanks to @neoandroid and @Sasso for submitting pull requests #169 and #227 respectively, which have been the basis of this feature.
Let me also add that several resources including subnets, NAT gateways, and route tables can now be reused by specifying `id` or `idFromStackOutput`. Thanks to @icereval for his PR #195 for first introducing the idea of `type Identifier` to add support for existing AWS resources in a universal way.

A minimum config utilizing this feature would look like:
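A minimal sketch consistent with the description below (subnet names and CIDRs are illustrative):

```yaml
subnets:
- name: private1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.1.0/24"
  private: true
- name: private2
  availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.2.0/24"
  private: true
- name: public1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.3.0/24"
- name: public2
  availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.4.0/24"
controller:
  subnets:
  - name: private1
  - name: private2
  loadBalancer:
    private: false
etcd:
  subnets:
  - name: private1
  - name: private2
worker:
  subnets:
  - name: public1
  - name: public2
```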
This will create 2 private subnets and 2 public subnets. Private ones are used by etcd and controller nodes and the public ones are used by worker nodes and the controller loadbalancer.
It is flexible enough to differentiate private subnets between etcd and controllers:
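For example (a sketch; subnet names and CIDRs are illustrative), etcd and controllers can each be pinned to their own private subnets:

```yaml
subnets:
- name: controllerPrivate1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.1.0/24"
  private: true
- name: etcdPrivate1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.2.0/24"
  private: true
controller:
  subnets:
  - name: controllerPrivate1
etcd:
  subnets:
  - name: etcdPrivate1
```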
It even supports using existing subnets by specifying subnet IDs:
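A sketch of that (placing `id` directly on a subnet entry follows the `Identifier` convention above and is an assumption here; the IDs are illustrative):

```yaml
subnets:
- name: existingPrivate1
  availabilityZone: ap-northeast-1a
  # reuse an existing subnet instead of creating one
  id: subnet-0123abcd
- name: existingPrivate2
  availabilityZone: ap-northeast-1c
  id: subnet-4567cdef
```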
Or importing subnet IDs from other cfn stack(s):
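And a sketch using `idFromStackOutput` (the stack output names are illustrative assumptions):

```yaml
subnets:
- name: importedPrivate1
  availabilityZone: ap-northeast-1a
  # import the subnet id exported as an output of another CloudFormation stack
  idFromStackOutput: my-network-stack-PrivateSubnet1
- name: importedPrivate2
  availabilityZone: ap-northeast-1c
  idFromStackOutput: my-network-stack-PrivateSubnet2
```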