
ec2: allow adding subnet groups/AZs after initial VPC deployment #28644

Open

rix0rrr opened this issue Jan 10, 2024 · 3 comments
Labels
  • @aws-cdk/aws-ec2 (Related to Amazon Elastic Compute Cloud)
  • effort/medium (Medium work item – several days of effort)
  • feature-request (A feature should be added or improved)
  • p1

Comments


rix0rrr commented Jan 10, 2024

Describe the feature

With our current Vpc construct, it's easy to get going. What's not obvious, however, is that once you deploy any machines into your VPC, it becomes impossible to change the layout, even additively.

The reason is the way default CIDR allocations are done: whenever any groups or AZs are added, the CIDRs of existing subnets change. Changing a subnet's CIDR requires replacing the subnet, which is not possible as long as any machines are attached to it. This means that changing the Vpc layout is a very disruptive operation that requires tearing down all infrastructure.

There are two decisions that cause the current behavior:

  • The CIDRs assigned to the subnets automatically adjust to fill up the available space. That is done so that as many machines as possible can fit into a particular subnet, but it also means that adding any new subnet requires shrinking all other subnets to make room in the IP space.
  • CIDR range assignment is stateless: it depends only on the current state of the code, not on history, and CIDR ranges are tightly packed into the available IP space. That means that even if subnet sizes were fixed, newly added subnets may need to be inserted in between existing subnet ranges, which shifts some of them.
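To make the interaction of these two decisions concrete, here is a toy model of a tightly-packed, stateless allocator (a hypothetical sketch, not the actual CDK implementation; the base address and group-major ordering are assumptions): the VPC range is divided evenly among all subnets, so adding an AZ renumbers, and can resize, existing subnets.

```typescript
// Toy model of the current behavior (hypothetical, not real CDK internals):
// divide a 10.0.0.0/<vpcBits> VPC evenly among numGroups * numAzs subnets,
// ordered group-major, packing them tightly from the start of the range.
function allocateCidrs(vpcBits: number, numGroups: number, numAzs: number): string[] {
  const count = numGroups * numAzs;
  // shrink subnets until the power-of-two-rounded count fits the VPC range
  const subnetBits = vpcBits + Math.ceil(Math.log2(count));
  const size = 2 ** (32 - subnetBits);
  const base = 0x0a000000; // 10.0.0.0, an assumed VPC base address
  return Array.from({ length: count }, (_, i) => {
    const a = base + i * size;
    return `${a >>> 24}.${(a >>> 16) & 255}.${(a >>> 8) & 255}.${a & 255}/${subnetBits}`;
  });
}

// Group 1, AZ a sits at subnet index group * numAzs + az:
const before = allocateCidrs(16, 3, 3)[1 * 3 + 0]; // "10.0.48.0/20"
const after = allocateCidrs(16, 3, 4)[1 * 4 + 0];  // "10.0.64.0/20"
// Adding AZ d moved group 1's first subnet: a replacement is required.
```

Note that even though the subnet size happens to stay /20 in this example, the stateless renumbering alone already forces the replacement.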

These problems are prominent in IPv4, where the available IP space is (comparatively) small and must be used efficiently. That's not to say they couldn't be lifted for IPv4 as well, but that's where the motivation for the current design comes from.

In IPv6-land though, IP space is effectively infinite, and we can do whatever.

Use Case

Schematically, this diagram shows the current problem and the proposed solution. The solution can be implemented both for IPv4 and IPv6, but should definitely be considered once IPv6-only VPCs become a thing.

In this use case, a customer has a VPC with 3 Subnet Groups spanning 3 AZs (a, b, c) and they want to add a 4th AZ (d). The same problem would occur in a slightly different shape when a new subnet group would be added instead. You can see the sizes of all subnets shifting when the change is made, necessitating a replacement that will be impossible in practice:

[Diagram: subnet CIDRs before and after adding AZ d; every subnet's size and position shifts, forcing replacements]

Proposed Solution

The proposed solution is:

  • Make the Subnet CIDRs have a default size, instead of automatically spanning the entire available address space. This allows adding subnet groups at the end. For IPv6 this is already true (subnet CIDRs are /64 by default); for IPv4 it is not. A reasonable default size for IPv4 would probably be /21, allowing 2046 machines per subnet, but I will leave it to someone with more experience of real-life workloads to opine on this (*).
  • Don't pack the subnets tightly into the available IP space, but leave a gap to go up to 4 AZs if necessary. AZs are unlikely to need to be added later (groups are much more likely to change), and 4 seems like a nice power-of-2 upper bound that most users will not need to exceed.

Of course, all of these sizes should be configurable.
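A minimal sketch of the proposed scheme (a hypothetical helper, not CDK code; the 10.0.0.0/16 base and the defaults are assumptions): with fixed subnet sizes and a per-group reservation of 4 AZ slots, a subnet's CIDR becomes a pure function of (group, az) and never moves.

```typescript
// Sketch of the proposal (hypothetical helper, not CDK code): every subnet
// has a fixed prefix length and every group reserves slots for 4 AZs, so
// adding an AZ or appending a group never shifts an existing CIDR.
function fixedCidr(group: number, az: number, reservedAzs = 4, subnetBits = 21): string {
  const size = 2 ** (32 - subnetBits);    // /21 -> 2048 addresses (2046 usable)
  const index = group * reservedAzs + az; // slot is stable once assigned
  const a = 0x0a000000 + index * size;    // assumed 10.0.0.0/16 base
  return `${a >>> 24}.${(a >>> 16) & 255}.${(a >>> 8) & 255}.${a & 255}/${subnetBits}`;
}

// Adding AZ d to group 0 uses the reserved gap; group 1 is untouched:
const newSubnet = fixedCidr(0, 3); // "10.0.24.0/21"
const unchanged = fixedCidr(1, 0); // "10.0.32.0/21", same before and after
```

New groups simply append after the last reserved block, so both kinds of additions are purely additive.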

(*) The default VPC is created with a /16 CIDR, leaving /18 room per AZ (if we assume 4 AZs), leaving /20 (4094 machines) per subnet if we assume max 4 subnets/groups per AZ, or /21 (2046 machines) if we assume max 8 subnets/groups per AZ. The downside would be that we would waste more than 70% of the available IP space in the default setup (only (2^11 × 9) / 2^16 ≈ 28% is effectively used). We could also do things like say that Public subnets by default have a smaller size than either Private or Isolated subnets.
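The footnote's utilization figure can be checked directly (9 subnets = 3 groups × 3 AZs in the default layout):

```typescript
// 9 subnets of /21 (2^11 addresses each) inside a /16 VPC (2^16 addresses)
const used = 9 * 2 ** 11;        // 18432 addresses allocated to subnets
const total = 2 ** 16;           // 65536 addresses in the /16
const wasted = 1 - used / total; // 0.71875, i.e. ~72% of the space unused
```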

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change


@rix0rrr rix0rrr added feature-request A feature should be added or improved. needs-triage This issue or PR still needs to be triaged. labels Jan 10, 2024
@github-actions github-actions bot added the @aws-cdk/aws-ec2 Related to Amazon Elastic Compute Cloud label Jan 10, 2024

rix0rrr commented Jan 10, 2024

Duplicate: #28369

@pahud pahud added p1 effort/medium Medium work item – several days of effort and removed needs-triage This issue or PR still needs to be triaged. labels Jan 10, 2024
@NetDevAutomate

VPC CIDR allocations can't be changed after deployment; however, a secondary CIDR can be added for expansion.

If there is any potential requirement for subnets to be added later (e.g. a new AZ), then it's recommended to use an explicit subnet strategy with room for growth, ideally using a summarisable range.
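As a sketch of that explicit strategy using today's `aws-cdk-lib` API (the CIDR, names, and sizes below are illustrative choices, not recommendations from this thread): fixed `cidrMask` values pin the subnet sizes, and a `reserved: true` entry holds address space for growth without creating subnets yet.

```typescript
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";

// Illustrative stack: explicit subnet sizes plus a reserved group that
// keeps address space free so later additions don't shift existing CIDRs.
class NetworkStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);
    new ec2.Vpc(this, "Vpc", {
      ipAddresses: ec2.IpAddresses.cidr("10.0.0.0/16"), // summarisable range
      maxAzs: 3,
      subnetConfiguration: [
        { name: "public", subnetType: ec2.SubnetType.PUBLIC, cidrMask: 24 },
        { name: "app", subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS, cidrMask: 21 },
        // Space is allocated but no subnets are created; dropping
        // `reserved` later grows into the gap instead of renumbering.
        { name: "spare", subnetType: ec2.SubnetType.PRIVATE_ISOLATED, cidrMask: 21, reserved: true },
      ],
    });
  }
}
```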

@nbaillie
Contributor

A few areas of comment below:

Size
Often a /24 is used for a normal app subnet with some EC2 instances or Lambdas landed there. I guess this is partly conservative use of IPs and partly the easy math it provides for working out the spaces, perhaps a bit of a hangover from traditional network building.

Accounts hosting containers for EKS or other platforms often go bigger, as the IPs are (or can be) allocated to the containers, so a /21 (2046 IPs) would perhaps be reasonable here.

/21 would be enough in most cases, but could be seen as either wasteful or not enough. I can imagine that many users would want to be able to supply a CIDR for the size, which again moves away from a predictable implementation, or perhaps some T-shirt sizes: small /28, medium /24, large /21 or /20.

Expandable vs Fixed
Mostly when a VPC/subnet is created there is a rough idea of what it will contain. Fixed sizes could work quite well in this case, and the trade-off in economy of use may be desirable to allow for additions later.

Economy of use
In thinking about the idea of planning ahead for AZs as described above, I wonder if there could be some regional awareness built in, such that space is reserved for the number of AZs the region actually has. For example, eu-west-2 only has 3 AZs, so since we can know this we could just reserve for 3. Obviously this raises the question of what happens if the number of AZs ever increases. Overall, could we have the user decide what proportion of the space they want to reserve for expansion, and then use a fixed pattern on top of that?
