latest capa-controller is not working properly with our cluster-eks and spawns an infinite number of VPCs due to some error #3048
The mess on the test AWS account is cleaned up. I don't have an effort estimate yet for CAPA, given how its
The bug is very basic: CAPA creates a VPC, then fails to store it for whatever reason. On the next reconciliation, CAPA pretends it doesn't know anything about the VPC (which it really doesn't, without making AWS requests) and happily creates a new one. In our case of repeated errors, this happens again and again. I implemented a basic unit test for EKS, plus VPC creation idempotence (which applies to EC2- and EKS-based clusters alike). Upstream PR coming up soon.
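To make the failure mode concrete, here is a minimal sketch (not CAPA's actual code) of the idempotence idea: before creating a VPC, the reconciler first looks for an existing one owned by the cluster, e.g. by tag, so that a failed status write does not produce a duplicate VPC on the next reconciliation. The `fakeEC2` type, the tag key, and all function names are hypothetical stand-ins for illustration only.

```go
package main

import "fmt"

// VPC is a minimal stand-in for an AWS VPC with its tags.
type VPC struct {
	ID   string
	Tags map[string]string
}

// fakeEC2 stands in for the AWS EC2 API in this sketch.
type fakeEC2 struct {
	vpcs    []VPC
	created int
}

// findVPCByTag returns the first VPC carrying the given tag, or nil.
func (c *fakeEC2) findVPCByTag(key, value string) *VPC {
	for i := range c.vpcs {
		if c.vpcs[i].Tags[key] == value {
			return &c.vpcs[i]
		}
	}
	return nil
}

// createVPC creates a new VPC tagged as owned by the cluster.
func (c *fakeEC2) createVPC(clusterName string) *VPC {
	c.created++
	v := VPC{
		ID:   fmt.Sprintf("vpc-%d", c.created),
		Tags: map[string]string{"cluster/" + clusterName: "owned"},
	}
	c.vpcs = append(c.vpcs, v)
	return &c.vpcs[len(c.vpcs)-1]
}

// reconcileVPC returns the cluster's VPC, reusing an existing tagged VPC
// instead of unconditionally creating a new one (the buggy behavior was
// to skip the lookup when the stored state was lost).
func reconcileVPC(c *fakeEC2, clusterName string) *VPC {
	if existing := c.findVPCByTag("cluster/"+clusterName, "owned"); existing != nil {
		return existing
	}
	return c.createVPC(clusterName)
}

func main() {
	ec2 := &fakeEC2{}
	// Simulate repeated reconciliations after the stored state was lost:
	// with the tag lookup in place, only one VPC is ever created.
	for i := 0; i < 3; i++ {
		reconcileVPC(ec2, "test-cluster")
	}
	fmt.Println("VPCs created:", ec2.created)
}
```

Running the loop three times creates exactly one VPC; without the `findVPCByTag` lookup, every reconciliation would create a fresh one, which is the runaway behavior described in this issue.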
Unfortunately, my pending PR kubernetes-sigs/cluster-api-provider-aws#4637 blocks opening the follow-up which adds the EKS unit test. However, I managed to extract the fix for the blatant bug into a small, separate PR kubernetes-sigs/cluster-api-provider-aws#4723, so we can proceed nevertheless and fix the terrifying issue.
Image pull errors were fixed via giantswarm/cluster-api-provider-aws-app#211, so this should now be done (except for developer-reserved MCs where Flux is paused, currently
The storage error will be solved via #2870. |
During network creation it will fail, and that causes the process to restart and create a VPC again.
Probably related: