🐛 Update ASG - do not set desired value for machinepool which have externally managed replicas #4654
Conversation
Hi @calvix. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label.

I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
```diff
@@ -284,35 +284,35 @@ func (s *Service) DeleteASG(name string) error {
 }

 // UpdateASG will update the ASG of a service.
-func (s *Service) UpdateASG(scope *scope.MachinePoolScope) error {
-	subnetIDs, err := s.SubnetIDs(scope)
+func (s *Service) UpdateASG(machinePoolScope *scope.MachinePoolScope) error {
```
The renaming of the variable from `scope` to `machinePoolScope` is necessary to be able to access the package `scope` and the function `scope.ReplicasExternallyManaged`.
/ok-to-test
Change looks good per se; however, the existing implementation of the annotation check function seems strange.

CAPA requires a specific value, which is defined like this:

But the CAPI "contract" for this annotation says:

So the check should be patched as follows (untested):

And we should delete the const.

This is out of scope for the PR, but I quickly wrote a test and fixed the above, so that will be pushed to this PR branch soon by my colleague @calvix. We should also have someone from another company, with expertise in that code, review this, please.
@calvix Please squash into a single commit with a reasonable commit message (e.g. the PR title). Also, please edit the PR description to include a release note (see the PR template if you need the formatting).

This now contains my commits, so I shouldn't review.
/lgtm

/test ?
@richardcase: The following commands are available to trigger required jobs:
The following commands are available to trigger optional jobs:
Use
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test pull-cluster-api-provider-aws-e2e

/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: richardcase

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/kind bug
What this PR does / why we need it:
For machine pools that have externally managed replicas (for example, managed by `cluster-autoscaler`), cluster-api-provider-aws should not set the desired replica count; otherwise the controller may try to update an ASG with a desired value that is out of the min/max bounds.

Example: the machine pool is set to `min: 1` and `max: 3` instances and is currently running with 2 desired replicas. The user wants to update to the new values `min: 5` and `max: 10`. The update operation fails with an AWS API error.

Updating `MachinePool.spec.replicas` will not help: because of the externally managed replicas configuration, the field is reconciled from the current number of nodes/replicas, so it will be set back to the current value of `2`.
Release note: