Updating MachinePool, AWSMachinePool, and KubeadmConfig resources does not trigger an ASG instanceRefresh #4071
This issue is currently awaiting triage. If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
We have experienced the same behavior using CAPA.
In the current code, this seems intentional. There are several comments, including in the machine pool controller, explicitly saying that changed user data should lead to a new launch template version, but not an instance refresh, and I confirmed it currently works that way. #2354 explicitly turned off instance refresh when only the user data changed. That fix ensures the bootstrap token gets updated by automatically creating a new launch template version. If we changed the controller to trigger an instance refresh, we would most likely see nodes get rolled over every time the bootstrap token is refreshed.
What we could do is diff the user data without the token value, and trigger an instance refresh only if there is an effective change.
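The suggestion above can be sketched in Go. This is a minimal illustration, not CAPA's actual implementation: the function name and the token regex are assumptions, and real user data would need the token located in its kubeadm join configuration rather than by pattern matching.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"regexp"
)

// bootstrapTokenRe matches the kubeadm bootstrap token format
// ([a-z0-9]{6}.[a-z0-9]{16}). Illustrative only.
var bootstrapTokenRe = regexp.MustCompile(`[a-z0-9]{6}\.[a-z0-9]{16}`)

// userDataDigestIgnoringToken hashes user data with any bootstrap token
// replaced by a fixed placeholder, so a token rotation alone does not
// change the digest — only an "effective" change does.
func userDataDigestIgnoringToken(userData []byte) string {
	normalized := bootstrapTokenRe.ReplaceAll(userData, []byte("<token>"))
	sum := sha256.Sum256(normalized)
	return hex.EncodeToString(sum[:])
}

func main() {
	oldData := []byte("kubeadm join --token abcdef.0123456789abcdef ...")
	newData := []byte("kubeadm join --token ghijkl.aaaaaaaaaaaaaaaa ...")
	// Same user data except for the rotated token: no refresh needed.
	fmt.Println(userDataDigestIgnoringToken(oldData) == userDataDigestIgnoringToken(newData)) // true
}
```

The controller would then compare the stored digest of the previous launch template version with the digest of the newly rendered user data, and start an instance refresh only when they differ.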
Since we agreed in chat to try the above suggestion, I gave it a shot. A draft implementation is ready, and I will open a PR once it's tested in a real scenario.
Sounds good.
My fix is working, but still any change to the
I wonder what the backstory around this is and why it's working the way it does. I'm pretty sure there are some valid reasons somewhere in there... :D
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
PR will be merged soon to fix this.
/lifecycle stale
/lifecycle rotten
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen |
@AndiDog: Reopened this issue.
This was actually fixed via #4619.
/close
@AndiDog: Closing this issue.
/kind bug
What steps did you take and what happened:
Updating the MachinePool, AWSMachinePool, or KubeadmConfig resources does not trigger an instance refresh on the AWS ASG.

What did you expect to happen:
I expected that with `awsmachinepool.spec.refreshPreferences.disable` left at its default value of `false`, changes to the MachinePool, AWSMachinePool, and KubeadmConfig would automatically trigger an instance refresh to rotate the nodes in the pool onto the updated settings. Currently, I must manually start instance refreshes using the AWS console or CLI in order for instances to be replaced when my specs change.
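For reference, the relevant part of an AWSMachinePool spec looks roughly like this (a hypothetical manifest; the resource name and sizing values are illustrative, and `refreshPreferences` fields are from CAPA's AWSMachinePool API):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachinePool
metadata:
  name: my-pool            # hypothetical name
spec:
  minSize: 1
  maxSize: 3
  refreshPreferences:
    disable: false          # the default; instance refresh is expected to run
    strategy: Rolling
    minHealthyPercentage: 90
```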
These are the MachinePool, AWSMachinePool, and KubeadmConfig I'm working with. I have not set `disable: true` under `refreshPreferences` in the AWSMachinePool spec.

This is the current state of the `runcmd` in the LaunchTemplate in AWS, version 1566.

I apply a change to add a command to the KubeadmConfig, such as this.
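The kind of change described above could look like the following (a hypothetical manifest; the commands shown end up in the rendered cloud-init user data):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfig
metadata:
  name: my-pool-config               # hypothetical name
spec:
  preKubeadmCommands:
    - echo "existing command"
    - echo "newly added command"     # the edit that should roll the nodes
  joinConfiguration:
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: aws
```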
I see that the LaunchTemplate has a new version, and wait for 10 minutes.
I notice that there is no active instance refresh started for my ASG in the instance refresh tab, and my instances are still using the old LaunchTemplate version.
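The manual workaround mentioned above can be done with the AWS CLI (the ASG name here is hypothetical; in practice it is the ASG that CAPA created for the machine pool):

```shell
# Check whether any instance refresh is in progress for the ASG.
aws autoscaling describe-instance-refreshes \
  --auto-scaling-group-name my-machinepool-asg

# Manually start an instance refresh so instances are replaced and
# pick up the latest launch template version.
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name my-machinepool-asg \
  --preferences '{"MinHealthyPercentage": 90, "InstanceWarmup": 300}'
```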
Environment:
- Kubernetes version (use `kubectl version`):