Authorization token for ignition server does not refresh #169
Comments
I wonder if there is any reason for creating the separate secret in the first place?

The secret is duplicated because, when a new VM is provisioned, the kubevirtmachine controller copies the bootstrap data into it. It looks like the solution should be not to skip the secret-creation code when the duplicated secret already exists, but to update the secret whenever the secret data has changed since the last time.

I think that's the simplest solution for now. Long term, I want us to get out of the business of creating that separate secret for the VM/VMIs entirely. That requires deprecating the ssh and capk user support for the cloud-config types, though.
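A minimal sketch of that "update instead of skip" approach, assuming controller-runtime; `ensureUserDataSecret` and the `userdata` key are hypothetical names for illustration, not the provider's actual code. `controllerutil.CreateOrUpdate` creates the object when it is missing and otherwise issues an Update only when the mutate function changed it:

```go
package userdata

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// ensureUserDataSecret sketches the suggested fix: instead of skipping
// creation when the duplicated secret already exists, reconcile its
// contents against the source bootstrap data on every pass.
func ensureUserDataSecret(ctx context.Context, c client.Client, namespace, name string, bootstrapData []byte) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: name},
	}
	_, err := controllerutil.CreateOrUpdate(ctx, c, secret, func() error {
		if secret.Data == nil {
			secret.Data = map[string][]byte{}
		}
		// Overwrite the stored userdata with the latest rotated token
		// instead of leaving a stale copy in place.
		secret.Data["userdata"] = bootstrapData
		return nil
	})
	return err
}
```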
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

/remove-lifecycle stale
After 24 hours the userdata secret for a VM contains an outdated value. In order not to return a stale copy of a VM secret refetch it and store. Fixes: kubernetes-sigs#169 Signed-off-by: Roy Golan <rgolan@redhat.com>
After 24 hours the userdata secret for a VM contains an outdated value. We must refetch and update the cloud init secret with the latest data, otherwise a node will fail the ignition stage. Fixes: kubernetes-sigs#169 Signed-off-by: Roy Golan <rgolan@redhat.com>
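As a hedged sketch of what "refetch and update" could look like in the kubevirtmachine controller, assuming the Cluster API types: `resolveBootstrapData` is a hypothetical helper, not the PR's actual code; the `value` key is the Cluster API convention for bootstrap data secrets:

```go
package userdata

import (
	"context"
	"errors"

	corev1 "k8s.io/api/core/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// resolveBootstrapData refetches the source bootstrap secret on every
// call, so a token rotated by the nodepool controller is always picked
// up instead of a stale, previously stored copy.
func resolveBootstrapData(ctx context.Context, c client.Client, machine *clusterv1.Machine) ([]byte, error) {
	if machine.Spec.Bootstrap.DataSecretName == nil {
		return nil, errors.New("machine has no bootstrap data secret")
	}
	secret := &corev1.Secret{}
	key := client.ObjectKey{
		Namespace: machine.Namespace,
		Name:      *machine.Spec.Bootstrap.DataSecretName,
	}
	if err := c.Get(ctx, key, secret); err != nil {
		return nil, err
	}
	data, ok := secret.Data["value"]
	if !ok {
		return nil, errors.New("bootstrap secret is missing the value key")
	}
	return data, nil
}
```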
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
When a new VM is provisioned, it pulls its authorization token from the `user-data-{NAMESPACE}-{HASH}-userdata` secret. This secret is created by the kubevirtmachine controller; the data it contains comes from the `user-data-{NAMESPACE}-{HASH}` secret, which is created by the nodepool controller in HyperShift and referenced in the Machine object under the field `spec.bootstrap.dataSecretName`. The nodepool controller rotates the authorization token every 24 hours (inside the `user-data-{NAMESPACE}-{HASH}` secret), but there is no mechanism for rotating the `user-data-{NAMESPACE}-{HASH}-userdata` secret, which means new VMs cannot pull their ignition file. The only way to work around it is to delete the `user-data-{NAMESPACE}-{HASH}-userdata` secret.

/kind bug
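For illustration, the manual workaround (deleting the stale `-userdata` secret so the controller recreates it) could also be done programmatically; `deleteStaleUserDataSecret` is a hypothetical helper, equivalent to running `kubectl delete secret`:

```go
package userdata

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// deleteStaleUserDataSecret performs the workaround from the issue:
// remove the derived userdata secret so the kubevirtmachine controller
// recreates it from the freshly rotated source secret.
// Equivalent to: kubectl delete secret <name> -n <namespace>
func deleteStaleUserDataSecret(ctx context.Context, c client.Client, namespace, name string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: name},
	}
	if err := c.Delete(ctx, secret); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return nil
}
```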