Handle status.storedVersions migration during upgrades #6691
Based on the Slack conversation we had, this is a small summary of the points raised:
It seems the way we could tackle this as of now is:
This is not necessarily true. The apiserver only upgrades the storage version on write, so if an object never gets written, it won't get upgraded. Ideally we would therefore automatically write the object during reconcile if its storage version is outdated, even if nothing changed. If I'm not mistaken, controllers reconcile all of their objects after start, so that would automatically convert everything. I think we discussed this at KubeCon during the CAPI meeting.

One way to get around this would be to have some annotation/label on objects whose storage version is outdated. The idea was to talk to sig-apimachinery and suggest adding such an annotation. But I think we don't even need their support: conversion happens through conversion webhooks, which are implemented by the operators themselves. It should be possible to add an annotation during migration, when the target version is not the storage version. This could potentially be added at the controller-runtime level, solving this problem for lots of operators at once.

Operators probably always reconcile the latest apiVersion of an object, right? So if an object is already stored at the latest apiVersion, no conversion should occur when the controller gets it. If conversion webhooks add a label during conversion, the controller would notice that the object was piped through conversion, and therefore isn't using the latest apiVersion for storage.
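As a rough illustration of that idea, a controller-runtime style spoke-to-hub conversion could mark converted objects roughly like the sketch below. This is only a sketch of the proposal, not existing cluster-api or controller-runtime code; the FooMachine type, the v1beta1 hub package path, and the annotation key are hypothetical placeholders.

```go
// Hypothetical spoke (v1alpha4) -> hub (v1beta1) conversion that marks the
// converted object, so a controller can notice the object was piped through
// conversion and rewrite it at the current storage version.
package v1alpha4

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/conversion"

	v1beta1 "example.com/provider/api/v1beta1" // assumed hub (storage) version
)

// Hypothetical annotation key; not an existing cluster-api annotation.
const outdatedStorageVersionAnnotation = "example.com/outdated-storage-version"

// FooMachine is a hypothetical spoke type; spec/status fields are elided,
// as is the ConvertFrom method required by conversion.Convertible.
type FooMachine struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
}

// ConvertTo converts this FooMachine to the v1beta1 hub version.
func (src *FooMachine) ConvertTo(dstRaw conversion.Hub) error {
	dst := dstRaw.(*v1beta1.FooMachine)

	// ... regular field-by-field conversion would go here ...
	dst.ObjectMeta = src.ObjectMeta

	// Mark the object so the controller knows it was read from an older
	// stored version and should be written back to trigger storage migration.
	if dst.Annotations == nil {
		dst.Annotations = map[string]string{}
	}
	dst.Annotations[outdatedStorageVersionAnnotation] = "true"
	return nil
}
```

With something like this, a controller fetching the latest apiVersion would see the annotation and could issue an otherwise empty update, which rewrites the object at the current storage version.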
Hey folks, I think it's a clean and straightforward solution for our issue and won't require a multi-month/year effort to get something implemented upstream in Kubernetes. I looked over the KEPs and it seems like an upstream solution will take a while (and would then probably not be available in old versions of Kubernetes).
I would highly prefer a straightforward solution in clusterctl over modifying webhooks and controllers in all providers to write all CRs on upgrades (this would also not work for cert-manager). If over time there is an upstream solution in either controller-runtime or Kubernetes that works with all controller-runtime/Kubernetes versions we need it for, we can of course deprecate our mechanism as it's then no longer necessary.
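For reference, a minimal sketch of what such a clusterctl-side mechanism could look like, assuming all custom resources of the CRD have already been rewritten at the new storage version. This is not the actual clusterctl implementation; the function names are illustrative.

```go
// Sketch: drop superseded entries from a CRD's status.storedVersions during
// an upgrade, once a storage migration has completed.
package main

import (
	"context"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// currentStorageVersion returns the version currently marked storage: true.
func currentStorageVersion(crd *apiextensionsv1.CustomResourceDefinition) string {
	for _, v := range crd.Spec.Versions {
		if v.Storage {
			return v.Name
		}
	}
	return ""
}

// pruneStoredVersions resets status.storedVersions to only the current
// storage version. This is only safe after a storage migration, i.e. after
// every object of this CRD has been read and written back at least once.
func pruneStoredVersions(ctx context.Context, c apiextensionsclient.Interface, name string) error {
	crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	storage := currentStorageVersion(crd)
	if storage == "" {
		return fmt.Errorf("CRD %s has no storage version", name)
	}
	crd.Status.StoredVersions = []string{storage}
	_, err = c.ApiextensionsV1().CustomResourceDefinitions().UpdateStatus(ctx, crd, metav1.UpdateOptions{})
	return err
}
```

The caveat is that status.storedVersions is exactly what prevents removing a version from the CRD while objects might still be stored at it, so pruning it without a prior storage migration would defeat that safeguard.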
User Story
As a developer/user/operator, I would like to do an upgrade from old API versions to newer ones in sequence (v1a3 => v1a4 => v1beta1) without manually removing stored versions and re-applying them in the middle of the upgrade process (v1a4 => v1beta1, since v1a4 would still have v1a3 in storedVersions in the status).
Detailed Description
We have a scenario where we do an upgrade from CAPM3 v1a4 => v1a5 => v1beta1 (CAPI v1a3 => v1a4 => v1beta1). The problem is that clusterctl fails (see logs below) to upgrade from v1a5 => v1b1 because of the status.storedVersions in the CRD, which includes the v1a4 API version in the status. The request would be: should clusterctl take care of removing the old version from status.storedVersions, or how is this handled in general?
Anything else you would like to add:
There has been a long discussion in the Slack thread, xref: https://kubernetes.slack.com/archives/C8TSNPY4T/p1655805341632159
/kind feature