Removing v1alpha3 & v1alpha4 apiVersions #8038
/cc
cc @kubernetes-sigs/cluster-api-release-team
I see no mention of how to ensure that old objects are "upgraded". One way to solve this is the storage version migrator (also linked from the ref below). Another option would be to ensure that clusterctl takes care of it as part of the upgrade.
Edit: Ref for cert-manager: https://cert-manager.io/docs/installation/upgrading/remove-deprecated-apis/#upgrading-existing-cert-manager-resources
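For context, the migration itself, whether done by the storage version migrator or by clusterctl, boils down to reading every object and writing it back unchanged so the API server re-stores it at the CRD's current storage version. A minimal hand-rolled sketch, using two Cluster API CRD names as examples (not an official procedure):

```sh
# Re-persist all objects of the given CRDs at the current storage version.
# Replacing an object unchanged forces the API server to rewrite it in
# etcd using the CRD's current storage version.
for crd in clusters.cluster.x-k8s.io machines.cluster.x-k8s.io; do
  kubectl get "$crd" --all-namespaces -o json | kubectl replace -f -
done
```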
@lentzi90 Thx for bringing this up. This is a very good point. But I think we have this covered, as we implemented roughly what cert-manager has in clusterctl: #6749 + #6691 (comment)
P.S. I think we also have test coverage for this with v1.6, as the clusterctl upgrade tests should cover it (added a sub-task to verify that)
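For readers landing here later: the user-facing entry point is the normal clusterctl upgrade flow; per #6749 the stored-object migration happens as part of it. A sketch of the usual invocation (the contract value depends on your installation):

```sh
# Plan and apply a provider upgrade; per #6749 clusterctl also migrates
# stored objects of the CRDs it manages during the apply step.
clusterctl upgrade plan
clusterctl upgrade apply --contract v1beta1
```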
Amazing! Thank you very much for this! Looks like it will even work automagically for the provider CRDs! 🎉
Due to the discussions at the CAPI SIG meeting: if v1.6 drops the code, migration won't work anymore. Note: from quickly reading the docs (especially https://main.cluster-api.sigs.k8s.io/contributing), I was not able to find any wording about this special case, or about whether we should bump the major version number.
AFAIK we follow semver, so I think https://semver.org/ is enough justification. +1 for bumping to v2.0
Do we actually follow semver, or are we following a Kubernetes-style versioning scheme? (They also treat their API versions totally independently of the Kubernetes version number.)
That's fair. I guess as long as we deprecate the APIs first, it's fair game to remove them as a breaking change in a later minor version. One suggestion: maybe instead of doing both v1alpha3 and v1alpha4 together, we should offset them by one minor release. The chances of users out there still using v1alpha3 are probably extremely low (and much lower than for v1alpha4, which itself is probably low too), so maybe we can do v1alpha3 as a first, "safer" step and learn from anything unexpected before we do v1alpha4?
This seems a good idea to me. It's probably a little bit more work, but it seems a good compromise IMO.
So v1alpha3 in v1.5 + v1.6 (+ potential delay depending on feedback), and then v1alpha4 in v1.7 + v1.8 (+ potential delay depending on feedback)? That will probably take us until at least mid-2024 :/
How about:
This way it's still the same timeline (everything done by v1.6), but we give ourselves more of a safe rollout instead of removing everything at once. Another option is to delay everything by one release (so stop serving v1alpha3 in v1.5 as planned), so everything is done by release v1.7. But I don't think we really need an extra 4 months to announce the removal of v1alpha3; it's been deprecated for 2 years (!) https://github.com/kubernetes-sigs/cluster-api/blob/main/CONTRIBUTING.md#support-and-guarantees. It's more a matter of whether we think we can get the necessary work done in time for v1.4 (by end of March).
Sounds good to me!
+1
Ah no, we need the fixes I mentioned above in v1.4.0 first, and can start removing in v1.5
Yeah - we'd need to delay the process by one release, but that sounds alright to me. It could all be done by v1.7 in this case?
I'm +1 for splitting the action: remove v1alpha3, then v1alpha4 in the next release. In general the plan here LGTM
Thx for the comments. I updated the issue description accordingly. Please take another look, thx :)
+1 from me!
I've opened a PR to document the removal plan: #8117. I will also ask for lazy consensus on the PR in today's office hours to "formalize" approval of the plan.
Added an item to the task list about communicating the requirement to restart the controller manager in order to get deletion working after updating to a version which doesn't serve v1alpha3.
/unassign
@killianmuldoon I think the last step should be unblocked now. IIRC you wanted to take that one over once you're back, right? (No rush!!)
Yup - I'll open a PR for this this week!
/reopen
Not 100% sure how we want to handle the release note thing (i.e. whether we want to keep the issue open, or whether there is a way to get this done now)
@sbueringer: Reopened this issue.
I think it's fine to close the issue; the release note should already be there in a basic form, i.e. with the breaking change. I think it's just important for the release team to be generally aware of it, so they can add the extra information as was done in previous releases. This issue is probably too complicated to track that, but I'm not sure if the release team generally tracks information for future release notes somewhere.
@cahillsf ^^ Sounds reasonable
/close
@sbueringer: Closing this issue.
Looks like we have to fix a link somewhere: https://github.com/sbueringer/cluster-api/actions/runs/7696026837/job/20970116673
I'll pick that up
Cool -- I will poke around at how this was communicated the last time we deprecated an API version. Please let me know if there is anything, aside from clearly indicating this change in the upcoming patch release, that the release team needs to do
I think something like the deprecation notice in https://github.com/kubernetes-sigs/cluster-api/releases/tag/v1.6.0 would be fine. Note: this would be for the minor release.
Ah right, right. Cool, sounds good, thanks Killian
Motivation
The last releases using the v1alpha3 (v0.3.x) and v1alpha4 (v0.4.x) apiVersions have both been EOL since ~April 2022 (about a year). I think it is now time to have a plan to remove v1alpha3 & v1alpha4.
Proposal
I would propose the following timeline:
- Set `served` to `false` for the apiVersions in all our CRDs.
- If users need more time to migrate, they can set `served` back to `true` on the CRD (see the sketch below).
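For illustration, a minimal sketch of that escape hatch, using the Cluster CRD as an example; the assumption that v1alpha3 is the first entry in `.spec.versions` is mine, so verify the index first (and note that clusterctl may revert the patch on the next upgrade):

```sh
# Show which apiVersions the Cluster CRD currently serves.
kubectl get crd clusters.cluster.x-k8s.io \
  -o jsonpath='{range .spec.versions[*]}{.name}{" served="}{.served}{"\n"}{end}'

# Re-enable v1alpha3, assuming it is the first entry in .spec.versions.
kubectl patch crd clusters.cluster.x-k8s.io --type=json \
  -p='[{"op": "replace", "path": "/spec/versions/0/served", "value": true}]'
```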
Tasks
Prerequisites: (have to be done for v1.4.0)
v1.4:
v1.5: (@killianmuldoon)
- v1alpha3: Stop serving via `kubebuilder:unservedversion` (⚠️ Stop serving v1alpha3 API types #8549)
- Ensure clusterctl upgrade warns users if the upgrade would break them (e.g. v0.3 => v1.x, a version without the v1alpha3 apiVersion)
- Document the requirement to restart the kube-controller-manager after updating to a CAPI version which no longer serves v1alpha3 (📖 Add note about v1alpha3 removal to book #8740); see the restart sketch after this list.
- Release notes should reflect the changes and make a note of the controller-manager restart requirement.
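For completeness, one way to do that restart, sketched under the assumption of a kubeadm-managed control plane with default manifest paths (managed offerings would need a provider-specific equivalent):

```sh
# kube-controller-manager runs as a static pod on kubeadm clusters, so
# restarting it means removing and restoring its manifest on each
# control-plane node; the kubelet then stops and recreates the pod.
sudo mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/
sleep 20  # give the kubelet time to tear the pod down
sudo mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
```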
v1.6: (@killianmuldoon)
- v1alpha4: Stop serving via `kubebuilder:unservedversion` (https://github.com/kubernetes-sigs/cluster-api/pull/8996/files)
v1.7: (@killianmuldoon)
Misc:
/kind cleanup
/area api