⚠️ Remove v1alpha3 API Version #8997
Conversation
/test pull-cluster-api-e2e-full-main
/retest
Just a high-level review. Looks good so far!
Force-pushed from 8715d73 to e95f656.
/test pull-cluster-api-e2e-full-main
Force-pushed from e95f656 to 040b73b.
/retest
/test pull-cluster-api-e2e-full-main
In the meeting today, Vince suggested adding a check to clusterctl that tells users when they try to install a provider that still uses v1alpha3 (just wanted this written down somewhere; I'm aware this is probably a separate issue/PR).
Today if we try to install versions on the old contract
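A minimal sketch of what such a clusterctl guard could look like. All of the names here (Provider, removedContracts, checkProviderContract) are hypothetical illustrations, not the real clusterctl types or API:

```go
package main

import "fmt"

// Provider describes the minimal information this sketch needs about a
// provider that is about to be installed. The type and its fields are
// assumptions for illustration only.
type Provider struct {
	Name     string
	Version  string
	Contract string // e.g. "v1alpha3", "v1alpha4", "v1beta1"
}

// removedContracts lists API contracts that this (hypothetical) clusterctl
// build no longer supports installing.
var removedContracts = map[string]bool{
	"v1alpha3": true,
}

// checkProviderContract returns an error if the provider targets a removed
// contract, so the user gets a clear message up front instead of a
// confusing failure later in the install.
func checkProviderContract(p Provider) error {
	if removedContracts[p.Contract] {
		return fmt.Errorf(
			"provider %s %s implements the %s contract, which is no longer supported; "+
				"please use a provider version implementing a newer contract",
			p.Name, p.Version, p.Contract)
	}
	return nil
}

func main() {
	p := Provider{Name: "infrastructure-docker", Version: "v0.3.0", Contract: "v1alpha3"}
	if err := checkProviderContract(p); err != nil {
		fmt.Println("install blocked:", err)
	}
}
```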
Just some smaller findings
```
@@ -352,6 +349,7 @@ func (u *providerUpgrader) getUpgradeComponents(ctx context.Context, provider Up

func (u *providerUpgrader) doUpgrade(ctx context.Context, upgradePlan *UpgradePlan, opts UpgradeOptions) error {
	// Check for multiple instances of the same provider if current contract is v1alpha3.
	// TODO(killianmuldoon) Assess if we can remove this piece of code.
```
Based on the link to the docs in CheckSingleProviderInstance, I assume this check should apply for >= v1alpha3, i.e. for all versions that we support upgrading today (?)
(xref: https://cluster-api.sigs.k8s.io/developer/architecture/controllers/support-multiple-instances.html)
Let's circle back with @fabriziopandini next week before we change anything here, though (and merge the PR independently of that).
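For reference, a hypothetical sketch of a contract-gated check along the lines discussed above, under the assumption that only contracts >= v1alpha3 need the single-instance rule. contractOrder, contractAtLeast, and checkSingleProviderInstance are stand-in names, not the actual cluster-api implementation:

```go
package main

import "fmt"

// contractOrder gives each known API contract a comparable rank. The list
// and the helper below are assumptions for illustration.
var contractOrder = map[string]int{
	"v1alpha3": 1,
	"v1alpha4": 2,
	"v1beta1":  3,
}

// contractAtLeast reports whether contract c is the same as or newer than min.
// Unknown contracts rank 0 and therefore never satisfy the check.
func contractAtLeast(c, min string) bool {
	return contractOrder[c] >= contractOrder[min]
}

// checkSingleProviderInstance is a stand-in for the check discussed in the
// review: multiple instances of the same provider are only a concern for
// contracts >= v1alpha3 (see the support-multiple-instances doc above).
func checkSingleProviderInstance(contract string, instances []string) error {
	if !contractAtLeast(contract, "v1alpha3") {
		return nil // older contracts are out of scope for this rule
	}
	if len(instances) > 1 {
		return fmt.Errorf("found %d instances of the same provider (%v); only one is supported",
			len(instances), instances)
	}
	return nil
}

func main() {
	err := checkSingleProviderInstance("v1beta1",
		[]string{"capd-system/docker", "capd-other/docker"})
	fmt.Println(err)
}
```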
Generally looks good to me.
Signed-off-by: killianmuldoon <kmuldoon@vmware.com>
Force-pushed from 040b73b to a64d0b1.
@killianmuldoon: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Thx! /lgtm
LGTM label has been added. Git tree hash: 454582da5fb54c731df06f57537c23b37240ee83
[APPROVALNOTIFIER] This PR is APPROVED.
This pull-request has been approved by: sbueringer. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@sbueringer WDYT about removing the hold on this one?
I didn't notice that there was a hold on it :D /hold cancel
/area api
Remove v1alpha3 versions from across the Cluster API codebase.
/hold
The intention isn't to remove this until some time after the 1.5 release. Just opened this PR to get a sense of the size and the problems it might surface.