Provision vsphere volume as per zone #72731
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Hi @skarthiksrinivas. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/sig vmware
@@ -104,6 +104,7 @@ func (util *VsphereDiskUtil) CreateVolume(v *vsphereVolumeProvisioner) (volSpec
		Name: name,
	}
	volumeOptions.Zone = selectedZone
So if I understand this correctly, this feature does not use the NodeAffinity field of PVs and hence does not use topology-aware provisioning?
Yes. Your understanding is correct. The scope of this fix is limited to honouring the allowedTopologies zones during volume provisioning.
So if I don't specify allowedTopology in my default storageClass and my nodes are spread across zones, then it will basically result in PVCs which cannot be used by pods. Previously, we blocked/errored out on volume provisioning altogether if the datastore being used is not shared with all VMs (#72497). How does this interact with that bug?
The issue in #72497 is that volume provisioning will fail if there is no shared datastore available across all Kubernetes node VMs. Now, with this change, by specifying allowedTopologies in the SC, volume provisioning can happen as long as there is a shared datastore available to the nodes within the zone.
There is no change in behaviour when allowedTopology is not specified. It will continue to work the same way as today, i.e. by looking for a datastore shared across all nodes, and succeeding or failing as the case may be.
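For illustration, here is a minimal sketch, built with the Kubernetes API types, of a StorageClass that confines vSphere provisioning to a single zone via allowedTopologies. The zone label key failure-domain.beta.kubernetes.io/zone and the zone name zone-a are assumptions for the example, not taken from this PR.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Assumed zone label key for this era of Kubernetes;
	// "zone-a" is a hypothetical zone name.
	const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

	sc := storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "vsphere-zoned"},
		Provisioner: "kubernetes.io/vsphere-volume",
		Parameters:  map[string]string{"diskformat": "thin"},
		// With allowedTopologies set, provisioning only needs a datastore
		// shared by the nodes in zone-a, not one shared across all zones.
		AllowedTopologies: []v1.TopologySelectorTerm{{
			MatchLabelExpressions: []v1.TopologySelectorLabelRequirement{{
				Key:    zoneLabel,
				Values: []string{"zone-a"},
			}},
		}},
	}
	fmt.Printf("%s provisions only in %v\n",
		sc.Name, sc.AllowedTopologies[0].MatchLabelExpressions[0].Values)
}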
	nm.zoneInfoLock.Lock()
	nm.zoneInfoMap[nodeName] = zone
	nm.zoneInfoLock.Unlock()
}
Should use defer pattern?
Makes sense. Done.
	nm.zoneInfoLock.Lock()
	delete(nm.zoneInfoMap, node.ObjectMeta.Name)
	nm.zoneInfoLock.Unlock()
}
Let's use defer if possible.
Done.
Still not fixed.
My bad. Fixed the code to use the defer pattern for all access to zoneInfoLock.
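For reference, a minimal sketch of the defer-based locking agreed on above; the struct is simplified here (the PR's map stores cloudprovider.Zone values, and the method names are illustrative):

package nodemanager

import "sync"

// nodeManager is a simplified stand-in for the PR's NodeManager.
type nodeManager struct {
	zoneInfoLock sync.Mutex
	zoneInfoMap  map[string]string // the real map stores cloudprovider.Zone values
}

// setNodeZone records a node's zone. The unlock is deferred immediately
// after the lock, so it runs on every return path, including panics.
func (nm *nodeManager) setNodeZone(nodeName, zone string) {
	nm.zoneInfoLock.Lock()
	defer nm.zoneInfoLock.Unlock()
	nm.zoneInfoMap[nodeName] = zone
}

// removeNodeZone drops a node's entry under the same lock discipline.
func (nm *nodeManager) removeNodeZone(nodeName string) {
	nm.zoneInfoLock.Lock()
	defer nm.zoneInfoLock.Unlock()
	delete(nm.zoneInfoMap, nodeName)
}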
Addressed comments.
Force-pushed from e00fc45 to 64c82ec.
@skarthiksrinivas: The following test failed; say /retest to rerun it.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: frapposelli, SandeepPissay, skarthiksrinivas. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@SandeepPissay will this be cherry-picked to an earlier k8s version?
@vladimirvivien I believe this will not be backported. /lgtm
Does anyone know if there are plans to implement the
Force-pushed from 64c82ec to a309d8a.
/lgtm
🎉
return "", err | ||
} | ||
|
||
if err != nil { |
Shouldn't this be ahead of the if block starting at line 1260?
Yes, that's correct. Thanks for pointing it out. The current sequence makes this check a no-op. Fixed it; I have created a PR for this: #74263.
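To spell out the fix, here is a minimal sketch of the corrected ordering; getSharedDatastores and pickDatastore are hypothetical stand-ins, not the provider's actual functions:

package vspherefix

import (
	"context"
	"errors"
)

// getSharedDatastores is a hypothetical stand-in for the call whose
// returned error was originally checked too late.
func getSharedDatastores(ctx context.Context) ([]string, error) {
	return nil, errors.New("not implemented")
}

// pickDatastore checks err immediately after the call that produced it.
// In the pre-fix ordering, the results were consumed before the
// `if err != nil` branch, which made that check a no-op.
func pickDatastore(ctx context.Context) (string, error) {
	datastores, err := getSharedDatastores(ctx)
	if err != nil {
		return "", err
	}
	if len(datastores) == 0 {
		return "", errors.New("no shared datastore found")
	}
	return datastores[0], nil
}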
for _, host := range hosts {
	var hostSystemMo mo.HostSystem
	host.Properties(ctx, host.Reference(), []string{"datastore"}, &hostSystemMo)
What's the effect of this call (I don't see an assignment)?
The last parameter in this method call is an out parameter which will be set; that's how hostSystemMo gets assigned. However, this method returns an error which is not processed in this step. I have fixed that now in the PR mentioned above.
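A sketch of the corrected call pattern, with govmomi types as used by the cloud provider; the wrapping function and its return shape are illustrative, not the PR's actual code:

package vspherefix

import (
	"context"

	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25/mo"
	"github.com/vmware/govmomi/vim25/types"
)

// datastoreRefs collects each host's datastore references. Properties
// populates hostSystemMo through its final out parameter, and its
// returned error is now handled instead of being dropped.
func datastoreRefs(ctx context.Context, hosts []*object.HostSystem) ([]types.ManagedObjectReference, error) {
	var refs []types.ManagedObjectReference
	for _, host := range hosts {
		var hostSystemMo mo.HostSystem
		if err := host.Properties(ctx, host.Reference(), []string{"datastore"}, &hostSystemMo); err != nil {
			return nil, err
		}
		refs = append(refs, hostSystemMo.Datastore...)
	}
	return refs, nil
}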
Yes. We do have that task in the pipeline.
What type of PR is this?
/kind bug
What this PR does / why we need it:
Currently, the vSphere cloud provider (VCP) insists on provisioning a volume only on a globally shared datastore. Hence, in a zoned environment, even in the presence of a datastore shared within a specific zone, volume provisioning can fail if that datastore is not shared across all the zones hosting Kubernetes nodes. This change fixes the issue by considering the zone information provided in allowedTopologies when selecting the datastore. If allowedTopologies is not provided, the current behaviour is retained as-is.
This PR addresses one part of issue #67703. The other part, attaching zone labels to the created volumes, is here: #72687
Which issue(s) this PR fixes:
Fixes #
Does this PR introduce a user-facing change?:
Yes