VSAN policy support for storage volume provisioning inside kubernetes #42974
Conversation
Hi @BaluDontu. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with "@k8s-bot ok to test". Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/approve
(Force-pushed from 42dd5c9 to f060456.)
@BaluDontu Does this PR require a release note? If it does, please add one.
@k8s-bot ok to test
examples/volumes/vsphere/README.md (Outdated)
The policy settings can be one or more of the following:

* hostFailuresToTolerate: represents NumberOfFailuresToTolerate
The rendering of this is not clear. Either quote it as fixed width or bold the text strings.
I will change it to italics.
examples/volumes/vsphere/README.md (Outdated)
```
[Download example](vsphere-volume-sc-vsancapabilities.yaml?raw=true)

Here a persistent volume will be created with the Virtual SAN capabilities: hostFailuresToTolerate set to 2, and cachereservation set to 20%, meaning 20% read cache is reserved for the storage object. The persistent volume will also be a zeroedthick disk.
`zeroedthick` vs zeroedthick
I will change it to italics.
@@ -66,6 +66,8 @@ const (
	ZeroedThickDiskType      = "zeroedThick"
	VolDir                   = "kubevols"
	RoundTripperDefaultCount = 3
	DummyVMName              = "kubernetes-helper-vm"
Is a race condition possible if 2 volumes are being created for the same VM name? Should the name be dynamically generated?
We are keeping this dummy VM around for all the create disk operations.
If 2 requests for createDisk come in at the same time, the VC would serialize these operations.
Does the second create disk succeed or fail, given that the create VM fails?
The first time, when the VM create fails, it will return, and Kubernetes will try again; on the retry it finds the existing dummy VM and creates the disk successfully.
Ok. I guess since the VM is not deleted this is true only for the first and second volume creation race.
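A minimal sketch of the get-or-create pattern being discussed, using govmomi. The function name, parameters, and datastore path are illustrative assumptions, not the PR's actual code:

```go
package vsphere

import (
	"context"

	"github.com/vmware/govmomi/find"
	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25/types"
)

const DummyVMName = "kubernetes-helper-vm"

// getOrCreateDummyVM returns the shared helper VM, creating it if absent.
// If two provisioning requests race on the create, vCenter serializes them:
// the second create fails with a duplicate-name error, Kubernetes retries
// the whole PVC provisioning, and the retry finds the VM created by the
// winner and proceeds.
func getOrCreateDummyVM(ctx context.Context, finder *find.Finder, folder *object.Folder,
	pool *object.ResourcePool, workingDir string) (*object.VirtualMachine, error) {

	vmPath := workingDir + DummyVMName
	if vm, err := finder.VirtualMachine(ctx, vmPath); err == nil {
		return vm, nil // the dummy VM already exists; reuse it
	}

	spec := types.VirtualMachineConfigSpec{
		Name:  DummyVMName,
		Files: &types.VirtualMachineFileInfo{VmPathName: "[datastore1]"}, // placeholder datastore
	}
	task, err := folder.CreateVM(ctx, spec, pool, nil)
	if err != nil {
		return nil, err
	}
	if err := task.Wait(ctx); err != nil {
		// Most likely lost the create race to a concurrent request;
		// returning the error lets Kubernetes retry, which then takes
		// the fast path above.
		return nil, err
	}
	return finder.VirtualMachine(ctx, vmPath)
}
```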
// 2. Add a new disk to the VM by performing VM reconfigure.
// 3. Detach the new disk from the dummy VM.
if volumeOptions.StorageProfileData != "" {
	// Check if the datastore is VSAN if any capability requirements are specified.
Moving the code into its own private function will keep the attach disk function readable.
I just wanted to keep it there, as it is easier to read all the steps when storageProfileData is not null.
The code is minimal as well.
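For context, steps 2 and 3 from the snippet above roughly correspond to a reconfigure-then-detach flow like the following sketch. The helper is hypothetical and not the PR's code; the spec shapes and the SPBM extension key follow govmomi's types:

```go
package vsphere

import (
	"context"

	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25/types"
)

// createDiskWithPolicy reconfigures the dummy VM to add a disk carrying the
// raw VSAN storage profile, then removes the device without deleting the
// file, so the VMDK is left behind as a standalone volume.
func createDiskWithPolicy(ctx context.Context, vm *object.VirtualMachine,
	disk *types.VirtualDisk, storageProfileData string) error {

	// Step 2: add the disk with the SPBM profile data attached, so vSphere
	// creates the backing VMDK with the VSAN policy applied.
	add := types.VirtualMachineConfigSpec{
		DeviceChange: []types.BaseVirtualDeviceConfigSpec{
			&types.VirtualDeviceConfigSpec{
				Operation:     types.VirtualDeviceConfigSpecOperationAdd,
				FileOperation: types.VirtualDeviceConfigSpecFileOperationCreate,
				Device:        disk,
				Profile: []types.BaseVirtualMachineProfileSpec{
					&types.VirtualMachineDefinedProfileSpec{
						ProfileData: &types.VirtualMachineProfileRawData{
							ExtensionKey: "com.vmware.vim.sps", // SPBM extension
							ObjectData:   storageProfileData,
						},
					},
				},
			},
		},
	}
	task, err := vm.Reconfigure(ctx, add)
	if err != nil {
		return err
	}
	if err := task.Wait(ctx); err != nil {
		return err
	}

	// Step 3: detach the disk. No FileOperation is set, so the VMDK file
	// stays on the datastore.
	remove := types.VirtualMachineConfigSpec{
		DeviceChange: []types.BaseVirtualDeviceConfigSpec{
			&types.VirtualDeviceConfigSpec{
				Operation: types.VirtualDeviceConfigSpecOperationRemove,
				Device:    disk,
			},
		},
	}
	if task, err = vm.Reconfigure(ctx, remove); err != nil {
		return err
	}
	return task.Wait(ctx)
}
```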
// Check if the DummyVM exists in the kubernetes cluster folder.
// The kubernetes cluster folder - vs.cfg.Global.WorkingDir - is where all the nodes in the kubernetes cluster are created.
vmRegex := vs.cfg.Global.WorkingDir + DummyVMName
dummyVM, err := f.VirtualMachine(ctx, vmRegex)
Check the serialization logic within the controller manager. It might be possible to have 2 requests try to create the same VM. Using a new VM name that is specific to a pod might work around the serialization problem, or we might have to check on error to see if the VM has been created.
If 2 requests come in to create the same VM at exactly the same time, the 1st operation would succeed and the 2nd would fail. If the 2nd fails, Kubernetes attempts to retry; it would eventually find the already created VM and create the volume.
All PVC creations are retried by Kubernetes. It treats these as recoverable errors.
This was clarified in our discussion, so this comment is not needed anymore.
Partial review done, will complete review once tests pass.
	volumeOptions.DiskFormat = value
case "datastore":
	volumeOptions.Datastore = value
case Policy_HostFailuresToTolerate:
The cases enumerated here can be a bit hard to read; can these be moved into independent methods?
We would be creating many methods, one for each VSAN capability. This way it is easy to understand: each case validates the VSAN capability and concatenates it to volumeOptions.StorageProfileData.
On further inspection, this case structure and the one below can be combined. Instead of the second nested case being only for validation, you can combine them into a single validate-and-update step.
Same opinion as Ritesh on this one. The validation function just does another switch on what you have already matched here. Better to combine them.
done!
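A sketch of what the combined validate-and-append switch might look like; the helper name and the exact profile-string syntax are assumptions, not the PR's final code:

```go
package vsphere

import (
	"fmt"
	"strconv"
	"strings"
)

const Policy_HostFailuresToTolerate = "hostfailurestotolerate"

// parseVSANParameter validates one storage-class parameter and, when it is
// valid, appends it to the profile string in the same step, rather than
// running a separate validation-only switch.
func parseVSANParameter(name, value string, profile *string) error {
	switch strings.ToLower(name) {
	case Policy_HostFailuresToTolerate:
		n, err := strconv.Atoi(value)
		if err != nil || n < 0 || n > 3 {
			return fmt.Errorf("invalid value %q for hostFailuresToTolerate: default is 1, minimum 0, maximum 3", value)
		}
		*profile += fmt.Sprintf("(\"hostFailuresToTolerate\" i%d)", n)
		return nil
	// ... the remaining capabilities follow the same validate-and-append shape.
	default:
		return fmt.Errorf("invalid option %q", name)
	}
}
```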
case Policy_HostFailuresToTolerate:
	if !validateVSANCapability(Policy_HostFailuresToTolerate, value) {
		return "", 0, fmt.Errorf(`Invalid value for hostFailuresToTolerate in volume plugin %s.
The default value is 1, minimum value is 0 and maximum value is 3.`, v.plugin.GetPluginName())
The limits here and below should be constants that get put into the log.
You mean messages like "The default value is 1, minimum value is 0 and maximum value is 3.", "The value can be either 0 or 1", "The minimum percentage is 0 and maximum percentage is 100.", and so on should be constants?
Yes, the values for the numbers.
done!
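A sketch of the suggested change, as a fragment replacing the case body quoted above. The constant names are illustrative; the values come from the original error message:

```go
// Limits as named constants, interpolated into the error message so the
// text and the enforced values cannot drift apart.
const (
	HostFailuresToTolerateDefault = 1
	HostFailuresToTolerateMin     = 0
	HostFailuresToTolerateMax     = 3
)

if !validateVSANCapability(Policy_HostFailuresToTolerate, value) {
	return "", 0, fmt.Errorf("invalid value for hostFailuresToTolerate in volume plugin %s: default is %d, minimum %d, maximum %d",
		v.plugin.GetPluginName(), HostFailuresToTolerateDefault, HostFailuresToTolerateMin, HostFailuresToTolerateMax)
}
```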
@k8s-bot non-cri node e2e test this
(Force-pushed from c97a300 to a0a4351.)
// Validate the capability requirement for the user specified policy attributes.
func validateVSANCapability(capabilityName string, capabilityValue string) bool {
	switch strings.ToLower(capabilityName) {
This can be cleaned up. The conversion can be done once, followed by the validation, instead of repeating it for each case, which is duplication.
Done!
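A sketch of the cleaned-up validator with the string-to-integer conversion hoisted out of the cases. The capability constants other than hostFailuresToTolerate, and their ranges, are taken from the e2e test description later in this thread:

```go
package vsphere

import (
	"strconv"
	"strings"
)

const (
	Policy_HostFailuresToTolerate = "hostfailurestotolerate"
	Policy_DiskStripes            = "diskstripes"
	Policy_ProportionalCapacity   = "proportionalcapacity"
	Policy_IopsLimit              = "iopslimit"
)

// validateVSANCapability with the cleanup applied: the value is converted
// to an integer once up front, and each case only checks its range.
func validateVSANCapability(capabilityName string, capabilityValue string) bool {
	n, err := strconv.Atoi(capabilityValue) // convert once, not per case
	if err != nil {
		return false // every capability value must be an integer
	}
	switch strings.ToLower(capabilityName) {
	case Policy_HostFailuresToTolerate:
		return n >= 0 && n <= 3
	case Policy_DiskStripes:
		return n >= 1 && n <= 12
	case Policy_ProportionalCapacity:
		return n >= 0 && n <= 100 // a percentage
	case Policy_IopsLimit:
		return n > 0
	}
	return false
}
```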
@@ -1378,3 +1636,49 @@ func makeDirectoryInDatastore(c *govmomi.Client, dc *object.Datacenter, path str

	return err
}

// Get the folder
Update the comment or rename the method to say that it gets the folder for a given VM.
done!
@@ -1210,18 +1189,6 @@ func (vs *VSphere) CreateVolume(volumeOptions *VolumeOptions) (volumePath string

	datastore = volumeOptions.Datastore
}

// Default diskformat as 'thin'
if volumeOptions.DiskFormat == "" {
Why not leave this at the top level to not replicate setting the default?
done!
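The suggestion is to apply the default once at the top level rather than in each provisioning path, along the lines of this fragment (ThinDiskType is assumed to be the existing constant for the "thin" format):

```go
// At the top of CreateVolume, before any policy-specific branching:
if volumeOptions.DiskFormat == "" {
	volumeOptions.DiskFormat = ThinDiskType // default diskformat is 'thin'
}
```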
(Force-pushed from c372155 to dbe9483.)
/lgtm
/lgtm
@thockin: I have received an lgtm from @kerneltime and @luomiao. Can you please provide an lgtm if we are good to go? Thanks!
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: BaluDontu, kerneltime, luomiao, thockin

Needs approval from an approver in each of these OWNERS files:

You can indicate your approval by writing /approve in a comment.
Automatic merge from submit-queue (batch tested with PRs 42835, 42974)
/approve
Automatic merge from submit-queue (…42974-upstream-release-1.6)

Automated cherry pick of #41929 #42974 upstream release 1.6

Cherry pick of #41929 #42974 on release-1.6.

* #41929: vSphere Cloud Provider: Fstype in storage class
* #42974: VSAN policy support for storage volume provisioning inside kubernetes

@saad-ali
Automatic merge from submit-queue

e2e tests for VSAN policy support in Kubernetes for vSphere

The following e2e test cases have been implemented for VSAN policy support in Kubernetes for vSphere. These e2e tests are for PR #42974, which is out for review. A total of 8 test cases covering the 5 scenarios below are implemented.

Test cases:
1. Validation of VSAN capabilities:
   - hostFailuresToTolerate: minimum 1, maximum 3; must be an integer.
   - stripeWidth: minimum 1, maximum 12; must be an integer.
   - proportionalCapacity: expressed in percentage; must be an integer between 0 and 100.
   - iopsLimit: must be an integer greater than 0.
2. On a VSAN testbed, try valid VSAN capabilities that are supported by the testbed. Make sure the disk is created with the policy configured on it. Ex: hostFailuresToTolerate=0 and cacheReservation=12; Ex: diskStripes=1 and objectSpaceReservation=30.
3. On a VSAN testbed, try valid VSAN capabilities that are not supported by the testbed. Make sure the disk creation fails and the PVC status is pending.
4. Try using VSAN capabilities on a non-VSAN datastore. The PVC status will be pending, with an error telling the user that VSAN capabilities are not supported on a non-VSAN datastore.
5. Try the above scenarios with a custom datastore specified by the user.

@jeffvance @divyenpatel

**Release note**:

```release-note
None
```
Automatic merge from submit-queue

vSphere storage policy support for dynamic volume provisioning

Until now, the vSphere cloud provider supported configuring a persistent volume with VSAN storage capabilities (#42974), which only works with VSAN. There are other use cases as well:
- The user might need a way to configure a policy on other datastores like VMFS, NFS, etc.
- Use Storage IO Control or VMCrypt policies for a persistent disk.

We can achieve the above 2 use cases by using existing storage policies that have already been created on vCenter using the Storage Policy Based Management (SPBM) service. The user specifies the SPBM policy ID as part of dynamic provisioning, and the resultant persistent volume will have the policy configured with it.
- The persistent volume will be created on a compatible datastore that satisfies the storage policy requirements.
- If there are multiple compatible datastores, the datastore with the most free space is chosen by default.
- If the user specifies a datastore along with the storage policy ID, the volume will be created on this datastore if it is compatible. If the user-specified datastore is incompatible, the reasons for the incompatibility are reported to the user.
- The user can also see the association of the persistent volume object with the policy on vCenter once the volume is attached to a node.

For instance, in the example below, the volume will be created on a compatible datastore with the most free space that satisfies the "Gold" storage policy requirements.

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagepolicyName: Gold
```

In the next example, the vSphere CP checks whether "VSANDatastore" is compatible with the "Gold" storage policy requirements. If yes, the volume is provisioned on "VSANDatastore"; otherwise it errors out, stating that "VSANDatastore" is not compatible, with the exact reason for the failure.

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagepolicyName: Gold
  datastore: VSANDatastore
```

As part of this change, 4 commits have been added to this PR:
1. Vendor changes for vmware/govmomi.
2. Changes to the VsphereVirtualDiskVolumeSource in the Kubernetes API: added 2 fields, StoragePolicyName and StoragePolicyID.
3. Swagger and Open spec API changes.
4. vSphere Cloud Provider changes to implement the storage policy support.

**Release note**:

```release-note
vSphere cloud provider: vSphere Storage policy Support for dynamic volume provisioning
```
vSphere users will have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a storage volume (virtual disk) is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.
For example, a user creates a storage class with VSAN storage capabilities:
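(The storage class example itself is not reproduced in this thread; the following sketch is consistent with the capabilities described above, with an illustrative class name and parameter values.)

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: vsan-gold                 # illustrative name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  hostFailuresToTolerate: "2"     # number of failures the object tolerates
  cachereservation: "20"          # % of read cache reserved for the object
```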
The vSphere Cloud Provider then provisions a virtual disk (VMDK) on VSAN with the policy configured on the disk.
When you know the storage requirements of the application being deployed in a container, you can specify these storage capabilities when you create a storage class inside Kubernetes.
@pdhamdhere @tthole @abrarshivani @divyenpatel
Release note: