
api: Update folder.PlaceVMsXCluster to support several placement types #3562

Conversation

@derekbeard (Contributor) commented Sep 24, 2024

Description

The PlaceVmsXCluster API is being updated to provide placement recommendations for two new "op-types", relocate and reconfigure, in addition to the original "createAndPowerOn" behavior. To support this, the API gains a new placement type field, two new supported placement types (relocate and reconfigure), and an extended spec and fault set that carry additional information referencing existing VMs and an optional relocate spec.

This change does the following:

  • extends the API definitions to reflect the changes.
  • extends the simulator to support these new placement types.
  • adds simulator tests to validate the new placement types.

NOTE: This is still a draft that needs more testing.
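
For context, a minimal sketch (not part of this PR) of how a caller might request a relocate placement through govmomi once this change lands. The endpoint, credentials, and managed object IDs are placeholders, and the exact names of the new identifiers (PlacementType, Vm, RelocateSpec, PlaceVmsXClusterSpecPlacementTypeRelocate) are assumptions based on the description above rather than verbatim from the change:

package main

import (
	"context"
	"fmt"
	"log"
	"net/url"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25/types"
)

func main() {
	ctx := context.Background()

	// Placeholder vCenter endpoint and credentials.
	u, err := url.Parse("https://user:pass@vcenter.example.com/sdk")
	if err != nil {
		log.Fatal(err)
	}
	c, err := govmomi.NewClient(ctx, u, true)
	if err != nil {
		log.Fatal(err)
	}

	folder := object.NewRootFolder(c.Client)

	// Candidate resource pools (one per cluster) and the existing VM to
	// relocate, referenced here by placeholder managed object IDs.
	pools := []types.ManagedObjectReference{
		{Type: "ResourcePool", Value: "resgroup-101"},
		{Type: "ResourcePool", Value: "resgroup-202"},
	}
	vmRef := types.ManagedObjectReference{Type: "VirtualMachine", Value: "vm-42"}

	spec := types.PlaceVmsXClusterSpec{
		ResourcePools: pools,
		// New placement type field; relocate is one of the two new op-types.
		PlacementType: string(types.PlaceVmsXClusterSpecPlacementTypeRelocate),
		VmPlacementSpecs: []types.PlaceVmsXClusterSpecVmPlacementSpec{{
			Vm:           &vmRef,                              // reference to the existing VM
			RelocateSpec: &types.VirtualMachineRelocateSpec{}, // optional relocate spec
		}},
	}

	res, err := folder.PlaceVmsXCluster(ctx, spec)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("placement result: %+v\n", res)
}

For the original createAndPowerOn behavior, the per-VM ConfigSpec remains the driving input; for relocate and reconfigure, the reference to the existing VM and the optional relocate spec supply the additional context described above.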

Type of change

Please mark options that are relevant:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to
    not work as expected)
  • This change requires a documentation update
  • Build related change

How Has This Been Tested?

go test -v -count=20 ./simulator -run TestPlaceVmsXClusterReconfigure
go test -v -count=20 ./simulator -run TestPlaceVmsXClusterRelocate
go test -v -count=20 ./simulator -run TestPlaceVmsXClusterCreateAndPowerOn
make check
make test (currently failing on unrelated tests, which I'll look into).
TBD: Additional integration testing

Checklist:

  • My code follows the CONTRIBUTION guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged

@derekbeard derekbeard marked this pull request as draft September 24, 2024 19:11
akutz previously approved these changes Sep 24, 2024
@akutz (Member) left a comment


This looks good to me, but I'll let @dougm take a pass as well. FWIW, I'll approve this, but you'll need to resend it (and thus require another approval) due to the copyright headers. Any file that is touched will require updating the date with a -2024 in the range and dedenting the license if it is indented, e.g.:

/*
Copyright (c) 2017-2024 VMware, Inc. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

@dougm (Member) commented Sep 24, 2024

Realizing we don't mention this in the docs, but we have scripts/license.sh. It only touches files you'd see with git diff --name-status main.

@derekbeard (Contributor, Author) commented:

I'm root-causing that unit test issue; it's not obvious from the test output.

@dougm (Member) commented Sep 24, 2024

> I'm root-causing that unit test issue; it's not obvious from the test output.

Looks like an infra flake; we get test flakes too. You can re-run from here with "Re-run failed jobs": https://github.com/vmware/govmomi/actions/runs/11020367271

dougm previously approved these changes Sep 24, 2024
@dougm (Member) left a comment


Thanks @derekbeard, lgtm!

cluster := Map.Any("ClusterComputeResource").(*ClusterComputeResource)
pool := cluster.ResourcePool
host := Map.Get(cluster.Host[0]).(*HostSystem)
vm := Map.Get(host.Vm[0]).(*VirtualMachine)
@dougm (Member) commented on the excerpt above (from simulator/folder_test.go):

Looks like this could be an issue if the random (Map.Any) ClusterComputeResource doesn't have any VMs placed on it.
Could do the following instead:

  • disable standalone host
  • choose random/Any VirtualMachine, then derive host + cluster from that vm
diff --git a/simulator/folder_test.go b/simulator/folder_test.go
index 42dc36be..706c2966 100644
--- a/simulator/folder_test.go
+++ b/simulator/folder_test.go
@@ -762,6 +762,7 @@ func TestPlaceVmsXClusterRelocate(t *testing.T) {
 func TestPlaceVmsXClusterReconfigure(t *testing.T) {
 	vpx := VPX()
 	vpx.Cluster = 3
+	vpx.Host = 0 // no standalone hosts
 
 	Test(func(ctx context.Context, c *vim25.Client) {
 		finder := find.NewFinder(c, true)
@@ -771,10 +772,10 @@ func TestPlaceVmsXClusterReconfigure(t *testing.T) {
 		}
 		finder.SetDatacenter(datacenter)
 
-		cluster := Map.Any("ClusterComputeResource").(*ClusterComputeResource)
+		vm := Map.Any("VirtualMachine").(*VirtualMachine)
+		host := Map.Get(*vm.Runtime.Host).(*HostSystem)
+		cluster := Map.Get(*host.Parent).(*ClusterComputeResource)
 		pool := cluster.ResourcePool
-		host := Map.Get(cluster.Host[0]).(*HostSystem)
-		vm := Map.Get(host.Vm[0]).(*VirtualMachine)
 
 		var poolMoRefs []types.ManagedObjectReference
 		poolMoRefs = append(poolMoRefs, pool.Reference())

@derekbeard (Contributor, Author) replied:

Yeah, exactly. Once I disabled standalone hosts, I was able to traverse up to the CCR. My latest push has that change.

@derekbeard derekbeard dismissed stale reviews from dougm and akutz via c7224a0 September 24, 2024 20:28
@derekbeard derekbeard force-pushed the placevmsxcluster-extend-to-support-placementtypes branch 2 times, most recently from c7224a0 to f22b502 on September 24, 2024 20:31
@derekbeard derekbeard force-pushed the placevmsxcluster-extend-to-support-placementtypes branch from f22b502 to 773b8b1 on October 3, 2024 16:25
The PlaceVmsXCluster API is being updated to provide placement
recommendations for two new "op-types", relocate and reconfigure, in
addition to the original "createAndPowerOn" behavior. To support this,
the API gains a new placement type field, two new supported placement
types (relocate and reconfigure), and an extended spec and fault set
that carry additional information referencing existing VMs and an
optional relocate spec.

This change does the following:
- extends the API definitions to reflect the changes.
- extends the simulator to support these new placement types.
- adds simulator tests to validate the new placement types.
- updates license headers

Testing Done:
make check - PASSED

make test - PASSED

vm-operator:
go test -v -count=1 ./pkg/providers/vsphere - PASSED
go test -v -count=1 ./pkg/providers/vsphere/placement - PASSED
@derekbeard derekbeard force-pushed the placevmsxcluster-extend-to-support-placementtypes branch from 773b8b1 to 733f1c0 on October 3, 2024 16:42
@derekbeard derekbeard marked this pull request as ready for review October 3, 2024 16:43
@derekbeard derekbeard merged commit 3db76c0 into vmware:main Oct 3, 2024
10 checks passed
@derekbeard derekbeard deleted the placevmsxcluster-extend-to-support-placementtypes branch October 11, 2024 02:15