A namespace sets the resource boundaries where vSphere Pods and Tanzu Kubernetes clusters created with the Tanzu Kubernetes Grid Service can run. When initially created, the namespace has unlimited resources within the Supervisor Cluster. vSphere administrators can set limits for CPU, memory, and storage, as well as the number of Kubernetes objects that can run within the namespace. Workload Management creates a resource pool for each namespace in vSphere and storage quotas in Kubernetes to enforce storage limitations.
This test procedure demonstrates centralized management, monitoring, and enforcement of namespace resource limits, and shows Kubernetes workloads being deployed within the constraints of namespace quotas.
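As orientation for the steps that follow, the Kubernetes side of this mapping can be inspected directly once a Supervisor Cluster session exists (a minimal sketch; the annotations and quota objects it shows are examined in detail in Steps 9 and 20):

```bash
# A vSphere namespace surfaces as a regular Kubernetes namespace: resource-pool
# limits appear as vmware-system-* annotations, and storage limits are enforced
# by a ResourceQuota object.
kubectl get ns ${SC_NAMESPACE_01} -o jsonpath='{.metadata.annotations}'
kubectl get resourcequota -n ${SC_NAMESPACE_01}
```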
- Completion of SC01-TC01, SC01-TC02
- vSphere Administrator console and user credentials
- DevOps Engineer console and user credentials
- Container image: nginx:latest
1. Using the vSphere Administrator console and credentials, log in to the VC Web UI and navigate to Menu > Workload Management.
2. Select the Namespaces tab, then select the ${SC_NAMESPACE_01} hyperlink.
3. On the Capacity and Usage tile, select EDIT LIMITS, enter the following values, then select OK to save and close:
    - CPU: 1 GHz
    - Memory: 4 GB
    - Storage: 10 GB
4. Verify that the CPU, Memory, and Storage limits update with the entered values.
    Note: If the limits do not update immediately, allow a few seconds for auto-refresh or manually refresh the page.
5. On the Status tile, under Location, select the vSphere_Cluster_Name.
6. In the left pane, select Namespaces, then select the Resource Pools tab. Verify the ${SC_NAMESPACE_01} resource pool CPU and Memory Limit values.
    Expected:
    CPU Limit (MHz)    1000
    Memory Limit (MB)  4294
    Note: The 4 GB entered in Step 3 is stored as 4 GiB (4,294,967,296 bytes), which the resource pool reports as approximately 4294 decimal MB.
7. Using the DevOps Engineer console and credentials, log in to the SC API.
    kubectl vsphere login --vsphere-username ${DEVOPS_USER_NAME} --server=https://${SC_API_VIP} --insecure-skip-tls-verify
    Expected:
    Logged in successfully.
    You have access to the following contexts:
    ${SC_API_VIP}
    ${SC_NAMESPACE_01}
8. Set the context for the ${SC_NAMESPACE_01} namespace with the command:
    kubectl config use-context ${SC_NAMESPACE_01}
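    Optionally, confirm the switch took effect:

    ```bash
    # The active context should now be the namespace context.
    kubectl config current-context   # should print ${SC_NAMESPACE_01}
    ```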
9. View the namespace resource limits with the command:
    kubectl describe ns ${SC_NAMESPACE_01}
    Convert the vmware-system-resource-pool-cpu-limit core value to millicores (cores x 1000 = millicores).
    Note: To convert the vmware-system-resource-pool-cpu-limit value, which is expressed in cores, to the GHz equivalent set in Step 3, run the bash script sc01/v7k8s-ns-cpu-limit-calc.sh or calculate manually with the following formula:
    vmware-system-resource-pool-cpu-limit = vSphere Namespace CPU GHz Limit / (vSphere Cluster GHz Capacity / vSphere Cluster Total Processors)
    Expected:
    ...
    vmware-system-resource-pool-cpu-limit: 0.4760
    vmware-system-resource-pool-memory-limit: 4096Mi
    ...
    Resource Quotas
    Name:             ${SC_NAMESPACE_01}
    Resource          Used  Hard
    --------          ----  ----
    requests.storage  0     10Gi
    ...
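    The referenced helper script is not reproduced in this document; the following is a minimal bash sketch of the same arithmetic (the cluster capacity and processor count are illustrative placeholders, not values from this test bed):

    ```bash
    #!/usr/bin/env bash
    # Hypothetical stand-in for sc01/v7k8s-ns-cpu-limit-calc.sh: converts the
    # vSphere namespace CPU limit (GHz) to the core value shown in the
    # vmware-system-resource-pool-cpu-limit annotation.
    NS_CPU_LIMIT_GHZ=1          # value entered in Step 3
    CLUSTER_GHZ_CAPACITY=50.4   # illustrative: total GHz reported by the vSphere cluster
    CLUSTER_TOTAL_PROCESSORS=24 # illustrative: total physical cores in the cluster

    # cpu-limit (cores) = GHz limit / (cluster GHz capacity / cluster core count)
    awk -v l="$NS_CPU_LIMIT_GHZ" -v c="$CLUSTER_GHZ_CAPACITY" -v p="$CLUSTER_TOTAL_PROCESSORS" \
        'BEGIN { printf "cpu-limit: %.4f cores (%.0f millicores)\n", l/(c/p), (l/(c/p))*1000 }'
    ```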
10. Using a text editor, open the file deploy-resource-quota-test.yaml. Edit the spec.template.spec.containers.resources.requests.cpu value, in millicores, to exceed the vmware-system-resource-pool-cpu-limit value identified in Step 9. Then edit the spec.template.spec.containers.resources.limits.cpu value, in millicores, to twice the value of spec.template.spec.containers.resources.requests.cpu. Save and close the file.
    vi scenarios/operator/deploy-resource-quota-test.yaml
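    The deployment file itself is not reproduced in this document; the following is a hypothetical reconstruction of its shape, written as a shell heredoc (the CPU values follow this step's rule against the 476-millicore example from Step 9; your numbers will differ):

    ```bash
    # Illustrative sketch of scenarios/operator/deploy-resource-quota-test.yaml;
    # only the resources.requests/limits fields named in this step matter here.
    cat > scenarios/operator/deploy-resource-quota-test.yaml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deploy-resource-constraints
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: deploy-resourcequota-test
      template:
        metadata:
          labels:
            app: deploy-resourcequota-test
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            resources:
              requests:
                cpu: 500m    # set above the pool-limit equivalent (e.g. 476m)
              limits:
                cpu: 1000m   # twice the request, per this step
    EOF
    ```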
11. Apply the deployment spec to the SC API:
    kubectl apply -f scenarios/operator/deploy-resource-quota-test.yaml
    Expected:
    deployment.apps/deploy-resource-constraints created
12. View the deployment status with the command:
    kubectl get deployment deploy-resource-constraints
    Expected:
    NAME                          READY   UP-TO-DATE   AVAILABLE
    deploy-resource-constraints   0/1     1            0
13. Describe the pods to view their status and recent events:
    kubectl describe pods -l app=deploy-resourcequota-test
    Expected:
    ...
    Status: Pending
    ...
    Events:
    Type     Reason            Age    From               Message
    ----     ------            ---    ----               -------
    Warning  FailedScheduling  3m21s  default-scheduler  The amount of CPU resource available in the parent resource pool is insufficient for the operation.
14. Delete the deployment:
    kubectl delete deploy deploy-resource-constraints
15. Reopen the file deploy-resource-quota-test.yaml and reduce both the spec.template.spec.containers.resources.requests.cpu and spec.template.spec.containers.resources.limits.cpu values to approximately two-thirds (2/3) of the vmware-system-resource-pool-cpu-limit equivalent millicore value identified in Step 9. For example, with a limit of 0.4760 cores (476 millicores), two-thirds is approximately 317 millicores. Save the changes and reapply the deployment spec.
    vi scenarios/operator/deploy-resource-quota-test.yaml
    kubectl apply -f scenarios/operator/deploy-resource-quota-test.yaml
    Expected:
    deployment.apps/deploy-resource-constraints created
16. Monitor the status of the deployment, allowing up to 60 seconds for the nodes to pull the container image:
    kubectl get deploy deploy-resource-constraints
    Expected:
    NAME                          READY   UP-TO-DATE   AVAILABLE
    deploy-resource-constraints   1/1     1            1
17. Scale out the deployment, increasing the number of replicas to three (3):
    kubectl scale deploy deploy-resource-constraints --replicas=3
    Expected:
    deployment.apps/deploy-resource-constraints scaled
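    Optionally, watch the scheduler try to satisfy the three replicas against the CPU limit in real time (Ctrl+C to stop):

    ```bash
    # Pods beyond the namespace CPU limit remain Pending.
    kubectl get pods -l app=deploy-resourcequota-test --watch
    ```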
18. Monitor the status of the deployment, allowing up to 60 seconds for the nodes to pull the container image:
    kubectl get deploy deploy-resource-constraints
    Expected:
    NAME                          READY   UP-TO-DATE   AVAILABLE
    deploy-resource-constraints   1/3     3            1
19. List the deployment's pods:
    kubectl get pods -l app=deploy-resourcequota-test
    Expected:
    NAME                            READY   STATUS
    deploy-resource-constraints-x   0/1     Pending
    deploy-resource-constraints-y   1/1     Running
    deploy-resource-constraints-z   0/1     Pending
20. Identify the vSphere storage policy name, which is also the Kubernetes storage class name, by describing the namespace storage quota. Record the resource name, which is Storage_Resource_Name.storageclass.storage.k8s.io/requests.storage; omit the first "." and everything to the right of it.
    kubectl describe resourcequota ${SC_NAMESPACE_01}-storagequota
21. With a text editor, open the file pvc-resource-quota-test.yaml. Update the spec.storageClassName value with the Storage_Resource_Name identified in Step 20. Then set the spec.resources.requests.storage value to 20Gi. Save and close the file.
    vi scenarios/operator/pvc-resource-quota-test.yaml
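    The PVC file is likewise not reproduced here; a hypothetical reconstruction, written as a shell heredoc:

    ```bash
    # Illustrative sketch of scenarios/operator/pvc-resource-quota-test.yaml.
    # Replace Storage_Resource_Name with the class recovered in Step 20.
    cat > scenarios/operator/pvc-resource-quota-test.yaml <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-resourcequota-test
    spec:
      storageClassName: Storage_Resource_Name
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi    # deliberately above the 10Gi namespace quota
    EOF
    ```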
22. Attempt to create the PVC with the command. The request should be rejected because it exceeds the namespace storage quota.
    kubectl apply -f scenarios/operator/pvc-resource-quota-test.yaml
    Expected:
    Error from server (Forbidden): error when creating "pvc-resource-quota-test.yaml": persistentvolumeclaims "pvc-resourcequota-test" is forbidden: exceeded quota: ${SC_NAMESPACE_01}, requested: requests.storage=20Gi, used: requests.storage=0, limited: requests.storage=10Gi
23. Reopen the file pvc-resource-quota-test.yaml and change the spec.resources.requests.storage value to 9Gi. Save and close the file.
    vi scenarios/operator/pvc-resource-quota-test.yaml
24. Reattempt to create the PVC with the command:
    kubectl apply -f scenarios/operator/pvc-resource-quota-test.yaml
    Expected:
    persistentvolumeclaim/pvc-resourcequota-test created
25. Verify the PVC status and record the volume name:
    kubectl get pvc pvc-resourcequota-test
    Expected:
    NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
    pvc-resourcequota-test   Bound    pvc-6980d377-f344-48b5-877a-eaefbc3d94b0   9Gi        RWO            *Storage_Class_Name*   63s
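    To capture the volume name for the record programmatically, a small sketch:

    ```bash
    # Record the bound volume name in a shell variable.
    PVC_VOLUME=$(kubectl get pvc pvc-resourcequota-test -o jsonpath='{.spec.volumeName}')
    echo "${PVC_VOLUME}"
    ```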
26. Using the vSphere Administrator console and credentials, log in to the VC Web UI and navigate to Menu > Workload Management. Select the Namespaces tab, then select the ${SC_NAMESPACE_01} hyperlink. Confirm that the Capacity and Usage tile values for CPU, Memory, and Storage reflect the resources consumed by the Running pods in the Kubernetes deployment deploy-resource-constraints and the PVC pvc-resourcequota-test.
    Note: Verify the CPU utilization with the following formulas:
    CPU Utilization GHz Min = (spec.template.spec.containers.resources.requests.cpu / 1000) x n Running pods x (vSphere Cluster GHz Capacity / vSphere Cluster Total Processors)
    CPU Utilization GHz Max = (spec.template.spec.containers.resources.limits.cpu / 1000) x n Running pods x (vSphere Cluster GHz Capacity / vSphere Cluster Total Processors)
    Expected: The Capacity and Usage tile reports utilization metrics within the calculated range(s), or within 10% of a calculated value.
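    As a worked example of these formulas, a minimal sketch (the GHz-per-core figure is illustrative; use your cluster's capacity and processor count from Step 9):

    ```bash
    # One Running pod whose requests.cpu and limits.cpu were both set to ~317m in
    # Step 15, on an illustrative 2.1 GHz-per-core cluster.
    REQ_M=317; LIM_M=317; RUNNING=1; GHZ_PER_CORE=2.1
    awk -v r="$REQ_M" -v l="$LIM_M" -v n="$RUNNING" -v g="$GHZ_PER_CORE" \
        'BEGIN { printf "CPU Utilization GHz Min: %.2f\nCPU Utilization GHz Max: %.2f\n",
                 (r/1000)*n*g, (l/1000)*n*g }'
    ```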
27. In the Pods tile, record the number of Running, Pending, and Failed pods.
    Note: The quantity reported under the tile title, Pods, reflects the total number of pods regardless of status. Hover the cursor over the line graph for a pop-up summary that breaks down the number of pods by status.
    Expected:
    Running: 1
    Pending: 2
    Failed: 0
28. In the Capacity and Usage tile, select EDIT LIMITS. Increase the CPU value to 2 GHz, then select OK to save and close.
29. Monitor the line graph as the Pending pods transition to Running. Allow up to 90 seconds for the changes to take effect and the graph to update. If the tile does not auto-refresh with the updated pod count, refresh the page. After 90 seconds, record the number of Running, Pending, and Failed pods.
    Expected:
    Running: 3
    Pending: 0
    Failed: 0
30. In the Capacity and Usage tile, select EDIT LIMITS. Clear all values so that each line reports No limit. Select OK to save and close the window.
31. Switch back to the DevOps Engineer console and clean up the namespace by running the following commands:
    kubectl delete deploy deploy-resource-constraints
    kubectl delete pvc pvc-resourcequota-test
- [ ] Pass
- [ ] Fail
Return to Test Cases Inventory