Display PersistentVolumeSize in crc status #4265

Merged · 1 commit merged into crc-org:main from pr/pvc-status on Aug 7, 2024

Conversation

@vyasgun vyasgun commented Jul 10, 2024

Enhance the crc status command to display the PersistentVolumeSize from the CRC configuration.

Fixes: Issue #4191

Solution/Idea

The /status handler should return fields specifying the PVC size and usage (allocated storage).

Proposed changes

  • Fetch and display the PersistentVolumeSize value in the crc status output.
  • Add new fields to the ClusterStatus and CrcStatus structs to support this feature (see the sketch after this list).
  • Update the CRC daemon to include these changes.
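
A minimal sketch of what those struct additions might look like, with field names taken from the daemon's /status JSON shown later in this thread; the exact types, field order, and surrounding fields of the real ClusterStatus/CrcStatus are assumptions:

// Sketch only: field names mirror the /status JSON response; everything
// else here (types, the rest of the struct) is assumed for illustration.
type ClusterStatus struct {
	CrcStatus            string
	OpenshiftStatus      string
	OpenshiftVersion     string
	DiskUse              int64
	DiskSize             int64
	RAMUse               int64
	RAMSize              int64
	PersistentVolumeUse  int64 // new: bytes allocated to persistent volumes
	PersistentVolumeSize int64 // new: persistent-volume-size config value, in bytes
	Preset               string
}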

Testing

Run crc status

gvyas-mac:~ gvyas$ crc status
CRC VM:                  Running
MicroShift:              Running (v4.15.12)
RAM Usage:               1.62GB of 3.904GB
Disk Usage:              6.824GB of 20.41GB (Inside the CRC VM)
Persistent Volume Usage: 10.67GB of 16GB (Allocated)
Cache Usage:             74.36GB
Cache Directory:         /Users/gvyas/.crc/cache

vyasgun commented Jul 11, 2024

/retest

@vyasgun vyasgun requested a review from anjannath July 15, 2024 08:39
@anjannath (Member) commented:

It is showing the value of the persistent-volume-size config option, but it would be more useful if the usage were shown like Disk Usage (how much is used and how much is free), e.g.:

CRC VM:                  Running
OpenShift:               Running (v4.16.0)
RAM Usage:               6.374GB of 10.92GB
Disk Usage:              22.86GB of 32.68GB (Inside the CRC VM)
Persistent Volume Usage: 7GB of 15GB
Cache Usage:             64.58GB
Cache Directory:         /home/anjan/.crc/cache

Also, this should be shown only when using the microshift preset for now, since the config doesn't have any effect for the openshift preset yet (a possible gate is sketched after the examples below):

% crc config get preset
Configuration property 'preset' is not set. Default value 'openshift' is used

% crc status
CRC VM:                 Running
OpenShift:              Running (v4.16.0)
RAM Usage:              6.374GB of 10.92GB
Disk Usage:             22.86GB of 32.68GB (Inside the CRC VM)
Persistent Volume Size: 15GB
Cache Usage:            64.58GB
Cache Directory:        /home/anjan/.crc/cache
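
A minimal sketch of how that gating could look on the client side, assuming the status response carries the preset; printPVLine, its parameters, and the use of the docker/go-units helper are illustrative assumptions, not the actual crc code:

package main

import (
	"fmt"

	units "github.com/docker/go-units"
)

// printPVLine emits the persistent volume line only for the microshift
// preset; the names and layout here are illustrative, not the real crc code.
func printPVLine(preset string, pvUse, pvSize int64) {
	if preset != "microshift" {
		return
	}
	fmt.Printf("Persistent Volume Usage: %s of %s\n",
		units.HumanSize(float64(pvUse)), units.HumanSize(float64(pvSize)))
}

func main() {
	printPVLine("microshift", 7e9, 15e9) // prints: Persistent Volume Usage: 7GB of 15GB
	printPVLine("openshift", 7e9, 15e9)  // prints nothing
}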

@vyasgun vyasgun force-pushed the pr/pvc-status branch 3 times, most recently from 50fb17e to b60f808 on July 23, 2024 12:53
vyasgun commented Jul 23, 2024

@anjannath Just updated the code. Please take a look again, thanks!

@praveenkumar (Member) commented on this part of the PV usage script:

exit 0
fi
sudo df -B1 --output=used $mountpoints | awk ' { sum += $1 } END { printf "%d", sum} '
Did you check sudo pvs -o pv_free --noheadings --nosuffix --units k? It provides the free space by default, instead of all that grepping.

@vyasgun (Contributor Author) replied:

Yes, but it does not give the exact usage of the mounted volumes. On my system, the output is:

sudo pvs -o pv_free --noheadings --nosuffix --units k
  9453568.00

That is 9.4G of free space, which implies 15 - 9.4 = 5.6G of used space.

[core@api ~]$ cat print_pv_usage.sh
#!/bin/bash
# Gather the mountpoints of all PVC-backed volumes inside the VM.
mountpoints=$(lsblk --output=mountpoints | grep pvc | tr '\n' ' ')
# No PVCs mounted: print nothing (usage is zero).
if [ -z "$mountpoints" ]; then
    exit 0
fi
# Sum the bytes used across all PVC mountpoints.
sudo df -B1 --output=used $mountpoints | awk ' { sum += $1 } END { printf "%d", sum} '

[core@api ~]$ ./print_pv_usage.sh
111681536

which is 111.7MB of used space
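
For context, a minimal Go sketch of how the daemon side might consume the script's single-integer output; runSSHCommand and the script path are hypothetical stand-ins for illustration, not the actual crc SSH runner:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// runSSHCommand stands in for however the daemon executes commands inside
// the VM; the canned return value is the number from the transcript above.
func runSSHCommand(cmd string) (string, error) {
	return "111681536", nil
}

// pvUsage parses the script's output: empty means no PVCs are mounted,
// otherwise it is the total bytes used across all PVC mountpoints.
func pvUsage() (int64, error) {
	out, err := runSSHCommand("bash /home/core/print_pv_usage.sh") // hypothetical path
	if err != nil {
		return 0, err
	}
	out = strings.TrimSpace(out)
	if out == "" {
		return 0, nil
	}
	return strconv.ParseInt(out, 10, 64)
}

func main() {
	used, err := pvUsage()
	if err != nil {
		panic(err)
	}
	fmt.Println(used) // 111681536
}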

Contributor:

And which one is correct? Do you have 111MB of data in your PVs, or 5.6G of data? Or another value?

@vyasgun vyasgun (Contributor Author) commented Jul 29, 2024:

111MB is the correct one. The way I am fetching this value is as follows:

I have two PVCs:

gvyas-mac:~ gvyas$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
my-pvc     Bound    pvc-57f25d94-68de-4e76-99a6-983c488d8ba8   2Gi        RWO            topolvm-provisioner   5d8h
my-pvc-1   Bound    pvc-ba22fa21-5f01-4496-9044-3e0a8f8a8cd0   4Gi        RWO            topolvm-provisioner   5d8h

Inside the VM, I am using lsblk to find the relevant mountpoints.

[core@api ~]$ lsblk
NAME                                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
vda                                               252:0    0   31G  0 disk
├─vda1                                            252:1    0  200M  0 part /boot/efi
├─vda2                                            252:2    0  800M  0 part /boot
├─vda3                                            252:3    0    1M  0 part
├─vda4                                            252:4    0    1M  0 part
└─vda5                                            252:5    0   30G  0 part
  ├─rhel-root                                     253:0    0   15G  0 lvm  /var/lib/containers/storage/overlay
  │                                                                        /var
  │                                                                        /sysroot/ostree/deploy/rhel/var
  │                                                                        /usr
  │                                                                        /etc
  │                                                                        /
  │                                                                        /sysroot
  ├─rhel-0ad559a7--8b01--4ed0--b460--5aae3f0fa7c8 253:1    0    4G  0 lvm  /var/lib/kubelet/pods/a1473da7-14fb-4aeb-99fc-f411c97aec1c/volumes/kubernetes.io~csi/pvc-ba22fa21-5f01-4496-9044-3e0a8f8a8cd0/mount
  └─rhel-4be6fea3--c245--42f3--b74a--95d4cf7c4e0a 253:2    0    2G  0 lvm  /var/lib/kubelet/pods/e57ecde7-20dd-4ce6-8180-2b837dc0bffd/volumes/kubernetes.io~csi/pvc-57f25d94-68de-4e76-99a6-983c488d8ba8/mount

The bottom two mountpoints correspond to the two PVCs on the cluster. Then I am using df to get the usage.

[core@api ~]$ sudo df -h /var/lib/kubelet/pods/a1473da7-14fb-4aeb-99fc-f411c97aec1c/volumes/kubernetes.io~csi/pvc-ba22fa21-5f01-4496-9044-3e0a8f8a8cd0/mount /var/lib/kubelet/pods/e57ecde7-20dd-4ce6-8180-2b837dc0bffd/volumes/kubernetes.io~csi/pvc-57f25d94-68de-4e76-99a6-983c488d8ba8/mount
Filesystem                                         Size  Used Avail Use% Mounted on
/dev/topolvm/0ad559a7-8b01-4ed0-b460-5aae3f0fa7c8  4.0G   61M  3.9G   2% /var/lib/kubelet/pods/a1473da7-14fb-4aeb-99fc-f411c97aec1c/volumes/kubernetes.io~csi/pvc-ba22fa21-5f01-4496-9044-3e0a8f8a8cd0/mount
/dev/topolvm/4be6fea3-c245-42f3-b74a-95d4cf7c4e0a  2.0G   47M  1.9G   3% /var/lib/kubelet/pods/e57ecde7-20dd-4ce6-8180-2b837dc0bffd/volumes/kubernetes.io~csi/pvc-57f25d94-68de-4e76-99a6-983c488d8ba8/mount

After writing 100MB of data to my-pvc:

gvyas-mac:~ gvyas$ crc status
CRC VM:                  Running
MicroShift:              Running (v4.15.12)
RAM Usage:               2.037GB of 3.904GB
Disk Usage:              6.778GB of 16.1GB (Inside the CRC VM)
Persistent Volume Usage: 216.5MB of 15GB
Cache Usage:             74.36GB
Cache Directory:         /Users/gvyas/.crc/cache

Inside the VM:

[core@api ~]$ sudo df -h /var/lib/kubelet/pods/a1473da7-14fb-4aeb-99fc-f411c97aec1c/volumes/kubernetes.io~csi/pvc-ba22fa21-5f01-4496-9044-3e0a8f8a8cd0/mount /var/lib/kubelet/pods/e57ecde7-20dd-4ce6-8180-2b837dc0bffd/volumes/kubernetes.io~csi/pvc-57f25d94-68de-4e76-99a6-983c488d8ba8/mount
Filesystem                                         Size  Used Avail Use% Mounted on
/dev/topolvm/0ad559a7-8b01-4ed0-b460-5aae3f0fa7c8  4.0G   61M  3.9G   2% /var/lib/kubelet/pods/a1473da7-14fb-4aeb-99fc-f411c97aec1c/volumes/kubernetes.io~csi/pvc-ba22fa21-5f01-4496-9044-3e0a8f8a8cd0/mount
/dev/topolvm/4be6fea3-c245-42f3-b74a-95d4cf7c4e0a  2.0G  147M  1.8G   8% /var/lib/kubelet/pods/e57ecde7-20dd-4ce6-8180-2b837dc0bffd/volumes/kubernetes.io~csi/pvc-57f25d94-68de-4e76-99a6-983c488d8ba8/mount
[core@api ~]$ ./print_pv_usage.sh
216539136

Contributor:

Thanks for the detailed answer :)
Seeing the output of kubectl get pvc, it looks like the command Praveen gave shows you the claimed space in the PVs, while your command gives you the space actually used on disk. Not sure which one is more important to show.
The PV claims will (I think) limit us: for example, if you claimed 15GB of disk space and you only have 15GB on the partition for the PVs, then you won't be able to do anything more even if you have only used 100MB of the claimed storage.
So maybe it's just the "non claimed" space we want to report, regardless of whether the claimed space is free or not?

(disclaimer, I'm not that familiar with PVs/..., I hope I did not get the basics wrong in the explanation above :)

@vyasgun (Contributor Author) replied:

Another point to consider is overprovisioning. If I use a storage class with overprovisioning ratio set to greater than 1, I can allocate more space than the configured storage. For instance, in the example below, 18Gi was allocated in total to the two PVCs even though the configured upper limit is 16Gi.

gvyas-mac:~ gvyas$ crc config get persistent-volume-size
persistent-volume-size : 16

gvyas-mac:~ gvyas$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
my-pvc-1   Bound    pvc-4d9f8015-ce4c-4d1c-8c4a-e4e5622d5489   10Gi       RWO            thin-provisioner   12s
my-pvc-2   Bound    pvc-584e7136-d8ea-49ac-8569-3c8e1f47613d   8Gi        RWO            thin-provisioner   4m55s
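
For reference, the total capacity actually granted to PVCs, which is what overprovisioning inflates, could be summed with client-go roughly like this; a sketch that assumes a reachable kubeconfig, not necessarily how this PR computes the value:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List PVCs across all namespaces and sum the capacity granted to each.
	pvcs, err := client.CoreV1().PersistentVolumeClaims("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	var allocated int64
	for _, pvc := range pvcs.Items {
		if qty, ok := pvc.Status.Capacity[corev1.ResourceStorage]; ok {
			allocated += qty.Value() // bytes
		}
	}
	fmt.Printf("allocated to PVCs: %d bytes\n", allocated) // 10Gi + 8Gi = 18Gi in the example above
}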

Member:

I think we should show the allocated space and not the actual used space, as it would be complicated to show the actual space used when a user has defined their own SC with a higher overprovisioning ratio, and the allocated space only when the overprovisioning ratio is 1.

Maybe we can show 6.778GB of 16.1GB (Allocated) to hint to the user that it's allocated space and not used space!

Member:

> Another point to consider is overprovisioning

I would discourage that, since if the pods actually consumed that much space it would lead to data corruption. On our end, it is better to tell the user how much storage the PVs took at creation and how much is left to allocate.

@vyasgun vyasgun (Contributor Author) commented Aug 2, 2024:

Please check the updated PR, thanks!

@vyasgun (Contributor Author):

Updated output:

gvyas-mac:~ gvyas$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
my-pvc-2   Bound    pvc-1612bdc6-ce6d-40f3-beb4-d68f57eb6a34   10Gi       RWO            topolvm-provisioner   55m

gvyas-mac:~ gvyas$ crc status
CRC VM:                  Running
MicroShift:              Running (v4.15.12)
RAM Usage:               1.62GB of 3.904GB
Disk Usage:              6.824GB of 20.41GB (Inside the CRC VM)
Persistent Volume Usage: 10.67GB of 16GB (Allocated)
Cache Usage:             74.36GB
Cache Directory:         /Users/gvyas/.crc/cache

Commit message:

- Enhance the crc status command to display the PersistentVolumeUsage and PersistentVolumeSize from the CRC VM.
- Fetch and display the persistent volume usage value in the crc status output.
- Add new fields to the ClusterStatus and CrcStatus structs to support this feature.
- Update the CRC daemon to include these changes.
openshift-ci bot commented Aug 2, 2024

@vyasgun: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                 Commit   Details  Required  Rerun command
ci/prow/security          944a250  link     false     /test security
ci/prow/integration-crc   944a250  link     true      /test integration-crc


@anjannath (Member) commented:

@vyasgun one thing I forgot to mention earlier is that the response for /status from the daemon API also needs to be updated to include the persistent volume size.

vyasgun commented Aug 5, 2024

The /status changes are there: https://github.com/crc-org/crc/pull/4265/files#diff-33aaddc3c8c7beec8f2bd2c494769cf4d50dafc32852da086948ad308127d19bR73

gvyas-mac:~ gvyas$ curl --unix-socket /Users/gvyas/.crc/crc-http.sock http://127.0.0.1/api/status
{"CrcStatus":"Running","OpenshiftStatus":"Running","OpenshiftVersion":"4.15.12","DiskUse":5831663616,"DiskSize":24702353408,"RAMUse":2200383488,"RAMSize":3904045056,"PersistentVolumeUse":10670309377,"PersistentVolumeSize":16000000000,"Preset":"microshift"}

@anjannath -- let me know if anything else needs to be added!

openshift-ci bot commented Aug 5, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: anjannath


@openshift-ci openshift-ci bot added the approved label Aug 5, 2024
@praveenkumar praveenkumar merged commit e28d25f into crc-org:main Aug 7, 2024
23 of 29 checks passed