
Use Systemd-provided cgroup IO limits #125

Open
zatricky wants to merge 3 commits into master

Conversation

zatricky commented Aug 25, 2024

Maintenance tasks may starve other, more urgent workloads of system IO resources. This PR enables cgroup IO resource limits for the maintenance tasks when they are run via systemd timers.

Notes:

  • This needs more testing, especially on other distributions. I've tested it with balance operations on Fedora 40 (systemd 255.10-3.fc40).
  • This works for systemd, but I'm not sure how best to achieve the same for cron. Wrapping the commands in systemd-run seems redundant since that effectively creates a temporary service for each run anyway.
  • I am not sure whether the defaults I have suggested are good. I based them on limits that feel appropriate for spinning disks where non-maintenance services have high IO demand and where I don't mind if the maintenance tasks take a very long time to complete. (An illustrative sketch of the underlying systemd properties follows this list.)
  • I am not sure whether there is a good way to configure different limits for different disk classes. For example, if you have a RAID1 OS filesystem on SSDs and a large RAID5 backup filesystem on spinning disks, it would be useful to have separate sets of IO limits.
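
For reference, the limits boil down to systemd's IO resource-control properties (see systemd.resource-control(5); this requires the unified cgroup v2 hierarchy). The following is only an illustrative sketch of a per-service drop-in, not necessarily how this PR wires the limits in; the unit name, file path, device, and values are placeholders:

# Hypothetical drop-in: /etc/systemd/system/btrfs-balance.service.d/io-limits.conf
[Service]
# Throttle this service's IO on the chosen block device (enforced via cgroup v2 io.max)
IOReadBandwidthMax=/dev/dm-0 10M
IOWriteBandwidthMax=/dev/dm-0 10M
IOReadIOPSMax=/dev/dm-0 60
IOWriteIOPSMax=/dev/dm-0 60

After adding a drop-in like this, systemctl daemon-reload makes it take effect for the next run.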

To see these cgroup limits in action outside of a regular service, you can wrap a command with systemd-run. For example, the following runs a balance with -musage=30 on a two-device filesystem backed by /dev/dm-0 and /dev/dm-1, with IOPS limits of 60 and a bandwidth limit of 10 MB/s per device:

$ systemd-run \
    --property="IOReadBandwidthMax=/dev/dm-0 10M" --property="IOWriteBandwidthMax=/dev/dm-0 10M" \
    --property="IOReadIOPSMax=/dev/dm-0 60" --property="IOWriteIOPSMax=/dev/dm-0 60" \
    --property="IOReadBandwidthMax=/dev/dm-1 10M" --property="IOWriteBandwidthMax=/dev/dm-1 10M" \
    --property="IOReadIOPSMax=/dev/dm-1 60" --property="IOWriteIOPSMax=/dev/dm-1 60" \
    btrfs balance start -musage=30 /
Running as unit: run-r0fa03384626b4245b07857fc38089744.service; invocation ID: 7869d843bd814cb7bc1e5db9d35bc46f
$ cat /sys/fs/cgroup/system.slice/run-r0fa03384626b4245b07857fc38089744.service/io.max
252:0 rbps=10000000 wbps=10000000 riops=60 wiops=max
252:128 rbps=10000000 wbps=10000000 riops=60 wiops=max
$ cat /sys/fs/cgroup/system.slice/run-r0fa03384626b4245b07857fc38089744.service/io.stat
253:6 rbytes=1114112 wbytes=15321464832 rios=68 wios=117555 dbytes=0 dios=0
.... many similar lines follow on my system
$ journalctl -u run-r0fa03384626b4245b07857fc38089744.service
Aug 25 15:55:25 <hostname> systemd[1]: Started run-r0fa03384626b4245b07857fc38089744.service - /usr/sbin/btrfs balance start -musage=30 /.
Aug 25 15:55:52 <hostname> btrfs[86830]: Done, had to relocate 3 out of 12787 chunks
Aug 25 15:55:52 <hostname> systemd[1]: run-r0fa03384626b4245b07857fc38089744.service: Deactivated successfully.
Aug 25 15:55:52 <hostname> systemd[1]: run-r0fa03384626b4245b07857fc38089744.service: Consumed 17.403s CPU time.

Following on from the above example, running lsblk shows the corresponding major:minor device numbers 252:0 and 252:128 for dm-0 and dm-1 respectively.
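
For example, the mapping can be listed directly (output trimmed to the two devices mentioned above):

$ lsblk -o KNAME,MAJ:MIN
KNAME  MAJ:MIN
dm-0   252:0
dm-1   252:128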

Signed-off-by: Brendan Hide <brendan@swiftspirit.co.za>

zatricky commented Aug 26, 2024

The limits are also tested and working for btrfs-scrub.
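
A quick way to confirm the limits are applied (assuming the installed unit is named btrfs-scrub.service; adjust if the unit name differs) is to start the service and read its cgroup's io.max while it is running:

$ systemctl start btrfs-scrub.service
$ cat /sys/fs/cgroup/system.slice/btrfs-scrub.service/io.max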

zatricky commented Sep 9, 2024

I have tested and confirmed that this also works for btrfs-defrag. I'm not sure whether the limits apply to btrfs-trim, so perhaps the insertion of the IO limit configuration should specifically skip the trim service.
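
One way to check the trim case (a sketch only; the unit name, device and mount point are placeholders) is to wrap fstrim in systemd-run with the same properties and, while it runs, look at the discard counters (dbytes/dios) in the unit's io.stat to see whether the discard IO is attributed to the throttled cgroup at all:

$ systemd-run --unit=trim-io-test --property="IOWriteBandwidthMax=/dev/dm-0 10M" fstrim -v /
$ cat /sys/fs/cgroup/system.slice/trim-io-test.service/io.stat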

@kdave I'd appreciate any comments on this PR, especially regarding testing and what else should be done to get this ready to merge.
