Hybrid 64 GB RAM + SSD plotting - C30 under 5 minutes #309
miwojcik
started this conversation in Show and tell
Hi! Here is my idea for how we can utilize setups with 64GB of RAM to produce C30 compressed plots more quickly and reduce SSD wear.
My setup
OS: Debian 12
HW: Ryzen 5950x, RTX 3070 Ti, 64GB/3200MHz RAM, 2 x 1TB Corsair MP510 NVMe SSDs, small SSD for OS
Problem
Because of the B550-I mini-ITX board and case, I cannot easily upgrade my RAM to 128GB. While my SSDs are fast enough for my needs, during plotting they quickly run out of cache space or overheat and slow down. Adding an extra fan to cool them helped a little, but I wanted to put my RAM to use.
Solution
After experimenting with tmpfs + swap, loop devices placed in tmpfs combined with an SSD via LVM, etc., I think I found the best candidate: the Linux brd module allows creating a block device directly in RAM, which we can then combine with an SSD in an mdadm RAID0 array.
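Optionally, before building anything, it's worth confirming that the brd module is available on your kernel and checking how much RAM is actually free; for example:
# Confirm the brd (RAM block device) module ships with the running kernel
modinfo brd | head -n 3
# Check free RAM before dedicating ~53 GiB of it to the RAM disk
free -h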
How to:
The commands below assume that the system will be used only for plotting. If you need it for other things, you may need to decrease the size of the RAM disk and increase the size of the NVMe partition. Their sizes don't have to match - mdadm will use all available space. For C30 compression you should end up with a size reported by
mdadm -D /dev/md0
of at least:
Array Size : 102305792 (97.57 GiB 104.76 GB)
The instructions can also be tuned for setups with 32GB of RAM by reducing the RAM disk to 22-23GB, but YMMV.
modprobe brd rd_size=55500000 max_part=1 rd_nr=1
55500000 is the size in KB
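To verify the RAM disk came up with the expected size (55500000 KB is roughly 52.9 GiB), you can ask the block layer directly:
# Size of /dev/ram0 in bytes - should be ~56.8 GB (55500000 KB * 1024)
blockdev --getsize64 /dev/ram0
lsblk /dev/ram0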
parted /dev/nvmeXnX mklabel gpt
parted -a optimal /dev/nvmeXnX mkpart primary 0% 48000MB
mdadm --create --verbose /dev/md0 -l 0 -n 2 /dev/ram0 /dev/nvmeXnXp1
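Before formatting, it doesn't hurt to confirm the array assembled and that its size meets the C30 requirement mentioned above:
cat /proc/mdstat
mdadm -D /dev/md0 | grep 'Array Size'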
parted /dev/nvmeYnY mklabel gpt
parted -a optimal /dev/nvmeYnY mkpart primary 0% 400GB
If you use only one SSD for plotting, create a second partition on it instead.
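A single-SSD layout could look like this (device names and sizes are placeholders - adjust them to your drive):
# First partition joins the RAID0 with the RAM disk, second becomes the slow temp space
parted /dev/nvmeXnX mklabel gpt
parted -a optimal /dev/nvmeXnX mkpart primary 0% 48000MB
parted -a optimal /dev/nvmeXnX mkpart primary 48000MB 448GB
mdadm --create --verbose /dev/md0 -l 0 -n 2 /dev/ram0 /dev/nvmeXnXp1
# /dev/nvmeXnXp2 then takes the place of /dev/nvmeYnYp1 in the commands below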
For the RAM disk RAID:
mkfs.ext4 -O ^has_journal,bigalloc -T largefile4 -m 0 /dev/md0
For the slow SSD partition:
mkfs.ext4 -O ^has_journal,bigalloc -T largefile4 -m 0 /dev/nvmeYnYp1
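These mkfs options skip the journal and reserved blocks, which is fine for throwaway temp space. The mount points have to exist before mounting:
mkdir -p ~/tmp_fast ~/tmp_slow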
mount -o noatime,barrier=0 /dev/md0 ~/tmp_fast/
mount -o noatime,barrier=0 /dev/nvmeYnYp1 ~/tmp_slow/
./cuda_plot_k32_v3 -C 30 -2 ./tmp_slow/ -3 ./tmp_fast/ -t ./tmp_slow/ -d @REMOTE -c $POOL_CONTRACT -f $FARMER_KEY
-t defaults to the current directory, but setting it to the same path as -2 prevents the finished plots from being moved again.
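While a plot is running, an optional loop like the one below shows how much of the RAM-backed array and the slow SSD are in use:
watch -n 5 'free -h; df -h ~/tmp_fast ~/tmp_slow'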
Removing the ramdisk
Before shutdown, or to start again from scratch, run the following:
umount /dev/md0
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/ram0 /dev/nvmeXnXpX
rmmod brd
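A quick check that everything was released (md0 gone, brd unloaded), in case you want to confirm before shutting down:
cat /proc/mdstat
lsmod | grep brd || echo "brd not loaded"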
Results
Data written: 240GB to fast SSD, 193GB to slow SSD
I hope that helps somebody! If so, please share your results :)
Replies
Any info on what the minimum temp space should be for C29?