You divide a dataset into blocks of size b and access those blocks in random order. How does your memory/disk bandwidth change with b? This tiny project answers that question.
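Conceptually, the memory measurement boils down to a loop like the sketch below. This is a minimal illustrative version, not the project's actual source: the buffer size, block size, summing read, and shuffle are all assumptions.

```c
/* Minimal sketch: sum-read a large buffer in randomly ordered, b-sized
 * blocks and report bytes moved per second. Sizes are illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define CACHELINE 64

int main(void)
{
    size_t total = 256u << 20;           /* 256 MB buffer (example)        */
    size_t block = 16 * CACHELINE;       /* b = 16 cache lines = 1 KB      */
    size_t nblk  = total / block;

    char *buf = malloc(total);
    size_t *order = malloc(nblk * sizeof *order);
    if (!buf || !order) return 1;
    memset(buf, 1, total);               /* fault pages in before timing   */

    /* Visit blocks in a random permutation (simple Fisher-Yates shuffle). */
    for (size_t i = 0; i < nblk; i++) order[i] = i;
    srand(42);
    for (size_t i = nblk - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    uint64_t sum = 0;
    for (size_t i = 0; i < nblk; i++) {
        const uint64_t *p = (const uint64_t *)(buf + order[i] * block);
        for (size_t k = 0; k < block / sizeof *p; k++)
            sum += p[k];                 /* read every word of the block   */
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    printf("%.1f MB/s (checksum %llu)\n",
           total / sec / 1e6, (unsigned long long)sum);

    free(order);
    free(buf);
    return 0;
}
```

The block reads themselves are sequential; only the order of the blocks is random, so smaller b means more random jumps per byte moved, which is exactly the dependence being measured.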
- $ make
For measuring memory bandwidth (block size is given in cache lines; one cache line is 64 B):
- $ test mem <blockSizeInCacheLines> <totalSizeInMB>
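For example, to access 1024 MB in random 1 KB (16-cacheline) blocks:
- $ test mem 16 1024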
For measuring disk bandwidth (a full example sequence is shown after these steps):
- Write a temporary file:
- $ test write 0 <totalSizeInMB>
- Flush writeback and drop cached pages on Linux (drop_caches only evicts clean pages, so sync first):
- $ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
- Read the temporary file in blocks:
- $ test disk <blockSizeInCacheLines> <totalSizeInMB>
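For example, to measure cold-cache disk reads of 2048 MB in 4 KB (64-cacheline) blocks:
- $ test write 0 2048
- $ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
- $ test disk 64 2048

Rerun the drop-caches and read steps with different <blockSizeInCacheLines> values to see how disk bandwidth scales with block size.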