Performance is slower than sshfs #19

Open · LionsAd opened this issue Jan 26, 2022 · 2 comments


LionsAd commented Jan 26, 2022

Great project - looks really nice.

I would have expected 9p with the nice Rust tokio runtime to easily beat sshfs, but it turns out sshfs is faster by a factor of about 3x.

Note: I am using a Mac with a QEMU VM, as that is my use case.

Is there anything I can tweak in terms of using more CPUs and better threading for the tokio coroutines?
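(For profiling this, one quick thing to rule out first is an unoptimized server build; this is only a sketch assuming the standard cargo workflow and directory layout, not a confirmed fix:)

# from the unpfs example directory in the rust-9p checkout (path assumed),
# build the server with optimizations and run the release binary
# rather than a debug build
cargo build --release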

Edit: here are the benchmarks (all run in QEMU on an Apple Silicon Mac):

TL;DR:

  • SSHFS has the lowest latency and decent raw-data throughput
  • virtio has by far the best throughput for raw data writes
  • rust-9p is unfortunately the slowest right now; its find latency is comparable with virtio, so that part might be a 9P protocol limitation

Setup

QEMU option for virtio:

-virtfs local,path=projects/oss/rust-9p,mount_tag=foo,id=foo,security_model=mapped-xattr

All 9p mounts have been done with:

# virtio
sudo mount -t 9p -o version=9p2000.L,trans=virtio,uname=$USER,msize=104857600,cache=mmap foo /mnt

# rust-9p
sudo mount -t 9p -o version=9p2000.L,trans=tcp,port=8564,uname=$USER,msize=104857600,cache=mmap 192.168.5.2 /mnt

and the exported directory is the rust-9p checkout itself.
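For reference, perf.sh itself is not shown here; reconstructed from the trace output below, it amounts to roughly the following (not the verbatim script, and the mounted path is assumed):

set -x
cd /mnt/foo
# directory-walk latency over many small entries
time find target
# copy and delete a large tree of small files
time cp -r target target3
time rm -rf target3
# raw sequential write throughput
time dd if=/dev/zero of=test.dat bs=1G count=3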

Benchmarks

9p-virtio:

colima:~$ sh /mnt/perf.sh 
+ cd /mnt/foo
+ time find target
real	0m 0.78s
user	0m 0.00s
sys	0m 0.10s
+ time cp -r target target3
real	0m 8.83s
user	0m 0.00s
sys	0m 1.10s
+ time rm -rf target3
real	0m 1.56s
user	0m 0.00s
sys	0m 0.30s
+ time dd 'if=/dev/zero' 'of=test.dat' 'bs=1G' 'count=3'
3+0 records in
3+0 records out
real	0m 1.72s
user	0m 0.00s
sys	0m 0.70s

SSHFS:

+ cd /mnt/foo
+ time find target
real	0m 0.45s
user	0m 0.04s
sys	0m 0.00s
+ time cp -r target target3
real	0m 5.28s
user	0m 0.03s
sys	0m 0.62s
+ time rm -rf target3
real	0m 0.74s
user	0m 0.00s
sys	0m 0.09s
+ time dd 'if=/dev/zero' 'of=test.dat' 'bs=1G' 'count=3'
3+0 records in
3+0 records out
real	0m 17.55s
user	0m 0.00s
sys	0m 2.26s

rust-9p (https://github.com/pfpacket/rust-9p) via TCP transport:

colima:/$ sh /mnt/perf.sh 
+ cd /mnt/foo
+ time find target
real	0m 0.77s
user	0m 0.00s
sys	0m 0.11s
+ time cp -r target target3
real	0m 10.25s
user	0m 0.00s
sys	0m 1.45s
+ time rm -rf target3
real	0m 1.40s
user	0m 0.03s
sys	0m 0.17s
+ time dd 'if=/dev/zero' 'of=test.dat' 'bs=1G' 'count=3'
3+0 records in
3+0 records out
real	1m 8.48s
user	0m 0.00s
sys	0m 5.01s
colima:/$ 

LionsAd commented Jan 26, 2022

Added benchmarks ^^


ERnsTL commented Jan 15, 2023

Greetings everyone. I noticed this as well, @LionsAd: running "ls" on a remote directory over a wired LAN, shared via unpfs on Linux (Ubuntu 22.04), with the exact mount settings given in the README and the latest rustc via rustup. Compared to sshfs there is a noticeable delay, which is unexpected.
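For anyone wanting to reproduce the comparison, the setup is roughly the following (IPs and paths are placeholders, and the unpfs invocation follows the pattern I remember from the README, so treat it as a sketch):

# server side (Ubuntu 22.04): export a directory via unpfs on port 8564
cargo run --release -- 'tcp!0.0.0.0!8564' /path/to/share

# client side: 9p mount with the options used earlier in this issue,
# plus an sshfs mount of the same directory for comparison
sudo mount -t 9p -o version=9p2000.L,trans=tcp,port=8564,uname=$USER,msize=104857600,cache=mmap 192.168.1.10 /mnt/9p
sshfs user@192.168.1.10:/path/to/share /mnt/sshfs

# the delay is visible with a simple directory listing
time ls -la /mnt/9p
time ls -la /mnt/sshfs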
