
IPFS Repo Storage And Files Over A Huge Amount (30G+), "ipfs add" slows down in a high-concurrency environment #5438

Open
daijiale opened this issue Sep 7, 2018 · 3 comments
Labels
topic/perf Performance

Comments

@daijiale
Member

daijiale commented Sep 7, 2018

Version information:

go-ipfs version: 0.4.17-
Repo version: 7
System version: amd64/linux
Golang version: go1.10.3

Type:

Bug, Feature

Description:

I tested "ipfs add" performance under high concurrency on my CentOS node.

When my IPFS repo holds a huge amount of storage and files (30G+), "ipfs add" slows down. When I simulated 100 processes adding files at the same time, each add took more than 10 seconds; when I raised it to 1000 processes, it took almost 2 minutes.

This is our test shell script:

#!/bin/bash
echo "IPFS-Swing-Test Start!"
Njob=100
for ((i=0; i<$Njob; i++)); do
    echo "progress $i is testing"
    dd if=/dev/urandom of=$i bs=1K count=50
    time docker exec ipfs_host ipfs add $i &
done
wait
# Wait for all background jobs to finish before continuing
echo -e "time-consuming: $SECONDS    seconds"
# Print the total script runtime
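As a sanity check on the harness itself, the same fan-out-and-wait pattern can be exercised with `sleep` standing in for the `ipfs add` call; if the pattern alone finishes quickly, the measured time is going to IPFS rather than to the shell. This is a minimal sketch with a reduced job count:

```shell
#!/bin/bash
# Harness sanity check: `sleep 1` stands in for the
# `ipfs add` call, so any extra time is fork/wait overhead.
Njob=5
for ((i=0; i<Njob; i++)); do
  sleep 1 &    # placeholder for: time docker exec ipfs_host ipfs add $i
done
wait           # jobs run in parallel, so this returns after about 1 second
echo "time-consuming: $SECONDS seconds"
```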

These are the timing results:

real	0m9.627s
user	0m0.077s
sys	0m0.017s

real	0m9.634s
user	0m0.089s
sys	0m0.007s

real	0m9.826s
user	0m0.083s
sys	0m0.019s

real	0m9.784s
user	0m0.090s
sys	0m0.013s

real	0m9.949s
user	0m0.089s
sys	0m0.011s

real	0m10.046s
user	0m0.089s
sys	0m0.017s

real	0m10.108s
user	0m0.102s
sys	0m0.012s

real	0m10.383s
user	0m0.084s
sys	0m0.017s

real	0m10.530s
user	0m0.105s
sys	0m0.015s

real	0m10.527s
user	0m0.080s
sys	0m0.017s
time-consuming: 12    seconds

My guess is that the cause may be the overhead of the Merkle tree's shard retrieval. Has the official team encountered this problem? What is the root cause of this issue? Can I help you optimize it together?

@magik6k
Member

magik6k commented Sep 7, 2018

IPFS by default stores blocks as individual files, and some filesystems aren't great at dealing with large numbers of them. Can you try using the experimental Badger datastore - https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md#badger-datastore - and see how that affects your test case?
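For a brand-new repo, the experimental-features doc linked above also allows selecting Badger at init time instead of converting an existing repo (shown here as a sketch; it wipes nothing but only applies to a repo that does not exist yet):

```shell
# Create a new repo that uses the Badger datastore from the start
ipfs init --profile=badgerds
```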

@daijiale
Member Author

daijiale commented Sep 7, 2018

Thank you @magik6k, I will give it a try:

$ ipfs config profile apply badgerds
$ ipfs-ds-convert convert

Is this an experimental feature intended to replace LevelDB? I will take the time to look into the Badger KV store, and I will also think about whether there is a better optimization plan for this issue and submit it to you.

@Stebalien
Member

The issue here is probably in pinning multiple small files. See: #5221.
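If pinning is indeed the bottleneck, one quick way to check (a diagnostic sketch, not a fix) is to time the same adds with pinning disabled via the `--pin=false` flag of `ipfs add`, reusing one of the files generated by the test script above:

```shell
# Compare against the pinned case; a large gap would point at
# pinning overhead rather than raw block storage.
time docker exec ipfs_host ipfs add --pin=false 0
```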
