RocksDB opens too many files on macOS Mojave #18373
Comments
Am I reading it correctly that you are referring to RocksDB here? Or was it a typo and you intended to say LevelDB?
@AyushyaChitransh RocksDB is based on LevelDB as far as I can tell. I'm not sure which flavor geth is using right now. Why does it matter?
Go-ethereum uses LevelDB, not RocksDB. Geth queries the operating system for the file-handle allowance and makes sure to stay below it. Decreasing the number of files allowed could be done, but it would only mean more processing time, since files would need to be closed and reopened. So this is not a 'bug' -- it would be one if geth allocated more files than it was actually given and threw errors when running out of file handles. If we actually wanted to lower the number of files used by LevelDB, we could increase the LevelDB file size. However, every attempt we have made at that degrades performance, since it increases the compaction overhead. It might be that certain filesystems have a built-in overhead for a large number of files; Windows users on NTFS in particular have sometimes reported that. I'll close this for now, since it's not really a 'bug' and not really actionable. Please reopen if you have something more concrete.
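As a rough sketch of the "queries the operating system for the allowance" behaviour described above (not geth's actual code; the real implementation may differ), a Unix process can read its file-descriptor allowance via getrlimit and budget below it:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Ask the kernel for the per-process open-file limit (RLIMIT_NOFILE).
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		fmt.Println("getrlimit failed:", err)
		return
	}
	fmt.Printf("soft limit: %d, hard limit: %d\n", lim.Cur, lim.Max)

	// A database layer would typically reserve only a fraction of the soft
	// limit (the divisor here is an arbitrary example), leaving headroom
	// for sockets and other files the process needs to open.
	dbHandles := lim.Cur / 2
	fmt.Printf("example file-handle budget for the database: %d\n", dbHandles)
}

Staying under the soft limit this way avoids the "too many open files" error, and it also explains why lsof shows thousands of handles: the database simply uses most of whatever budget the OS grants.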
Edit to add: during the fast-sync phase, where headers, bodies, receipts and state are all downloaded, geth is extremely write-intense. The write load will go down once the data is downloaded.
I really appreciate your work on geth, and please don't consider this trolling, but for some reason geth behaves much worse than the Parity Ethereum client for me: Parity's initial fast-sync time is almost an order of magnitude lower and its hardware utilization is much better. Again, I'm not saying this to troll you guys; I would really like to use your client, since it's the de facto standard, compatibility with the ecosystem is better, and if something goes south it will probably be the reference behavior. I appreciate the 1.9.0 release and all of the improvements that went into it, but for some reason geth seems excruciatingly slower and almost kills my hardware. This issue was reported almost half a year ago, so excuse me for not remembering the exact details, but when I closed the client for a day, it took almost half an hour to catch up on that single day, and it was usually hitting the disk at a couple of hundred MB/s. I've tried this on multiple computers and multiple SSDs, still the same. Are there at least some settings where I can trade off memory for speed and disk utilization?
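As an aside on that last question: one knob that does exist is the --cache flag (it already appears in the reproduction command further down as --cache=4096), which controls how much memory geth devotes to internal caching; raising it trades memory for less disk traffic, for example (the value here is illustrative only):

geth --cache=8192 --syncmode "fast" --datadir /Volumes/evo/ethereum/chains/main

Whether that is enough to close the performance gap described above is a separate question.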
System information
Geth version:
Version: 1.8.20-stable
Architecture: amd64
Protocol Versions: [63 62]
Network Id: 1
Go Version: go1.11.4
Operating System: darwin
GOPATH=/Users/kzaher/go
GOROOT=/usr/local/Cellar/go/1.11.4/libexec
OS & Version: macOS Mojave 10.14.2 (18C54)
Commit hash: v1.8.20 (24d727b)
Expected behaviour
For it to sync properly.
Actual behaviour
It fails with "too many open files".
Steps to reproduce the behaviour
Run:
geth --cache=4096 --maxpeers=100 --syncmode "fast" --rpcapi personal,web3,eth,net --datadir /Volumes/evo/ethereum/chains/main
Wait for 11 hours.
Backtrace
This is the command output; it should be clear enough.
LevelDB is holding a bunch of files open. Running

lsof -n -c geth

displays that RocksDB is holding a bunch of files open.

lsof -n -c geth | wc -l
24392
My disk is constantly writing at ~100 MB/s, and there is probably additional write amplification at the filesystem level, not to mention RocksDB compaction. This is wearing out my SSD significantly.
Is RocksDB really the optimal database for this use case? Isn't there anything more efficient? I can increase the file limit temporarily with

sudo sysctl -w kern.maxfilesperproc=70000

but this is tearing my computer apart.
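For reference, the limits in play can be inspected with standard macOS commands (shown here as an illustration):

ulimit -n
sysctl kern.maxfiles kern.maxfilesperproc

kern.maxfilesperproc caps a single process, while kern.maxfiles caps the system as a whole, so raising only the per-process value can still run into the global ceiling.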