Full node syncing extremely slow #302
If you don't need all the block data, I suggest you use the fast sync mode. I used fast sync to sync all the data in 9 hours.
Same thing for me :/
It took me about 12 hours to sync...
@maxthedev How big is the fully synced blockchain now? I tried doing a full sync with a snapshot, but unfortunately it drained the whole 4 TB of SSD space I had. How many TB does a full instance take?
I stopped the instance, so I can't answer you precisely, but something between 500 and 600 GB, or a little bit higher -- definitely not 1 TB ;)
If you just need to monitor and send transactions, I suggest you switch to fast synchronization mode. I only used less than 300 GB of disk in this mode.
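For reference, fast sync is just a flag on the geth command line. A minimal sketch -- the data directory and config file paths here are placeholders, and the cache size is an illustrative value, not a recommendation from this thread:

```shell
# Hypothetical invocation: --syncmode fast downloads recent state instead of
# re-executing every historical transaction. Paths are placeholders.
geth --config ./config.toml \
     --datadir ./node \
     --syncmode fast \
     --cache 8192
```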
@maxthedev then you must have been running a pruned node instead of an archive one. Why did you need a snapshot in that case? @ares0x I have tried that too after running out of space in archive mode, but the node never finishes syncing with fast. I posted my issue here: #283
@zhongfu we're running the node with the config linked below. After a while the node is syncing at ~1 blk / 30 s.

@khelle @maxthedev at block no. 8,061,576 the node takes ~5.8 TB, and the config file is: https://gist.github.com/aaadipopt/ad06b6225c3542714ae129924ff973e2

@ares0x is the …
@maxthedev from where can we get the latest snapshot to download? Is there one for archive nodes?
Hm? Isn't that log message showing up precisely because …? (As a side note, it seems like ….)
No clue, honestly. I've never tried a full sync, because it'd take too long -- I'm currently only running non-archive nodes anyway.

It should definitely be way faster than 20 Mgas/s, though.

You mention that you're running on AWS EBS storage -- I think that might be a bit of an issue, since latency is pretty mediocre even for io2 volumes:
Even SATA SSDs are an order of magnitude faster than that. You may want to consider picking an instance type with locally-attached (but unfortunately ephemeral) NVMe storage. (Or find another host that can provide you with locally-attached storage at a lower price.)
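One way to sanity-check this on your own volume -- a crude probe, not a substitute for a proper `fio` run; it uses `dd` with `oflag=dsync` so every 4 KiB write has to reach the device before the next one starts:

```shell
# Crude sync-write latency probe: time 100 O_DSYNC 4 KiB writes.
# Local NVMe typically lands well under 100 us per write; network-attached
# EBS is often an order of magnitude slower.
tmpfile=$(mktemp)
start=$(date +%s%N)
dd if=/dev/zero of="$tmpfile" bs=4k count=100 oflag=dsync status=none
end=$(date +%s%N)
rm -f "$tmpfile"
echo "avg sync-write latency: $(( (end - start) / 100 / 1000 )) us"
```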
Full, non-archive nodes -- yes, here. But you're running an archive node. I don't think there are any archive-node chaindata snapshots lying around. You could find someone else that's also running an archive node and ask for their help, I guess?
@zhongfu thanks, but I don't think …
Hm, not sure about that then. Have you set gcmode=archive? I was just assuming that you were running an archive node, judging by the size of the storage you specced for your node. 7 TB-ish is way more than required for a non-archive full node (for that, something like 1-2 TB should be more than enough). On the other hand, an archive node probably requires more than 7 TB of storage -- or if it doesn't yet, it probably will soon, so idk
There's also another sync mode available in newer geth clients -- snap. You probably won't be able to use … (Also, sync mode != gcmode.)
Maybe I enabled it in the past, I don't remember exactly, but …
You're sure that …? Anyway, the issue here is that after a while the geth client is syncing at 1 blk / 30 s. Any help on this? :) Thanks
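For what it's worth, 1 blk / 30 s can never catch up, assuming BSC's ~3 s target block time (the 3 s figure is an assumption about the chain's consensus parameters, not something stated in this thread):

```shell
# If the chain adds a block every ~3 s but the node imports one every 30 s,
# the backlog grows by (30/3 - 1) = 9 blocks per block imported.
block_time=3       # seconds per new block on chain (assumed)
import_time=30     # observed seconds to import one block
deficit=$(( import_time / block_time - 1 ))
echo "backlog grows by $deficit blocks per imported block"
# → backlog grows by 9 blocks per imported block
```

So at that rate the node falls further behind the head forever; import speed has to exceed block production just to hold position.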
Ah yeah, you're right, …
But otherwise, sync mode is still != gcmode -- the former determines how the node syncs, the latter determines whether the node will garbage-collect old state, etc. (I'm not sure if you can do a fast/snap sync (i.e. syncmode=fast or snap) with gcmode=archive -- if you've got archive node peers, it feels like it might just be possible? Unless geth decides to ignore one of those flags, or refuses to start up with such a configuration, of course.)
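To make that distinction concrete, the two flags are set independently. The flag names are real geth flags, but whether every combination is accepted may depend on the client version, per the caveat above:

```shell
# syncmode: how the node obtains chain data; gcmode: whether old state is pruned.
geth --syncmode full --gcmode archive   # re-execute everything, keep all historical state
geth --syncmode full --gcmode full      # re-execute everything, prune old state
geth --syncmode snap --gcmode full      # download recent state, prune old state
```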
Possibly poor I/O performance (very high latency on EBS, even on io2) -- see here.
Thanks, but poor I/O performance would presumably be constant from the first block, not show up only after a few minutes.
Eh, maybe. ¯\_(ツ)_/¯ I'm not really going to bother figuring out why that's the case, but here are some ideas:
Also, a quick look around the issue tracker reveals that most (?) of the people who manage to catch up to chain head on AWS instances (especially within a reasonable amount of time) are using local NVMe storage.

Of course, there are those who (apparently) have managed to sync on EBS volumes -- on gp3 even! (But did they sync from a chaindata snapshot, or was it from before BSC's block gas limit hit 60M? I'm not too sure about that.) But the numbers don't lie:
OK, I started a new node and it looks good -- it's syncing pretty fast (first 5M blocks in one day) with …. The initial node had the older geth version, and after the upgrade to …. As for the …
And if I restore the disk from a snapshot (AWS snapshot), I get the same awkward behaviour.
Pretty sure a BSC archive node will take more than 5.5 TiB now.
System information
Geth version: 1.1.0-beta
Git commit: 032970b
OS & Version: Linux ("Ubuntu 20.04.2 LTS")

Expected behaviour
Be able to catch up with the network.
Actual behaviour
Can't sync with the network.
Steps to reproduce the behaviour
Create a c5a.8xlarge VM with a 7 TB io2 volume (32k IOPS) on AWS and follow the steps from https://docs.binance.org/smart-chain/developer/fullnode.html
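The linked guide boils down to roughly the following. This is an outline, not a verbatim copy of the docs -- file names and paths are illustrative, and the genesis/config files come from the BSC release linked in the guide:

```shell
# Rough outline of the BSC full-node setup (illustrative; check the linked
# docs for the current release's binaries, genesis.json, and config.toml).
geth --datadir ./node init genesis.json        # initialize the data dir with the BSC genesis
geth --config ./config.toml --datadir ./node   # start the node with the shipped config
```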
Backtrace
Logs captured at a random time:
Logs captured right after restart:
Those are the client logs. When I restart the service it runs a bit faster, but not enough.
What are we doing wrong?