New blocks aren't fetched and RPC goes down on periodic snapshot #10388
This is unfortunately known behaviour, i.e., snapshotting is an extremely resource-intensive process and can grind the rest of your node to a halt - especially networking, which includes RPC (notice the loss of peers). I would recommend running with
I don't think this should be known behaviour, as this process had been fine for me on Ropsten until I upgraded to 2.2.10 for the security fix. It also works fine on Kovan/mainnet with similarly spec'd boxes. I've disabled snapshots in the meantime, but I'd really prefer not to, as I use my nodes as reserved peers for others syncing.
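For context, disabling periodic snapshot creation in Parity Ethereum is done via a CLI flag. A minimal sketch of the workaround the commenter describes, assuming the `--no-periodic-snapshot` flag available in 2.2.x-era releases (check `parity --help` on your version to confirm):

```shell
# Run Parity on Ropsten with periodic snapshot creation disabled,
# avoiding the resource spike that stalls RPC and peer networking.
# The warp barrier and other flags shown here are illustrative only.
parity \
  --chain ropsten \
  --no-periodic-snapshot \
  --jsonrpc-interface all
```

The trade-off is that a node run this way cannot serve warp-sync snapshots to other peers, which matters here since the reporter uses these nodes as reserved peers for others syncing.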
Which version did you upgrade from?
2.2.9. Although I run a templating job daily against the stable Docker version, so the boxes that were created would have pulled down the DB from 2.2.10. If it helps, I can explain the process in a bit more detail.
You're most certainly affected by #10361, which would explain why you didn't have problems with 2.2.9.
Seems to be related - thanks, @c0deright - as I'm seeing the same disk issue on our 2.2.10 boxes. It makes sense for this to be related, as the disks we provision for our Ropsten boxes have a fraction of the performance of our mainnet boxes. The only outlier is that I don't see the same issue on Kovan, which has the same box specs as Ropsten.
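Since the suspected cause (#10361) is disk throughput, a quick way to compare the provisioned volumes across boxes is a sequential-write test. A rough sketch using `dd` with `conv=fdatasync` so the reported rate includes the flush to disk (the path and size are arbitrary examples; for more rigorous numbers a tool like `fio` with random I/O patterns would be more representative of database workloads):

```shell
# Write 64 MiB sequentially and force a flush, so the rate dd reports
# on stderr reflects actual disk throughput, not page-cache speed.
dd if=/dev/zero of=/tmp/snap_io_test bs=1M count=64 conv=fdatasync 2>/tmp/snap_io_result

# The last line of dd's stderr output contains the measured rate.
tail -n 1 /tmp/snap_io_result

# Clean up the test file.
rm -f /tmp/snap_io_test /tmp/snap_io_result
```

Running this on both the Ropsten and Kovan boxes would confirm or rule out the disk-performance difference as the distinguishing factor.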
Closing issue due to its stale state. |
This happens consistently whenever a periodic snapshot starts being created, and has occurred 10+ times on different physical instances. The Parity instances run fine for around 12 hours until a snapshot begins.
The snapshot's logging output never finishes, and while it runs the RPC is down. The Parity node continues to sync slowly over time, but often stays stuck on the same block for many hours.
Logging output: