Failing to store most recent snapshot #22463
Comments
On Discord:

After it was done doing so, it started regenerating the snapshot. It does so in the periods between blocks, which usually means it can crunch the snapshot for 14 seconds until the next block comes along. In the case of a full sync, it seems to be a bit counter-productive. It managed to go through an additional ~1500 storage slots, but the starts and stops don't help.
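To make that start/stop pattern concrete, here is a minimal sketch in Go. It is not geth's actual code and every name in it is made up; it only illustrates a background generator that works through entries during the idle window between blocks and is paused whenever the next block arrives:

```go
// Hypothetical sketch of interleaving snapshot generation with block imports.
// Not geth's real API; all names are illustrative.
package main

import (
	"fmt"
	"time"
)

// generateSnapshot walks (pretend) storage entries until it is told to abort,
// returning how many it managed to process in this window.
func generateSnapshot(abort <-chan struct{}) int {
	processed := 0
	for {
		select {
		case <-abort:
			return processed // a new block arrived, pause generation
		default:
			processed++ // pretend we crunched one account/storage slot
			time.Sleep(time.Millisecond)
		}
	}
}

func main() {
	for block := 0; block < 3; block++ {
		abort := make(chan struct{})
		done := make(chan int)

		go func() { done <- generateSnapshot(abort) }()

		// Simulated idle window between blocks (shortened here),
		// then the next block interrupts the generator.
		time.Sleep(50 * time.Millisecond)
		close(abort)
		fmt.Printf("block %d imported, generator paused after %d items\n", block, <-done)
	}
}
```

The point of the sketch is only the interleaving: every pause/resume cycle costs progress, which is why frequent block imports during a full sync leave the generator crawling.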
@holiman we are seeing this happen on some nodes after a restart even if the snapshot flag isn't toggled. For example, this clean shutdown triggered it; notice the error on shutdown around snapshot generation:

On the next startup the node then rebuilds the snapshot index:

I think there might be a race of sorts around node shutdown leading to unneeded regeneration.
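For context on why a shutdown hiccup leads to a full rebuild rather than a quick resume: the journaled snapshot is only reusable at startup if it still matches the persisted head state. The sketch below uses hypothetical types and names (not geth's real API) to illustrate that kind of validation, where any mismatch discards the journal and forces regeneration:

```go
// Hedged sketch of a startup check that can force snapshot regeneration.
// The journal type and loadSnapshot function are illustrative only.
package main

import (
	"errors"
	"fmt"
)

type Hash [32]byte

type journal struct {
	DiskRoot Hash // state root the journaled snapshot layers were built on
}

// loadSnapshot decides between reusing the journal and regenerating from scratch.
func loadSnapshot(j *journal, headRoot Hash) error {
	if j == nil {
		return errors.New("no journal found, regenerating snapshot")
	}
	if j.DiskRoot != headRoot {
		// The suspected shutdown race ends up here: the journal was written
		// against a root that no longer matches the persisted head.
		return fmt.Errorf("journal root %x does not match head root %x, regenerating",
			j.DiskRoot[:4], headRoot[:4])
	}
	return nil // journal accepted, no regeneration needed
}

func main() {
	head := Hash{0x01}
	fmt.Println(loadSnapshot(&journal{DiskRoot: Hash{0x01}}, head)) // reused
	fmt.Println(loadSnapshot(&journal{DiskRoot: Hash{0x02}}, head)) // mismatch -> rebuild
}
```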
@ryanschneider do you have more logs about what happens during that clean shutdown?
block: https://ropsten.etherscan.io/block/9810918 stateroot
Unfortunately the server in question has already been recycled, but I was able to recover some of the log from my terminal buffer; hopefully this is enough: https://gist.github.com/ryanschneider/62f06149a215ccf7ccc72a239fa7e42f

Seems like the server was operating fine before the shutdown. FWIW, this node was only being used to sync the chain and make backups of the leveldb (we stop geth every 4 hours, create a backup, then start it up again), so it shouldn't have had any load except for the usual devp2p traffic; no RPC traffic was going to this node.
This is interesting:
There are only two uncles in the chain around that time. One minted at
What if, instead of genesis, that's the state root of some partial block? Like if the shutdown context triggered mid-import and a state transition was aborted but was accidentally sent to the snapshotter instead of being discarded? I haven't looked at the code enough to say if this is possible, just throwing it out there as a possible hypothesis.
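The hypothesis above boils down to an ordering guarantee: the snapshotter should only ever be told about the roots of fully committed blocks. A small illustrative sketch of that guard (the snapshotter type, commitBlock, and all other names here are hypothetical, not geth's code):

```go
// Sketch of the ordering guard implied by the hypothesis: a partial state
// transition aborted by shutdown is never reported to the snapshotter.
package main

import (
	"context"
	"errors"
	"fmt"
)

type snapshotter struct{ lastRoot string }

func (s *snapshotter) Update(root string) { s.lastRoot = root }

// commitBlock aborts cleanly if shutdown was requested mid-import.
func commitBlock(ctx context.Context, snaps *snapshotter, root string) error {
	select {
	case <-ctx.Done():
		return errors.New("shutdown requested, discarding partial state")
	default:
	}
	// ... write the committed state to disk here ...
	snaps.Update(root) // only reached for fully committed blocks
	return nil
}

func main() {
	snaps := &snapshotter{}

	ctx, cancel := context.WithCancel(context.Background())
	fmt.Println(commitBlock(ctx, snaps, "0xabc")) // committed, snapshot updated

	cancel() // simulate shutdown arriving mid-import
	fmt.Println(commitBlock(ctx, snaps, "0xdef")) // discarded, snapshot untouched
	fmt.Println("snapshotter last root:", snaps.lastRoot)
}
```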
@holiman looking a bit closer at my log, isn't it a little odd that AFAICT that would only happen if I feel like there's potentially something a little racy going on between
This issue has been automatically closed because there has been no response to our request for more information from the original author. With only the information that is currently in the issue, we don't have enough information to take action. Please reach out if you have more relevant information or answers to our questions so that we can investigate further. |
Sorry, the no-response bot is a bit hyperactive sometimes. It doesn't recognize "more information" coming from anyone other than the OP and closes some issues prematurely.
This issue has been automatically closed because there has been no response to our request for more information from the original author. With only the information that is currently in the issue, we don't have enough information to take action. Please reach out if you have more relevant information or answers to our questions so that we can investigate further. |
Hah! The bot doesn't respect your authoritah!
Any news here? I get the same error. See here (but it's the same with fast sync):
I'm running an archive node:

(Removed some irrelevant flags, e.g. http, ws, graphql, ethstats.)

I asked over on the Discord if I needed to be concerned about any of the new flags; I was told no, not really, because I run with --syncmode=full --gcmode=archive.

This node is not yet fully in sync; it's back in the 10M block range.
The first time I started it up, I got a lot of logs like this:
I then got this log entry:
Then, my node started spitting this out A LOT:
Question: Is this expected behavior?