Stage 1 block verification failed ... Block(TemporarilyInvalid(OutOfBounds ... #8128
@taoeffect Could you please try to synchronize your local time?
I just checked, and it already is synchronized, per output of
@taoeffect could you please run the beta docker image? Just for information about the time: you checked manually, right? We're potentially talking about a difference of milliseconds here. The easiest way to verify is to visit https://time.is
I'll give it a try and report back, but just FYI, I think this is ridiculous. Surely you can figure out a way to make it less time-dependent and more robust; otherwise you're asking for network faults and flakiness. See also Timestamps Unnecessary In Proof-of-Work?.
@taoeffect The message is not critical; Parity will just ignore a block until it becomes valid. If your time is in sync, it's the miner's fault for producing a "future block", and that block may become an uncle instead. Miners are incentivised by the difficulty algorithm to declare a fair timestamp on one side, and by the preference for including blocks that match the current timestamp on the other.
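To make the "ignore until valid" behavior concrete, here is a minimal sketch of the kind of check that produces the `TemporarilyInvalid(OutOfBounds)` message. The tolerance constant and function names are assumptions for illustration, not Parity's actual implementation.

```python
import time

# Hypothetical tolerance: how far into the future a block's timestamp may
# lie before the node sets it aside. The value is an assumption, not
# Parity's actual constant.
MAX_FUTURE_DRIFT_SECS = 15

def classify_block(block_timestamp, now=None):
    """Return 'valid' or 'temporarily_invalid' for a block timestamp.

    A block stamped too far in the future is not rejected outright: it is
    held and retried once the local clock catches up, which is why the
    message is a warning rather than a fatal error.
    """
    if now is None:
        now = time.time()
    if block_timestamp > now + MAX_FUTURE_DRIFT_SECS:
        return "temporarily_invalid"  # OutOfBounds: queue and retry later
    return "valid"

now = 1_600_000_000
print(classify_block(now + 5, now))        # valid
print(classify_block(now + 60, now))       # temporarily_invalid
print(classify_block(now + 60, now + 60))  # valid once the clock catches up
```

Note that the verdict depends on the local clock, which is why a node whose clock lags even slightly will flag otherwise honest blocks.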
I updated to
So it's not completely gone, but definitely seems to be complaining less at least. Also, strangely, when I checked the fullnode (before updating), it had stopped syncing completely, displaying a long series of this for several hours (up to the point that I checked):
Starting at:
After updating to the beta, it is now connecting to peers again, but I'm concerned about the sudden failure. This is something you may want to keep an eye on, as it might be another bug, or it might be an attack by ISPs now that net neutrality has been neutered.
Also, this just happened with the beta. I don't know whether it's worth opening another issue, so I'm posting it here (but let me know if you'd like me to file it separately):
@taoeffect the periodic snapshot failing is expected behavior on a pruned node. |
This issue is happening on my private network also. My parity version:
I have synchronized my time using chrony, and I hope that my node does not go out of sync again. I am running chrony on centos using these commands:
This is not a client issue; it could also be an Authority node with wrong time settings. This is a warning rather than an error.
Well, it wasn't an "Authority node" and the time settings were correct. However, I'll note that the issue does seem to have mostly gone away with the latest beta. |
What I meant is that an authority node may author a block with a wrong timestamp and send it to your node, which has correct time settings. So there's not much you can do about it.
"Authority node"? I thought those were only for Proof-of-Authority? This was happening on the main chain which doesn't have authorities AFAIK. |
@taoeffect on the main chain it will be a miner producing blocks with the timestamp set a tiny bit in the future. I will suppress this warning and make it a debug message instead; it's something we were always doing anyway, it just became a bit more visible now.
After updating my system and rebooting, parity is now returning strange errors (?) whose meaning and severity are indiscernible to me:
So, I don't know what to do... It seems to still be importing new blocks. How bad is this? Do I need to start over? How? And what's going on?