warp_sync hanging on AwaitingPeers if warp_sync_params is None #14
After paritytech/substrate#12761

In my testing, it does actually download state for a certain period of time, but then it gets halted.

@shunsukew how many peers are you connected to?

@bkchr
@shunsukew your problem is something different! If you run the warp sync, you will see that it prints out an error when it finishes downloading the state. Then the warp sync is stuck, as you are seeing. The first thing you need is to add this "fix":

This will skip the verification when we import state changes directly. However, you will still see an issue about the storage root being "invalid". The problem here is the following PR: AstarNetwork/Astar#567 It set

@cheme should be able to provide you some guidance on how to set up the state migration pallet for parachains. CC @akru
Thank you so much for checking and letting us know the required changes. As for the Verifier fix,

So, considering Astar had the Aura runtime API from the beginning, this shouldn't be directly related to the problem we have, I think. (It is necessary for Shiden, but I'm only testing Astar warp sync.)
@shunsukew, a small write-up, as the information may be a bit sparse. If the chain was not using state_version 1 at first and switched to version 1 afterward, then it is very likely that the state is in a "hybrid" mode: some trie nodes running on state version 0 and some on state version 1. To check that, the first step would be to run this RPC: state.trieMigrationStatus. The RPC simply iterates over all the chain's trie nodes and looks for state version 0 nodes that should be migrated to state version 1 (paritytech/cumulus#1424). If there are items to migrate in the result, then those nodes indeed need to be migrated before warp sync can be used. Changing a node from state v0 to v1 just requires rewriting its value, which is what the migration pallet does. See: https://hackmd.io/JagpUd8tTjuKf9HQtpvHIQ There are two scenarios:
It should be noted that the parachain process requires specific rights for a parachain account and an external process that emits extrinsics. Personally, if using the manual migration is a no-go (for instance, if giving the right to emit these extrinsics to a given account is a no-go), I would run the automatic migration with a modification:

This obviously only works if there are not too many of these big values, and if the chain will never create new keys with such big values (they could not be skipped if unknown when we include the pallet). Regarding the migration testing, I would suggest doing so with
Thank you so much for the detailed info.
Is there an existing issue?

Experiencing problems? Have you tried our Stack Exchange first?

Description of bug

If `warp_sync_params` is set to `None`, `warp_sync` will hang on `AwaitingPeers`:

https://github.com/paritytech/substrate/blob/53c58fbc3dc6060a2f611a8d7adfdeea655fe324/client/network/sync/src/lib.rs#L516-L517

Steps to reproduce

Set `warp_sync_params` here to `None`.