Support forced GRANDPA validators change #76
@svyatonik is this still relevant, given that we can do #490?
I don't see how #490 resolves this issue. It may look like a possible solution for handling the effects of a forced change (i.e. restarting the sync), but it won't prevent the client from stopping sync. So the client may not be trusted => we can't trust any applications built on top of it. To add to that, as I said elsewhere, we'll also need to protect against long range attacks before deploying to production. Maybe add basic header verification + reject all headers from outside of some time-frame + implement something like #2496? Re the latter - I'm not sure that simply broadcasting justifications would be enough. Probably manually crafting an equivocation tx to the GRANDPA pallet would help. Needs investigation.
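The "reject all headers from outside of some time-frame" idea could be sketched roughly as below. This is a minimal illustration, not the real pallet API; the constant, function name, and the use of a plain local clock are all assumptions for the example.

```rust
// Hypothetical sketch: reject headers whose production time falls outside
// an accepted window, as a coarse long-range-attack mitigation.
// All names here are illustrative, not actual bridge-pallet APIs.

/// Maximal accepted age (in seconds) of a header; one week, illustrative.
const MAX_HEADER_AGE_SECS: u64 = 7 * 24 * 60 * 60;

/// Returns true if a header produced at `header_time` (unix seconds) is
/// still within the accepted time-frame relative to the local clock `now`.
fn header_within_timeframe(header_time: u64, now: u64) -> bool {
    // Headers "from the future" are rejected too (clock-skew tolerance elided).
    header_time <= now && now - header_time <= MAX_HEADER_AGE_SECS
}
```

A real implementation would also have to decide where the trusted clock comes from on-chain, which this sketch deliberately leaves out.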
Where can I read more about "forced changes"? Is this part of the protocol? I thought this was an abnormal situation that is only needed in case GRANDPA goes wrong.
Do you mean a case when the client has to catch up a lot? I'm assuming this should never be the case, because we can bootstrap the client with #490 and then should expect it to always be in sync (though there might be a problem if we halt the bridge).
Most of my knowledge is in the description. I believe the GRANDPA code (client and runtime) is the only place to read about that. TLDR (if I'm not missing something): the authority set is failing to finalize headers, hence a mechanism that switches to a new set without waiting for a justification from the previous set. And a justification is what the light client relies on.
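The distinction above could be modeled like this. This is only an illustration of the idea; the types are stand-ins and do not reproduce the real `sp_finality_grandpa::ConsensusLog` encoding.

```rust
// Illustrative model of the two kinds of GRANDPA authority-set changes
// a header digest may announce.

type AuthorityId = u64; // stand-in for a real authority public key

#[derive(Debug, PartialEq)]
enum AuthoritySetChange {
    /// Normal handoff: the current set signs a justification that a light
    /// client can verify, so the change is trustlessly provable.
    Scheduled { next_authorities: Vec<AuthorityId>, delay: u32 },
    /// Emergency handoff: applied without a justification from the current
    /// set, so a justification-based light client cannot verify it.
    Forced { next_authorities: Vec<AuthorityId>, delay: u32 },
}

/// A justification-based light client can only follow scheduled changes.
fn is_provable_by_justification(change: &AuthoritySetChange) -> bool {
    matches!(change, AuthoritySetChange::Scheduled { .. })
}
```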
That's my understanding too.
Yes - I mean the situation when we (re)start syncing with the current set pointing to some old authority set that can't be punished directly (by broadcasting round votes). Right now we have no guarantees about that in the code, and assuming that the bridge may be trusted only if other system components (here: either GRANDPA, or relayer + pallet owner) are running smoothly makes me feel insecure about the bridge. Imo there should be an automatic mechanism that does the thing without additional assumptions about other components' reliability. Something like:
There may be another way to achieve the same. We should just not forget about it before deployment.
Some extra context from @andresilva and @AlistairStewart:
So it seems we could have a
Moving to "Nice to Have", we can rely on governance for now. |
We will rely on governance |
There's no way currently to support that in light clients - we need some way to trust that handoff. So we need to stop syncing (that fork) when we see a forced change.
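The "stop syncing (that fork) when we see a forced change" rule might look like the sketch below. `ImportResult`, `Header`, and the boolean digest flag are hypothetical names invented for this example, not the actual pallet types.

```rust
// Sketch of a header-import rule that halts sync on a forced change.

#[derive(Debug, PartialEq)]
enum ImportResult {
    Imported,
    /// Sync must halt: the authority set changed without a verifiable
    /// handoff, so further justifications can't be trusted automatically.
    HaltedOnForcedChange,
}

struct Header {
    number: u64,
    /// Whether this header's digest announces a forced change.
    announces_forced_change: bool,
}

fn import_header(header: &Header, halted: &mut bool) -> ImportResult {
    if *halted || header.announces_forced_change {
        // Once a forced change is seen on this fork, refuse all further
        // headers until governance (or an operator) re-initializes the client.
        *halted = true;
        return ImportResult::HaltedOnForcedChange;
    }
    ImportResult::Imported
}
```

This matches the governance-based fallback discussed above: the client stops on its own, and a trusted handoff re-initializes it.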