
Support forced GRANDPA validators change #76

Closed
svyatonik opened this issue Apr 24, 2020 · 7 comments
Labels
A-feat New feature or request P-Runtime
Milestone

Comments

@svyatonik
Contributor

There's currently no way to support that in light clients - we need some way to trust that handoff. So we need to stop syncing (that fork) when we see a forced change.

@tomusdrw
Contributor

@svyatonik is that still relevant, given that we can do #490?

@svyatonik
Contributor Author

I don't see how #490 resolves this issue. It may look like a possible solution for recovering from the effects of a forced change (i.e. restarting the sync), but it won't prevent the client from stopping sync. So the client may not be trusted => we can't trust any applications built on top of it.

To add to that issue, as I said elsewhere, we'll also need to protect against long range attacks before deploying to production. Maybe add basic header verification + reject all headers from outside of some time frame + implement something like #2496? Re the latter - I'm not sure that simply broadcasting justifications would be enough. Probably manually crafting an equivocation tx to the GRANDPA pallet would help. Needs investigation.

@tomusdrw
Contributor

Where can I read more about "forced changes"? Is this part of the protocol? I thought it was an abnormal situation, only needed in case GRANDPA goes wrong?

we'll also need to protect against long range attacks as well before deploying to production.

Do you mean a case when the client has to catch up a lot? I'm assuming this should never be the case, because we can bootstrap the client with #490 and should then expect it to always be in sync (though there might be a problem if we halt the bridge).

@svyatonik
Contributor Author

Where can I read more about "forced changes"? Is this part of the protocol?

Most of my knowledge is in the description. I believe the GRANDPA code (client and runtime) is the only place to read about that. TL;DR (if I'm not missing something): the authority set is failing to finalize headers, hence the mechanism that switches to a new set without waiting for a justification from the previous set. And a justification is exactly what the light client relies on.
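To make that concrete, here is a minimal sketch, with purely illustrative types (not the real client's or pallet's), of why a light client can follow a regular handoff but has no way to trust a forced one:

```rust
// Illustrative only: models the two kinds of GRANDPA authority set changes
// from the light client's point of view.
#[derive(Clone, Copy, PartialEq, Debug)]
enum ScheduledChange {
    // Announced in a header digest, enacted once that header is finalized,
    // i.e. once we hold a justification signed by the *current* set.
    Regular,
    // Enacted on import after a delay, with no justification from the
    // previous set - there is nothing for a light client to verify.
    Forced,
}

// A light client can only trust a handoff that is backed by a justification
// from the authority set it already trusts.
fn can_follow(change: ScheduledChange, have_justification: bool) -> bool {
    match change {
        ScheduledChange::Regular => have_justification,
        ScheduledChange::Forced => false, // no way to trust the handoff
    }
}

fn main() {
    assert!(can_follow(ScheduledChange::Regular, true));
    assert!(!can_follow(ScheduledChange::Regular, false));
    assert!(!can_follow(ScheduledChange::Forced, true));
}
```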

I thought this was an abnormal situation, only needed in case GRANDPA goes wrong?

That's my understanding too.

Do you mean a case when the client has to catch up a lot? I'm assuming this should never be the case, because we can bootstrap the client with #490 and should then expect it to always be in sync (though there might be a problem if we halt the bridge).

Yes - I mean the situation when we (re)start syncing with the current set pointing to some old authority set that can't be punished directly (by broadcasting round votes). Right now we have no guarantees about that in the code, and assuming that the bridge may be trusted only if other system components (here: either GRANDPA, or relayer + pallet owner) are running smoothly makes me feel insecure about the bridge. Imo there should be an automatic mechanism that does the thing without additional assumptions about other components' reliability. Something like:

  1. auto-halt if the pallet believes that the current session should have ended;
  2. stop accepting headers on a fork with scheduled forced changes;
  3. optionally - submit/broadcast an equivocation proof when misbehavior is seen. Optionally, because if (1) is implemented, then we'll only deal with current-set signatures.

There may be another way to achieve the same. We just shouldn't forget about it before deployment.
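Rule (2) above could be sketched as an import check, roughly like this (illustrative types, not the actual pallet's): once a header scheduling a forced change is seen on a fork, the pallet accepts that header but refuses everything after it.

```rust
// Illustrative sketch of "stop accepting headers on a fork with scheduled
// forced changes". Real header/digest types would come from the runtime.
struct Header {
    schedules_forced_change: bool,
}

#[derive(Debug, PartialEq)]
enum ImportError {
    ForcedChangeOnFork,
}

#[derive(Default)]
struct ForkState {
    forced_change_scheduled: bool,
}

impl ForkState {
    fn import(&mut self, header: &Header) -> Result<(), ImportError> {
        if self.forced_change_scheduled {
            // The fork switched authority sets without a justification,
            // so nothing built on top of it can be trusted any more.
            return Err(ImportError::ForcedChangeOnFork);
        }
        if header.schedules_forced_change {
            // Accept this header, but halt the fork from here on.
            self.forced_change_scheduled = true;
        }
        Ok(())
    }
}

fn main() {
    let mut fork = ForkState::default();
    assert!(fork.import(&Header { schedules_forced_change: false }).is_ok());
    assert!(fork.import(&Header { schedules_forced_change: true }).is_ok());
    assert_eq!(
        fork.import(&Header { schedules_forced_change: false }),
        Err(ImportError::ForcedChangeOnFork)
    );
}
```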

@tomusdrw
Contributor

Some extra context from @andresilva and @AlistairStewart:

in GRANDPA there are two mechanisms for switching the authority set:

  • Regular authority set changes - these are announced at a given block (through a header digest) and enacted when that block is finalized. This is what should happen if GRANDPA is working properly.
  • Forced authority set changes - these are announced at a given block (potentially with some enactment delay) and they are enacted when the block after the given delay is imported. Forced changes should only be triggered if the current GRANDPA authority set is not finalizing.
    Forced changes are pretty much a way of forking the GRANDPA authority set, but coordinated through the runtime. The only scenario where we'd want to use this is if the existing authority set stops working (e.g. >1/3 go offline), in which case we'd be stuck with the current authority set, since we wouldn't be able to transition to a later one.
    The only way to trigger a forced change is through a root call (https://github.com/paritytech/substrate/blob/master/frame/grandpa/src/lib.rs#L300), and the light clients won't be able to verify it, since they aren't executing the runtime (how can they trust the digest in the header if they didn't execute the code that generated it?)
    Alistair
    I'd anticipated light clients having to have a manual update for that. For some bridge applications that isn't even possible and we are just doomed if the current authority set fails.

So it seems we could have an ensure_owner_or_root call to approve a forced change, plus (1) or (2) from your proposal.
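That escape hatch might look roughly like the following. To be clear, Origin and ensure_owner_or_root here are illustrative stand-ins, not the real Substrate/bridge APIs: an owner-or-root-gated call approves the pending forced change, and the pallet resumes accepting headers.

```rust
// Illustrative sketch of a manual approval call for a forced change,
// gated on the pallet owner or root origin.
enum Origin {
    Root,
    Owner,
    Signed,
}

struct Pallet {
    // Set when a scheduled forced change halted header acceptance.
    halted_on_forced_change: bool,
}

// Stand-in for the suggested origin check.
fn ensure_owner_or_root(origin: &Origin) -> Result<(), &'static str> {
    match origin {
        Origin::Root | Origin::Owner => Ok(()),
        Origin::Signed => Err("BadOrigin"),
    }
}

impl Pallet {
    fn approve_forced_change(&mut self, origin: Origin) -> Result<(), &'static str> {
        ensure_owner_or_root(&origin)?;
        // Governance/owner vouched for the handoff: resume syncing past it.
        self.halted_on_forced_change = false;
        Ok(())
    }
}

fn main() {
    let mut pallet = Pallet { halted_on_forced_change: true };
    assert!(pallet.approve_forced_change(Origin::Signed).is_err());
    assert!(pallet.halted_on_forced_change);
    assert!(pallet.approve_forced_change(Origin::Root).is_ok());
    assert!(!pallet.halted_on_forced_change);
}
```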

@tomusdrw
Contributor

Moving to "Nice to Have", we can rely on governance for now.

@EmmanuellNorbertTulbure

We will rely on governance.
