This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

Groundwork for generalized warp sync #5454

Merged: rphmeier merged 23 commits into master from aura-contract-warp on Apr 25, 2017

Conversation

rphmeier (Contributor) commented Apr 13, 2017

Makes changes to the engines module that will allow warp sync, ancient block download, and light client sync to work even for complex engines.

Engines now detect and generate data for epoch changes. State proofs validating this data, along with the transitions themselves, are stored in the blockchain extras DB.

Snapshot block chunks are repurposed into more general "secondary chunks", which engine-specific snapshot components create and interpret to corroborate the data within the snapshot state chunks. For proof-of-work engines, these chunks will be as specified in the wiki.

PoA engines like Aura and BasicAuthority will include all transitions in the snapshot, as described in #5427.
Ancient block download and light sync will make use of an EpochVerifier that caches an in-memory list of validators derived from the validated transitions obtained in the snapshot (or, in the light client case, from the network). When an epoch change occurs, we simply obtain a new EpochVerifier for that epoch. Ethash doesn't currently use this mechanism and just pretends the whole chain is one "epoch", so we never need to swap, but it could be altered in the future to make our DAG caching fully optimal.
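
To make the shape of this concrete, here is a minimal Rust sketch of what such a verifier could look like. The trait name matches the one discussed in this PR, but the methods and signatures below are illustrative assumptions rather than the exact interface introduced here.

use error::Error;
use header::Header;

/// Sketch of an epoch verifier: it carries everything needed to check headers
/// belonging to a single epoch, so blocks can be verified out of order and
/// without access to chain state. Method names are assumptions.
pub trait EpochVerifier: Send + Sync {
	/// The number of the epoch this verifier covers.
	fn epoch_number(&self) -> u64;

	/// Cheaply verify a header claimed to belong to this epoch.
	fn verify_light(&self, header: &Header) -> Result<(), Error>;

	/// Optionally perform heavier checks; defaults to the light check.
	fn verify_heavy(&self, header: &Header) -> Result<(), Error> {
		self.verify_light(header)
	}
}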

This approach isn't completely resilient (and indeed won't really work for something like Rinkeby's Clique consensus; for that I'd wait on introducing engine-specific storage and have Clique snapshots contain the entire header chain), but it will get the job done for now.

The "genesis epoch" has number 0, so any epochs beyond it must be numbered starting from 1; otherwise warp and light sync will break.

@rphmeier rphmeier added A3-inprogress ⏳ Pull request is in progress. No review needed at this stage. M4-core ⛓ Core client code / Rust. labels Apr 13, 2017

#[test]
fn rejects_step_backwards() {
	let tap = AccountProvider::transient_provider();

Review comment: indent.

rphmeier (Contributor, Author) commented Apr 20, 2017

To give a quick run-down of what's still left to do:

  • Implementation of snapshot chunk and restore traits for an authority-based method.
  • Implementation of ancient block restoration which uses an EpochVerifier that changes whenever a header is a transition (roughly as sketched below). Perhaps this will be done via another trait hanging off of SnapshotComponents.
  • Ensure restoration database always contains genesis epoch data. Otherwise, contracts seeded with initial validators won't work when restoring from snapshot.
  • Tests!

This implementation will be in a separate branch.
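
To illustrate the second item above, here is a rough sketch of how an ancient-block restoration loop might swap verifiers at transitions. The helper names (genesis_epoch_verifier, epoch_verifier_at) and the error handling are hypothetical stand-ins, not necessarily the interface that branch will adopt.

// Hypothetical sketch: verify imported ancient headers with the verifier for
// the current epoch, and swap in a fresh verifier whenever a header marks an
// epoch transition (the transition data would come from the snapshot).
fn restore_ancient_headers(headers: &[Header], engine: &Engine) -> Result<(), Error> {
	let mut verifier = engine.genesis_epoch_verifier()?;
	for header in headers {
		if let Some(next) = engine.epoch_verifier_at(header)? {
			// `header` is a transition; subsequent headers belong to the new epoch.
			verifier = next;
		}
		verifier.verify_light(header)?;
	}
	Ok(())
}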

@rphmeier rphmeier changed the title from "[WIP] Generalized warp sync" to "Groundwork for generalized warp sync" Apr 20, 2017
@rphmeier rphmeier added A0-pleasereview 🤓 Pull request needs code review. and removed A3-inprogress ⏳ Pull request is in progress. No review needed at this stage. labels Apr 20, 2017
rphmeier (Contributor, Author) commented

@keorn

I'm mildly concerned about how Multi validator sets will interact with this; there's a requirement that (for correct behavior) epochs increase monotonically. My solution for now was to add the "inner" set's epoch number to the block number at which the transition occurred, although this will break for nested multi-sets and seems fragile.
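
For illustration only (this helper and its comments are not taken from the PR), the workaround amounts to something like this:

/// Hypothetical helper showing the workaround described above: a Multi set
/// derives an epoch number for an inner set's transition by offsetting the
/// inner epoch with the block number at which that inner set became active.
/// Intended to increase monotonically as the chain progresses, but, as noted,
/// it breaks for nested multi-sets and is fragile.
fn multi_epoch_number(transition_block: u64, inner_epoch: u64) -> u64 {
	transition_block + inner_epoch
}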

@keorn keorn left a comment

Thanks, looks good in general. I like the Unsure stuff to keep state out of Engine. For Multi, maybe handling the recursive case is possible, as mentioned.

use error::Error;
use header::Header;

/// Verifier for all blocks within an epoch without accessing

Review comment: the doc comment above is unfinished.

pub type ChunkSink<'a> = FnMut(&[u8]) -> io::Result<()> + 'a;

// How many blocks to include in a snapshot, starting from the head of the chain.
const SNAPSHOT_BLOCKS: u64 = 30000;

Review comment: Can this be Engine-specific? Different engines have different security requirements.
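
One hypothetical way to do that, not what this PR implements, would be to move the constant behind a method with a proof-of-work default, for example:

// Sketch only: let each engine state how many trailing blocks its snapshots
// must include, instead of relying on a single global constant. The trait and
// method names here are assumptions for illustration.
pub trait SnapshotPolicy {
	/// Number of blocks, counted back from the chain head, that a snapshot
	/// must include to satisfy this engine's security requirements.
	fn snapshot_blocks(&self) -> u64 {
		30_000 // the proof-of-work figure used by SNAPSHOT_BLOCKS above
	}
}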

@@ -380,6 +390,12 @@ impl Client {
			return Err(());
		};

		let verify_external_result = self.verifier.verify_block_external(header, &block.bytes, engine);
		if let Err(e) = verify_external_result {
			warn!(target: "client", "Stage 4 block verification failed for #{} ({})\nError: {:?}", header.number(), header.hash(), e);

Review comment: Stage 4 is here.

rphmeier (Author): (changed that to stage 5)

		self.correct_set_by_number(header.number()).1.epoch_proof(header, caller)
	}

	fn epoch_set(&self, header: &Header, proof: &[u8]) -> Result<(u64, super::SimpleList), ::error::Error> {

Review comment: Maybe pass the previous block's epoch nonce and return a ValidatorSet instead of SimpleList? Then you can bump the nonce on the recursive call.

rphmeier (Author): The main problem there is that it's hard to say what the previous block's epoch nonce would be without recovering it, meaning you would have to package two state proofs for each transition in warp/light sync rather than one.

rphmeier (Author): Right now, I think it's fair to say that nested multi-sets don't make any sense. We might just want to rewrite the validator set constructor to take a flag indicating whether we're at the top level (set it to false before recursing) and throw an error on a nested multi-set.

Review comment: Makes sense, though this makes SimpleList a special, fundamental ValidatorSet rather than just another trait implementation. But I guess that's fine since there are no other scenarios yet.

rphmeier (Author): I'm fine with generalizing a bit later, but all current validator sets can express themselves as a SimpleList at any given block, and we need fast, out-of-order verification of blocks for the epoch verifier.
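
A small sketch of the property being relied on here; the Address alias and the membership check are simplified stand-ins, not the real SimpleList implementation:

/// Stand-in for the client's 160-bit account address type.
type Address = [u8; 20];

/// Sketch: once an epoch's validator set is captured as a flat list, any
/// header in that epoch can be checked in isolation and in any order, with
/// no access to chain state (real engines also verify the seal, elided here).
pub struct SimpleList {
	validators: Vec<Address>,
}

impl SimpleList {
	pub fn contains(&self, author: &Address) -> bool {
		self.validators.contains(author)
	}
}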

@rphmeier rphmeier added A8-looksgood 🦄 Pull request is reviewed well. and removed A0-pleasereview 🤓 Pull request needs code review. labels Apr 25, 2017
rphmeier (Contributor, Author) commented

(tagging on behalf of @keorn)

@rphmeier rphmeier merged commit 35958a0 into master Apr 25, 2017
@rphmeier rphmeier deleted the aura-contract-warp branch April 25, 2017 15:58
Labels
A8-looksgood 🦄 Pull request is reviewed well. M4-core ⛓ Core client code / Rust.