Multiple fork transitions on same slot/epoch discard old `fork_version`s
#2902
This system was written with the expectation that forks would be far enough apart to not hit this issue, and we only really see this type of thing in artificial settings like testnets. To fix it, we would need to store the full fork schedule in the beacon state and modify many of the helpers to find the right fork version (or even rewrite the messages to accelerate the search). I personally don't think it is worth the additional consensus complexity to support this, although if there is some huge thing I'm missing, I'm open to discussing further :)
I agree that this is somewhat theoretical, insofar as real networks can be constructed not to have this problem. I also agree that properly fixing this might not be worth the technical complexity, for the reasons you outline. My concerns are:
I'd prefer to specify something along the lines of, e.g., that no two fork epochs can be equal unless they're 0. People could still run clients or tests that way regardless, but then the outcome is not defined per spec. This way, Kurtosis, Hive, and similar systems could elide an arbitrary number of forks at genesis, because no ambiguity exists there, but could not skip multiple forks afterwards.
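A client-side check for the proposed constraint could look like the following sketch. The function name, the `(name, epoch)` schedule shape, and the error handling are all illustrative assumptions, not anything from the spec or an existing client:

```python
def validate_fork_schedule(fork_epochs):
    """Reject configs where two forks activate at the same non-zero epoch.

    fork_epochs: list of (name, epoch) pairs in chronological order, e.g.
    [("altair", 0), ("bellatrix", 0), ("capella", 10)].

    Forks at epoch 0 may coincide: they are elided at genesis, so no
    intermediate state is ever observable and no ambiguity exists.
    """
    seen = set()
    for name, epoch in fork_epochs:
        if epoch != 0 and epoch in seen:
            raise ValueError(f"duplicate non-zero fork epoch {epoch} at {name}")
        seen.add(epoch)

# Kurtosis/Hive-style config: eliding forks at genesis passes the check,
# while two later forks sharing an epoch would raise ValueError.
validate_fork_schedule([("altair", 0), ("bellatrix", 0), ("capella", 10)])
```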
The result is that certain messages across multiple-fork-transition-per-epoch boundaries might not be verifiable when received.
https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/fork.md#upgrading-the-state states that the `fork` of the upgraded state takes its `previous_version` from `pre.fork.current_version`. https://github.com/ethereum/consensus-specs/blob/dev/specs/bellatrix/fork.md#upgrading-the-state states the same for Bellatrix, and https://github.com/ethereum/consensus-specs/blob/dev/specs/capella/fork.md#upgrading-the-state the same for Capella.
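The shared pattern of those upgrade functions can be sketched as follows (a paraphrase of the linked spec files, not verbatim spec code; the `Fork` container and version constants are simplified stand-ins). Running two upgrades back to back at the same epoch shows how the oldest version is discarded:

```python
from dataclasses import dataclass

@dataclass
class Fork:
    previous_version: bytes
    current_version: bytes
    epoch: int

# Illustrative fork versions only; real networks define their own.
GENESIS_FORK_VERSION = bytes.fromhex("00000000")
ALTAIR_FORK_VERSION = bytes.fromhex("01000000")
BELLATRIX_FORK_VERSION = bytes.fromhex("02000000")

def upgrade_fork(pre: Fork, new_version: bytes, epoch: int) -> Fork:
    # The pattern shared by upgrade_to_altair/bellatrix/capella:
    # only the immediately preceding version is retained.
    return Fork(
        previous_version=pre.current_version,
        current_version=new_version,
        epoch=epoch,
    )

# ALTAIR_FORK_EPOCH == BELLATRIX_FORK_EPOCH == 5: both upgrades run back to back.
fork = Fork(GENESIS_FORK_VERSION, GENESIS_FORK_VERSION, 0)
fork = upgrade_fork(fork, ALTAIR_FORK_VERSION, 5)
fork = upgrade_fork(fork, BELLATRIX_FORK_VERSION, 5)

# GENESIS_FORK_VERSION is now unreachable from the state.
print(fork.previous_version.hex())  # 01000000
```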
That is, even if there was no observable beacon chain time during which an `ALTAIR_FORK_EPOCH == BELLATRIX_FORK_EPOCH` or `BELLATRIX_FORK_EPOCH == CAPELLA_FORK_EPOCH` network existed in the intermediate fork, that intermediate fork will still show up as the `state.fork.previous_version` of the fork to which it was upgraded, which is not necessarily the chronologically previous fork (e.g., the fork which might appear in the beacon API fork schedule).

This means that signatures of, for example, attestations from a slot or two before an `ALTAIR_FORK_EPOCH == BELLATRIX_FORK_EPOCH` or `BELLATRIX_FORK_EPOCH == CAPELLA_FORK_EPOCH` transition, included in later slots (inclusion is supposed to be valid for up to 32 slots), cannot be verified afterward by a conforming client using https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/beacon-chain.md#get_domain as written, because the needed `fork_version` can no longer be reached from `state` by then.

When fork transitions only occur in successive epochs, the attestations which trigger this are already old enough to be invalid (`ATTESTATION_PROPAGATION_SLOT_RANGE == 32`), but with either the `minimal` preset or immediately-adjacent fork transitions, there can be glitches around these fork boundaries.