
✅ Add optimistic sync tests #3489

Draft

wants to merge 15 commits into base: dev
Conversation

@wenceslas-sanchez wenceslas-sanchez commented Aug 23, 2023

The goal of this PR is to add more test cases for the optimistic sync feature.

Test TODO:

Global TODO:

  • refactoring
  • doc

@wenceslas-sanchez
Contributor Author

Hi @hwwhww and @mkalinin,

I tried to implement the first case from this issue, which in my case is the following:

  • 3 branches are syncing
  • A is the branch with the most attestations, C the one with the fewest
  • A is the optimistic head
  • A is set to invalid => B becomes the optimistic head
  • B is set to invalid => C becomes the optimistic head

Does it match what you asked? I am not sure the way I am manipulating weights is the best way to do it (I used the participation_fn parameter of the state_transition_with_full_block method). Also, the test is not well written (a lot of duplicated code, etc.); if you are OK with what I implemented, I will come back with a clean version of this.
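The invalidation cascade described above can be sketched with a toy model (hypothetical names and weights, not the real pyspec helpers or fork choice code):

```python
# Toy model of the scenario: the optimistic head is the heaviest
# branch that has not been marked INVALID. Names and weights are
# illustrative only; the real test uses the pyspec fork choice store.
def optimistic_head(weights, invalidated):
    """Pick the heaviest branch that is not invalidated."""
    valid = {b: w for b, w in weights.items() if b not in invalidated}
    return max(valid, key=lambda b: valid[b])

weights = {"A": 30, "B": 20, "C": 10}   # A has the most attestations, C the fewest
invalidated = set()

assert optimistic_head(weights, invalidated) == "A"   # A is the optimistic head
invalidated.add("A")
assert optimistic_head(weights, invalidated) == "B"   # A invalid => B becomes head
invalidated.add("B")
assert optimistic_head(weights, invalidated) == "C"   # B invalid => C becomes head
```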

@mkalinin
Collaborator

The test looks good in general, but I am not sure that participation_fn will add an attestation on chain. I would probably use the on_attestation handler to add attestations to the test vector so a client will be able to apply them to its FC store.
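The effect of applying attestations through an on_attestation-style handler can be sketched with a toy store (this is a simplified model of how latest messages drive branch weight, not the actual spec code):

```python
# Toy sketch: an on_attestation-style handler records each validator's
# latest vote in the store; branch weight is derived from those votes.
# Names and the flat balance are illustrative assumptions.
class Store:
    def __init__(self):
        self.latest_messages = {}          # validator_index -> block_root

def on_attestation(store, validator_index, block_root):
    # Record the validator's latest vote; weights are derived from these.
    store.latest_messages[validator_index] = block_root

def get_weight(store, block_root, balance=32):
    # Sum the (flat, toy) balance of every validator voting for this root.
    return balance * sum(1 for r in store.latest_messages.values() if r == block_root)

store = Store()
for v in range(5):
    on_attestation(store, v, "block_a")
on_attestation(store, 5, "block_b")
assert get_weight(store, "block_a") == 160
assert get_weight(store, "block_b") == 32
```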

@wenceslas-sanchez
Contributor Author

The test looks good in general, but I am not sure that participation_fn will add an attestation on chain. I would probably use the on_attestation handler to add attestations to the test vector so a client will be able to apply them to its FC store.

I did manage attestations instead of participation indices, but it can't run with the mainnet config because the function get_committee_count_per_slot caps the number of committee indices at 1. I think that to make it work on the mainnet configuration, we could manually assign the validator indices of each committee to attest to one branch or another.
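The manual assignment idea can be sketched as follows (a toy round-robin split of one committee's validator indices across branches; the function name and committee size are hypothetical):

```python
# Toy sketch: when only one committee exists per slot, split its
# validator indices across the branches manually so each branch
# still receives attestations. Round-robin assignment is one option.
def split_committee(committee, n_branches):
    """Assign each validator index in one committee to a branch."""
    assignments = {b: [] for b in range(n_branches)}
    for i, validator_index in enumerate(committee):
        assignments[i % n_branches].append(validator_index)
    return assignments

committee = list(range(12))          # toy committee of 12 validator indices
per_branch = split_committee(committee, 3)
assert per_branch[0] == [0, 3, 6, 9]
assert per_branch[1] == [1, 4, 7, 10]
assert per_branch[2] == [2, 5, 8, 11]
```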

@wenceslas-sanchez
Contributor Author


Now it runs whatever the preset, plus I added a test with equal weight (1 validator attesting per branch).

@mkalinin
Collaborator

mkalinin commented Sep 1, 2023

@wenceslas-sanchez is this PR ready for review or are you expecting to do more changes to it?

@wenceslas-sanchez
Contributor Author

@wenceslas-sanchez is this PR ready for review or are you expecting to do more changes to it?

Nothing more than code refactoring, so you can review it.


@mkalinin mkalinin left a comment


Test scenarios look good to me! 👍
I haven't looked deep into implementation design though, I hope @hwwhww will do that 😜

@@ -112,3 +114,209 @@ def test_from_syncing_to_invalid(spec, state):
assert mega_store.opt_store.head_block_root == signed_blocks_a[-1].message.hash_tree_root()

yield 'steps', test_steps


def add_attestation_and_sign_block_with_aggregation_bit_list(spec, state, store, block, index, aggregation_bit_list):
Collaborator


This function should be moved to one of the helper packages I believe


@wenceslas-sanchez wenceslas-sanchez Sep 3, 2023


Isn't this method too specific to what I am testing? For instance, I pad the aggregation_bits with zeros to match the expected list size.
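The padding step described above can be sketched like this (a plain Python list standing in for the SSZ Bitlist type; the function name is hypothetical):

```python
# Toy sketch of padding an aggregation bit list with zeros up to the
# committee size, as described in the comment above. A real test would
# build an SSZ Bitlist; a plain list keeps the sketch self-contained.
def pad_aggregation_bits(bits, committee_size):
    if len(bits) > committee_size:
        raise ValueError("more bits than committee members")
    return bits + [0] * (committee_size - len(bits))

assert pad_aggregation_bits([1, 1], 8) == [1, 1, 0, 0, 0, 0, 0, 0]
assert pad_aggregation_bits([], 4) == [0, 0, 0, 0]
```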

Collaborator


I think this method can potentially be reused in other tests, cc @hwwhww

Contributor


Slightly inclined to move to fork_choice.py.

@hwwhww
Contributor

hwwhww commented Sep 11, 2023

Sorry for the late reply! Great work @wenceslas-sanchez! I'm now picking up the reviews after the long conference weeks. 🙏

There was an unrelated issue (#3495) that broke our CI. I fixed it and the linter errors. But now we can see that the test_multiple_branches_sync_all_invalidated_but_one_equal_weight test case failed with the capella, eip6110, and eip7002 forks, but succeeded with bellatrix and deneb. My intuition is that somehow there is a tie.

@wenceslas-sanchez
Contributor Author

Sorry for the late reply! Great work @wenceslas-sanchez! I'm now picking up the reviews after the long conference weeks. 🙏

There was an unrelated issue (#3495) that broke our CI. I fixed it and the linter errors. But now we can see that the test_multiple_branches_sync_all_invalidated_but_one_equal_weight test case failed with the capella, eip6110, and eip7002 forks, but succeeded with bellatrix and deneb. My intuition is that somehow there is a tie.

It seems that you are right, and that's because each preset/fork generates a different spec.Root(signed_block.message.hash_tree_root()). Below is the result per fork/preset.

          Bellatrix  Capella  Deneb  EIP6110
Minimal   ✅         ❌       ✅     ❌
Mainnet   ❌         ❌       ❌     ❌

(It seems to have a lot of red crosses, but sometimes the failure happens at the first assertion, and sometimes at the second.)

What we can do to manage this:

  • build a test per fork/preset (not elegant)
  • since we know the value of spec.Root(signed_block.message.hash_tree_root()), we can order the assertions following the hash tree roots
  • build one unique test for one fork and one/two presets

I will try to come back very soon with the second solution.
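The second solution can be sketched with toy roots: when branch weights are equal, the expected head can be derived from the ordering of the roots themselves instead of being hard-coded per fork/preset (illustrative only; the actual spec tie-break logic lives in get_head):

```python
# Toy sketch of ordering the assertion by hash tree root: among
# equal-weight branches, pick the head by comparing the root bytes,
# so the expected head is computed rather than hard-coded per fork.
def expected_head(weights_by_root):
    # max() over (weight, root) tuples: weight dominates, the root
    # bytes break ties deterministically.
    return max(weights_by_root, key=lambda root: (weights_by_root[root], root))

equal = {b"\x01" * 32: 32, b"\x02" * 32: 32, b"\x03" * 32: 32}
assert expected_head(equal) == b"\x03" * 32    # highest root wins the tie

unequal = {b"\x01" * 32: 64, b"\x03" * 32: 32}
assert expected_head(unequal) == b"\x01" * 32  # weight dominates otherwise
```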

@wenceslas-sanchez
Ok, I think I found a solution, but it's not super elegant... what do you think?

data=attestation_data,
)
sign_attestation(spec, state, attestation)
spec.on_attestation(store, attestation, is_from_block=True)

@hwwhww hwwhww Sep 16, 2023


It's implicit (sorry!), but I built all the add_* helpers on top of the fork choice spec on_* handlers, and they yield something. If we yield here, it will output the attestation SSZ file to the test vectors.

In the given test cases, I think we don't need the spec.on_attestation call, because the block will include the attestations?

If so, we should:

  1. remove this line
  2. rename add_attestation_and_sign_block_with_aggregation_bit_list to sign_block_with_aggregation_bit_list

If not, we should:

  1. replace this line with the fork_choice::add_attestation helper:
     yield from add_attestation(spec, store, attestation, test_steps)
  2. replace signed_block = add_attestation_and_sign_block_with_aggregation_bit_list with signed_block = yield from add_attestation_and_sign_block_with_aggregation_bit_list
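The yield-through pattern described above can be sketched with toy helpers (these are not the real pyspec functions; they only illustrate why the caller needs `yield from` to forward the yielded test vector artifacts):

```python
# Toy sketch: add_* helpers are generators that yield test vector
# artifacts; wrappers must use `yield from` so those artifacts
# propagate to the test runner. Helper names are illustrative.
def add_attestation(attestation, test_steps):
    test_steps.append({"attestation": attestation})
    yield ("attestation_file", attestation)     # would be the SSZ file output

def add_attestation_and_sign_block(attestation, test_steps):
    yield from add_attestation(attestation, test_steps)   # forward the yield
    return {"signed_block": attestation}                  # generator return value

def run():
    test_steps, outputs = [], []
    gen = add_attestation_and_sign_block("att_0", test_steps)
    # Drain the generator, capturing yielded artifacts and the return value.
    try:
        while True:
            outputs.append(next(gen))
    except StopIteration as stop:
        signed_block = stop.value
    return test_steps, outputs, signed_block

steps, outs, block = run()
assert steps == [{"attestation": "att_0"}]
assert outs == [("attestation_file", "att_0")]
assert block == {"signed_block": "att_0"}
```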

Contributor Author


Oh okay, understood. I renamed the method, removed the on_attestation call, and also moved it to fork_choice.py.
