Ignore subset aggregates #2847
Conversation
When aggregates are propagated through the network, it is often the case that a better aggregate has already been seen - in particular, this happens when an aggregator has not been able to include itself in the mesh and therefore publishes an aggregate with only its own attestations.

This new ignore rule allows dropping all aggregates that are (non-strict) subsets of aggregates that have already been seen on the network. In particular, it does not mandate dropping aggregates where a union of previous aggregates would cause them to become a subset.

The logic for allowing this is based on the premise that any aggregate that has already been seen by a peer will also have been seen by its neighbours - a subset aggregate (strict or not) brings no new value to the aggregation algorithm, except in the extreme edge case where you could combine several such sparse aggregates into a single, more dense "combined" aggregate and thus use less block space.

Further, as a small benefit, computing the `hash_tree_root` of the full aggregate is generally not done - however, `hash_tree_root(data)` is already computed for other purposes, as it is used as an index in the beacon API.
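A minimal sketch of what such an ignore check could look like, assuming a per-data-root cache of previously seen aggregation bits (the cache and function names here are illustrative, not from the spec):

```python
from typing import Dict, List, Tuple

# Hypothetical cache: for each attestation data root, the aggregation
# bitfields of aggregates already seen on gossip.
seen_aggregation_bits: Dict[bytes, List[Tuple[bool, ...]]] = {}

def is_subset(candidate: Tuple[bool, ...], seen: Tuple[bool, ...]) -> bool:
    """True if every bit set in candidate is also set in seen (non-strict subset)."""
    return all(s for c, s in zip(candidate, seen) if c)

def should_ignore_aggregate(data_root: bytes, bits: Tuple[bool, ...]) -> bool:
    # IGNORE if the bits are a (non-strict) subset of any *single*
    # previously seen aggregate; the rule does not mandate checking
    # against the union of everything seen so far.
    return any(is_subset(bits, seen) for seen in seen_aggregation_bits.get(data_root, []))

def on_aggregate_accepted(data_root: bytes, bits: Tuple[bool, ...]) -> None:
    seen_aggregation_bits.setdefault(data_root, []).append(bits)
```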
How would this affect gossip scoring? Should the current parameters be fine-tuned?
Also, this should be tested in advance so it does not require a revert down the line, like what happened with #2183.
Should we consider the same change for SignedContributionAndProof messages as well?
This should be fine because it's an IGNORE, which means the sender is not considered dishonest.
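For context, a rough sketch of how the IGNORE/REJECT distinction typically surfaces in client-side gossip validation (the verdict names mirror libp2p gossipsub's validation results; the handler shape is illustrative and reuses should_ignore_aggregate from the sketch above):

```python
from enum import Enum, auto

class ValidationResult(Enum):
    ACCEPT = auto()  # valid and useful: forward to mesh peers
    IGNORE = auto()  # valid but not useful: drop without penalizing the sender
    REJECT = auto()  # invalid: drop and penalize the sender's peer score

def validate_aggregate(data_root: bytes, bits) -> ValidationResult:
    if should_ignore_aggregate(data_root, bits):
        # A subset of something already seen: silently drop. The sender
        # is not considered dishonest, so its score is unaffected.
        return ValidationResult.IGNORE
    # ... signature, slot and aggregator checks would go here,
    # returning REJECT on failure ...
    return ValidationResult.ACCEPT
```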
Certainly. Should note, though, that this actually piggybacks on the #2183 functionality of dropping dupes, so it would in theory reduce bandwidth through a known mechanism.
I'm fine with this change, but want to hear more client team/engineering input on complexity.
It piggybacks on a known mechanism for reducing bandwidth -- essentially dropping the message from gossip (which has an amplification factor) in favor of disseminating the unnecessary message through slower IWANT/IHAVE direct comms.
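A back-of-the-envelope illustration of that amplification saving; all numbers below are placeholders, not measured or spec values:

```python
# Eager mesh push sends the full message to every mesh peer, while lazy
# IHAVE/IWANT gossip advertises only the message id and sends the full
# body just to peers that explicitly request it.
mesh_degree = 8        # illustrative gossipsub mesh degree (D)
aggregate_size = 350   # illustrative serialized aggregate size, bytes
msg_id_size = 20       # illustrative message-id size, bytes

eager_cost = mesh_degree * aggregate_size  # full body pushed to all mesh peers
lazy_cost = mesh_degree * msg_id_size      # ids only, plus one full send
                                           # per actual IWANT request
print(f"eager: {eager_cost} B, lazy (before IWANTs): {lazy_cost} B")
```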
Sure, why not :) d5f4c46
The spec says: Meaning this is in client territory for now - if your client has a score for an "expected message volume" not being met on the aggregate channel, seeing fewer messages will cause gossipsub descoring. Further, if your client penalizes peers for
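The parameters in question are the gossipsub v1.1 topic score parameters tied to expected message volume; which values a client uses is indeed client territory, so the numbers below are purely illustrative:

```python
# Illustrative gossipsub v1.1 topic score parameters for the aggregate
# topic (parameter names follow the gossipsub v1.1 spec; values are
# placeholders). If subset aggregates are IGNOREd, each mesh peer
# delivers fewer first-seen messages, so a threshold tuned for the old
# volume could start descoring honest peers and may need lowering.
aggregate_topic_score_params = {
    "mesh_message_deliveries_weight": -1.0,     # penalty weight when below threshold
    "mesh_message_deliveries_threshold": 10.0,  # expected deliveries per decay window
    "mesh_message_deliveries_window": 2.0,      # seconds a duplicate still counts
}
```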
Implemented this in Teku. The implementation was quite straightforward, and early indications are that there's a small reduction in CPU usage as a result.
Hopefully you should see a reduction in outgoing bandwidth as well! Over time, this should translate into an incoming bandwidth reduction too.
Yes, also a slight decrease in outgoing bandwidth.
With the description above in mind: does "not mandate" mean "MAY NOT drop .." or "MUST NOT drop .."? It's definitely cheaper to check against a single bitfield per attestation data than to look over all known aggregates. At least Lodestar pre-aggregates them in the operation pool, so we don't retain all aggregates from gossip as received.
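To illustrate the trade-off raised in that last comment, a sketch of the cheaper single-bitfield variant, assuming aggregates are pre-aggregated into one union bitfield per attestation data root (as described for Lodestar's operation pool). Note that this drops more than the proposed rule mandates, since an aggregate can be covered by the union without being a subset of any single seen aggregate - hence the MAY/MUST question:

```python
from typing import Dict

# One union bitfield per attestation data root, packed as an int for
# brevity. Checking a new aggregate is O(bits) instead of
# O(seen aggregates x bits).
union_bits: Dict[bytes, int] = {}

def covered_by_union(data_root: bytes, bits: int) -> bool:
    # True if every bit in `bits` is already present in the union.
    return (bits & ~union_bits.get(data_root, 0)) == 0

def add_to_union(data_root: bytes, bits: int) -> None:
    union_bits[data_root] = union_bits.get(data_root, 0) | bits
```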