Calling engine_preparePayload in advance #2715
Comments
As pointed out by @djrtwo, the … This issue still remains relevant; I'll just change some naming when that PR merges.
With the current VC/BN split I would very much favor using the VC to send a preparation call. There are two reasons for this: it removes the assumption that a single BN is responsible for a given validator, which is not the case in more advanced architectures, and it allows the VC to provide the coinbase address (which you didn't mention in your API call, but should be in there).
Good point about the feeRecipient address, I'll add that in. It could easily be provided in either scenario, so I don't think it's a reason for VC driving, though. I think BN driving is just as suitable for 1:many VC:BN; a VC can tell multiple BNs that they should be preparing payloads. It may result in some unnecessary preparePayload calls, but I'm not sure that's an issue, considering that in PoW the execution client always expects to produce the next block. I'm not sure about the "advanced architectures", but I'm certainly open to hearing what they might be. Another way to think about BN driving is that it's exactly equivalent to VC driving, but it calls preparePayload in advance and defers the exact timing to the BN.
The VC does, in general, have control of the information and flow. Graffiti is the most obvious counterpart today for information: if the BN held the fee recipient information then we'd end up having to configure both the VC (for graffiti) and the BN (for fee recipient) for each validator, which seems overly restrictive. As for flow, the VC already tells the BN what to do with calls like sync committee subscriptions, so this seems to fit the existing flow better. And if we ever end up with any proof-of-custody scheme we will have to have the VC call the BN, as it will need to provide a proof, so this seems more future-proofed as well.
As for advanced architectures, one example would be a pool of VCs working with a pool of BNs. Any of the BNs could be asked by a VC to prepare a block, but getting all of them to do so would be wasteful. It would also require the VC to notify all of the BNs after it had fetched its block from one of them, to tell them to stop preparing, which is a very odd way of doing things compared to selecting a relevant BN and asking it, and it alone, to generate the block.
I've never suggested the BN should control the …

After considering your points, I think the "BN driving" scenario can be designed in a way that allows simplistic VC implementations to go "hands-free" whilst still working just as well for the more complicated scenarios you describe. Here's an updated description:

BN Driving

At some point in time, the VC publishes this message to the BN:

```
# POST validator/potential_beacon_proposers
[
    {
        "validator_index": "0",
        "feeRecipient": "0xabc.."
    }
]
```

Upon receiving that message, the BN knows that it should try and ensure a payload is prepared if it ever expects validator 0 to produce a block. This approach has the following properties: …
VC Driving

At exactly the correct time(s), the VC publishes this/these message(s) to the BN:

```
# POST validator/prepare_payload
{
    "slot": "42",
    "head_block_root": "0x123...",
    "feeRecipient": "0xabc.."
}
```
Summary

Hopefully it is shown that "BN driving" is still able to avoid wastefulness in a multi-BN environment. I find it to be more flexible and equally as powerful as "VC driving". I also find "BN driving" to be more amenable to optimization in the BN (in the scenario where it is given advance notice of proposers) and more agreeable to VC implementations that aim for simplicity. Additionally, "BN driving" further distances the BN<>VC API from changes in the EL<>CL workflow.

One thing to note is that "VC driving" allows the VC to specify the …
I'm not seeing why … The core question seems to be whether the BN or VC is responsible for mapping the shuffling to a specific proposer. Having the VC provide (…) … This does seem like a reasonable responsibility for the validator client, but the new design of …
I definitely think it is simpler for the BN to handle this responsibility, and for the BN to also eventually call … I would not want one piece of software to call …

Another reason that I think this should go into BN responsibility is that the BN is aware of the head (and changes to it). Thus the BN is the most up-to-date entity to trigger build processes. Not to mention -- in the next wave of specs …
Thank you for the detailed consideration, although I do think that the concerns about VC driving are largely unfounded, as these situations are already all dealt with today by a VC that creates block proposals.

With the revised "BN driving" option I think that this addresses most of my concerns. There remains an issue around a BN restarting after the …
I'm curious -- in a VC-driven … The value returned from … If … Beyond that, because …
IMO, …
Sorry, for some reason I didn't receive notifications about your response.
The main thrust of "VC driving" for me would be that the VC would call … But as mentioned above, with the short-term subscriptions that Paul put forward for BN driving, I think that the API makes everyone happy. Perhaps the arbitrary "2 epochs" could become an …

If @paulhauner is happy with me doing so, I'll write up a PR for the beacon-apis repo that contains this design.
That would be great, thank you!
I stumbled on this searching for something else. I think we can safely close it now that this feature is implemented and working!
Description
In a post-merge beacon chain, a CL (consensus layer/eth2) node will need to call two functions in order to prepare a block:
- engine_preparePayload: returns a payloadId.
- engine_getPayload: accepts a payloadId.

The ultimate goal of these two calls is to return an ExecutionPayload, which is effectively an execution (eth1) block to be included in a consensus (eth2) block.

The reason there are separate preparePayload and getPayload calls is to allow the CL nodes to give the EL (execution layer/eth1) nodes some time to prepare the payload (i.e., find the best set of transactions it can). To this end, in the ideal case we should call preparePayload some time before we call getPayload.
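As a rough sketch of that two-call flow over JSON-RPC (the endpoint URL and the preparePayload field names here are assumptions for illustration, not the authoritative engine API schema):

```python
import json
from urllib.request import Request, urlopen

ENGINE_URL = "http://localhost:8550"  # assumed EL engine API endpoint

def engine_rpc(method, params):
    """Make one JSON-RPC call to the execution client."""
    body = json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": method, "params": params}).encode()
    req = Request(ENGINE_URL, data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# 1. Ask the EL to start preparing a payload on top of the expected parent.
#    (Field names are illustrative; see the engine API spec for the schema.)
payload_id = engine_rpc("engine_preparePayload", [{
    "parentHash": "0x123...",
    "timestamp": "0x61ad1f39",
    "feeRecipient": "0xabc...",
}])

# 2. Later -- ideally after the EL has had time to find a good set of
#    transactions -- exchange the payloadId for the built ExecutionPayload.
execution_payload = engine_rpc("engine_getPayload", [payload_id])
```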
The purpose of this issue is to establish when the CL nodes should call preparePayload and to consider the engineering requirements for CL implementations (e.g., Lighthouse).
When to call preparePayload
Let's start with three basic constraints about when and how to call preparePayload:

1. We should only call preparePayload if we expect to propose the block at slot s.
2. Since preparePayload requires the parentHash, we can only call it after we know the parent of the block at slot s.
3. In the common case, the parent of the block at slot s is the block at slot s - 1, so the earliest sensible time to call preparePayload is during slot s - 1, once we know the parent of the block at slot s.
Given these constraints, we could say that preparePayload should be called whenever the canonical head changes during slot s - 1.

But alas, there is an edge-case. What if the node never receives a block at slot s - 1 (i.e., s - 1 is a "skip slot")? The head could remain unchanged (e.g. the block at slot s - 2) and therefore we'd never call preparePayload.

In light of skip slots, it seems we may need to decide at some point during slot s - 1 that we're probably not going to get a block and that we should call preparePayload with the current head (e.g. s - 2). This point would be the threshold at which we assume there is a skip slot, so let's call it assumed_skip_slot_threshold.
.We can now form a general definition of when to call preparePayload:
General definition
If a CL node expects to propose a block at slot s, then it should call preparePayload with values computed from the canonical head whenever the following events occur during slot s - 1:

1. The canonical head changes (i.e., a new head block is imported).
2. assumed_skip_slot_threshold is reached, and the first condition (1) has not already been triggered.

The nitty gritty of implementation
Proposer shuffling
Our previous definition makes the assumption that we always know the proposers for slot s at slot s - 1. This is not strictly true. The proposer shuffling for epoch e can only be known after the final block in epoch e - 1 is processed.

This means that if s is the first slot of an epoch (i.e., s % SLOTS_PER_EPOCH == 0, so that s - 1 is the last slot of the previous epoch), we won't know what the proposer shuffling is until we either (a) receive a block at slot s - 1 or (b) hit assumed_skip_slot_threshold and assume that there is no block at s - 1.
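To make the boundary condition concrete, a tiny sketch (SLOTS_PER_EPOCH is the mainnet value; the helper name is mine):

```python
SLOTS_PER_EPOCH = 32  # mainnet value

def shuffling_knowable_before_parent(s: int) -> bool:
    """True if the proposer shuffling for slot s can already be known
    before the block at slot s - 1 is processed, i.e. slot s does not
    start a new epoch."""
    return s % SLOTS_PER_EPOCH != 0
```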
With this in mind, we can create a more implementation-specific definition that is aware of proposer-shuffling constraints:
Proposer-shuffling aware definition
If the CL node is performing duties for any active validators, then it should run the maybe_prepare_payload routine whenever:

1. The canonical head changes during slot s - 1.
2. assumed_skip_slot_threshold is reached, and the first condition (1) has not already been triggered.

Where maybe_prepare_payload involves:

1. Taking the head state and running process_slots to advance it to slot s.
2. Checking whether one of our validators is the proposer at slot s. If so, continue, else exit.
3. Calling preparePayload with values computed from the canonical head, as sketched below.
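In spec-style Python, maybe_prepare_payload might look roughly like the following. process_slots and get_beacon_proposer_index are the standard consensus-spec functions; get_head_state, head_execution_block_hash and prepare_payload are hypothetical helpers standing in for fork-choice access and the engine API call:

```python
from copy import deepcopy

def maybe_prepare_payload(store, s, our_validator_indices):
    """Run during slot s - 1, either when the head changes or when
    assumed_skip_slot_threshold is reached without a new head."""
    # 1. Clone the head state and advance it to slot s. Crossing an epoch
    #    boundary here is what makes the new shuffling computable.
    state = deepcopy(get_head_state(store))
    process_slots(state, s)

    # 2. Exit unless one of our validators proposes at slot s.
    proposer_index = get_beacon_proposer_index(state)
    if proposer_index not in our_validator_indices:
        return

    # 3. Ask the EL to start building on the current head
    #    (ultimately an engine_preparePayload call).
    prepare_payload(parent_hash=head_execution_block_hash(store))
```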
Note: maybe_prepare_payload can be optimized in the non-epoch-boundary scenario to avoid calling process_slots, but this definition aims to be simple and general.

Is the VC or BN driving?
You may notice that I've used "CL node" instead of referring to the duties of a beacon node (BN) or validator client (VC). That's because it's not immediately clear whether the BN or VC should be the one driving this series of events.
VC driving
In the "VC driving" scenario, the BN has no idea about which validators may produce blocks at slot
s
. It is up to the VC to ensure that the BN issues a relevant preparePayload request at the correct time(s). The "VC driving" process looks like this:If the VC is performing duties for any active validators, then it should run the
maybe_prepare_payload
routine whenever:head
SSE event).assumed_skip_threshold
is reached, and the first condition (1) has not already been triggered.Where
maybe_prepare_payload
involves:s
duties/proposer
endpoint.s
. If so, continue, else exit.validator/prepare_payload
for the time being.The definition of
validator/prepare_payload
requires some thought too. I propose it should take(slot, head_block_root)
as parameters and return nothing. It will be the duty of the BN to hold thepayloadId
and provide it during a getPayload request. For the input parameters,slot
is the slot in which the VC expects to propose a slot (i.e.,s
) andhead_block_root
will be head block at the time of the call (i.e., the expected parent of the beacon block it expects to propose ats
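For illustration, the VC side of this flow might look like the sketch below. The duties/proposer endpoint is the standard beacon-API one; the validator/prepare_payload endpoint (and its exact path prefix) is the hypothetical one proposed above:

```python
import requests  # third-party HTTP client

BN_URL = "http://localhost:5052"  # assumed BN HTTP API address
SLOTS_PER_EPOCH = 32              # mainnet value

def vc_maybe_prepare_payload(s, our_validator_indices, head_block_root):
    """VC driving: run during slot s - 1 on a head event, or when
    assumed_skip_slot_threshold is reached."""
    # 1. Fetch the proposer duties for the epoch containing slot s.
    epoch = s // SLOTS_PER_EPOCH
    duties = requests.get(
        f"{BN_URL}/eth/v1/validator/duties/proposer/{epoch}"
    ).json()["data"]

    # 2. Exit unless one of our validators is the proposer at slot s.
    if not any(int(d["slot"]) == s and
               int(d["validator_index"]) in our_validator_indices
               for d in duties):
        return

    # 3. Tell the BN to have the EL start preparing a payload.
    requests.post(f"{BN_URL}/eth/v1/validator/prepare_payload", json={
        "slot": str(s),
        "head_block_root": head_block_root,
    })
```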
BN driving
In the "BN driving" scenario, the VC knows nothing of the preparePayload request. Instead, just tells the BN which validators it is managing and the BN transparently calls preparePayload when it sees fit.
The "BN driving" process looks like this:
validator/beacon_committee_subscriptions
endpoint could theoretically be repurposed to also do this.validator/potential_beacon_proposers
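Under this scheme the VC's side could be as small as a single registration call (the endpoint is the hypothetical one from the list above):

```python
import requests  # third-party HTTP client

BN_URL = "http://localhost:5052"  # assumed BN HTTP API address

# Register the validators this VC manages; the BN handles all
# preparePayload timing itself from here on.
requests.post(f"{BN_URL}/eth/v1/validator/potential_beacon_proposers", json=[
    {"validator_index": "0", "feeRecipient": "0xabc.."},
])
```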
What does @paulhauner think about VC or BN driving?
At this stage, I think I prefer BN driving because it strives for simplicity in the VC (the scary secret-key-holding thing) and it also allows for more optimization inside the BN. Some clients (Lighthouse and Teku, at least) already compute the proposer duties for epoch e at the end of e - 1; these optimizations could be leveraged to make preparePayload more efficient.

Open Questions
I'm not sure what to define assumed_skip_slot_threshold as. One way to do it would be to set it at roughly the last time at which we usually expect a beacon block; in my experience this would be somewhere between 4-8s after slot start. However, it would be good to know if there's a point of diminishing returns regarding the delay between preparePayload and getPayload. For example, if it never takes the EL more than 3s to build the ideal ExecutionPayload, then let's just set it to 9s (12s - 3s) after slot start.
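For concreteness, that example threshold as arithmetic (12s slots per mainnet; the 3s build time is the hypothetical above):

```python
SECONDS_PER_SLOT = 12     # mainnet slot length
MAX_EL_BUILD_TIME = 3     # hypothetical upper bound on EL build time

# The latest sensible threshold still leaves the EL enough time to build.
assumed_skip_slot_threshold = SECONDS_PER_SLOT - MAX_EL_BUILD_TIME
print(assumed_skip_slot_threshold)  # 9 seconds after slot start
```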