internal: add eth_batchCall method #25743
Conversation
okkk
Force-pushed 26c4994 to 3c51102
```go
// CallResult is the result of one call.
type CallResult struct {
	Return hexutil.Bytes
	Error  error
}
```
I really like that this is finally being implemented in the core protocol, since it makes it very easy to do a number of things.
It'd be really great if this also included the gas consumed by each call. That would help set more appropriate gas limits when someone needs to sign the next transactions while the initial state-changing transactions are still pending or not yet broadcast.
If it's useful we can easily return the gas used here. But can you please elaborate on your use-case? I didn't quite understand.
An example of a simple use-case: `erc20 approve` + `defi interaction`.

Problem: an `eth_estimateGas` done before the `erc20 approve` is confirmed would usually revert. Hence the normal UX flow is to send the `approve` tx, wait for it to confirm, and then do the `defi interaction` tx. This is well known to be poor UX because of increased user waiting times. Using a huge input gas limit is not a good solution either, because wallets show the max ETH fee (gas price * gas limit), which looks costly, and some users might not have enough ETH.

Solution: if `eth_batchCall` includes `gasUsed`, then it can be used instead of `eth_estimateGas`: the `erc20 approve` tx is passed as the first call and the `defi interaction` as the second call, and its `gasUsed` can be used to estimate gas very accurately even before the previous transaction is confirmed. The improved UX becomes: click a button in the dapp once, hit confirm in MetaMask/the wallet twice, and check back in a few minutes whether both txs are confirmed. Similarly, if the user has to do a lot of steps like approve + deposit + stake and so on, a dapp could use `eth_batchCall` to simulate the UI state after a user interaction, create the list of txs to submit, get them signed at once, and accurately estimate the gas limits (similar to GitHub PR reviews, where we can add a lot of comments while scrolling at our convenience and they get submitted all at once). This saves a lot of the user's waiting time and hence has the potential to improve the UX considerably.

TL;DR: including `gasUsed` basically enables estimating gas on a state updated after a series of calls. I hope the use-case makes sense.

Edit: I just came across a project (created by a Uniswap engineer) that exposes an endpoint for batch `estimateGas` using a mainnet fork (link); the use-case mentioned at the beginning of their README is exactly what I am trying to explain above.
Ok, now I understand the use-case, thanks for the extensive description. I think adding `gasUsed` to the result is not a good solution. If you look, the logic of `eth_estimateGas` is more complicated than simply doing an `eth_call` and reporting the gas used. AFAIK this is because the gas limit provided to the tx can change the flow of the tx itself (the `GAS` opcode).

But the use-case is valid IMO and warrants an `eth_batchEstimateGas` or something of the sort.
Yeah, it makes sense. So if `gas` is not specified in a call object, it'd assume a large value, in order to ensure complex state-changing calls follow a successful execution path if one exists. Wouldn't users then need to use `eth_batchEstimateGas` (or something like that) to set the `gas` field in the calls prior to using `eth_batchCall`?
FYI, gas used through a wrapper contract is not accurate with Multicall due to EIP-2929, so it should be avoided (this is why Uniswap made that endpoint, I think).
I think it might be better to split these up into different PRs, so that we can go through the execution-apis process and standardize eth_batchCall before adding it to geth. It would be nice to get wallet teams and other clients to weigh in.
LGTM. Since this is a new method, I'm not too worried about potential flaws, so I wouldn't mind merging it and letting people try it out.
Force-pushed a81211a to 47a90f7
```go
)
for _, call := range config.Calls {
	blockContext := core.NewEVMBlockContext(header, NewChainContext(ctx, s.b), nil)
	if call.BlockOverrides != nil {
```
I don't quite understand why we have call-level block overrides. In practice these calls usually share the same block context, since the intention is to put them in a single block?
Because users can simulate the case where transactions are included in different blocks? If so, I think this design makes sense.
Micah also asked a similar question here: ethereum/execution-apis#312 (comment)
On one hand, it makes sense. For example, if you want to experiment with a time-locked contract: first you create it, then two years pass, now you want to interact again. It opens up a lot of potential uses which do not fit inside a single block.

However, it might also be a footgun. Suppose you want to simulate a sequence where:

- Block `n`: a contract `X` is selfdestructed,
- Block `n+1`: contract `X` is resurrected.

The two steps can never happen in one block. The question is: what happens in the batch-call? Is it possible to make the two calls execute correctly, or will it be some form of "time-shifted single block", where you can override the time and number, but state-processing-wise it's still the same block?
I am also concerned that not all clients will have an easy time implementing this as currently specified. I suspect all clients could have two distinct blocks that they execute against some existing block's post-state, but not all clients may be able to simulate a series of transactions against a cohesive state when the transactions don't share block fields.
It would be great to get other client feedback on this to verify, but without any feedback I would assume the worst: that this will be "hard" to implement in some clients. Having written some Nethermind plugins, my gut suggests that this would be challenging to do with Nethermind, for example.
I was thinking about this some more, and I think it would be better to just allow the user to multicall with multiple blocks, each with different transactions. The model may look something like:
`[ { block_n_details, block_n_transactions }, { block_m_details, block_m_transactions }, ... ]`
We would still require normal rules to be respected between blocks (like block numbers incrementing, timestamps in future blocks being higher than in previous blocks, etc.).
> or will it be some form of "time-shifted single block", where you can override the time and number, but state-processing-wise it's still the same block?

This is a good point. As the implementation stands, there are differences in how a sequence of blocks is executed (one being the coinbase fee). As I mentioned in ethereum/execution-apis#312 (comment), I would like to proceed with "only" the single-block-multi-call variant. This would already be a big improvement for users, and I would prefer not to delay it for something more complicated at the moment.
> This would already be a big improvement for users and I would prefer not to delay that for something more complicated at the moment.

I think it is only notably more complicated if you try to do different overrides of transaction details within a single block. My proposal is to actually have multiple blocks, each of which would follow most consensus rules (timestamps must increase, block number must increase, etc.). I believe the complexity Martin is referring to relates specifically to how the original proposal was designed, where you have one "block" but each transaction has different block properties reflected in it.
> I was thinking about this some more, ...
> We would still require normal rules to be respected between blocks (like block numbers are incrementing, timestamp in future blocks must be higher number than previous blocks, etc.)

I've also been thinking about this some more, and I reached a different conclusion :) In fact, kind of the opposite. I was thinking that we could make this call very much up to the caller. We would not do any form of sanity checks. If the user wants to do blocks `1, 500, 498, 1M, 3` in sequence, while letting `timestamp` go backwards, then fine. It's up to the caller to use this thing "correctly".

In that sense, I don't see any need to enforce "separate blocks". (To be concrete, I think that only means shipping the fees to the coinbase, so that is not a biggie really.)

I do foresee a couple of problems that maybe should be agreed with the other clients:

- When the EVM invokes the `BLOCKHASH(number)` opcode, how should we 'resolve' the blockhash when the block number is overridden? Possible semantics:
  - Always return the empty hash
  - Always return `keccak256(num)`
  - Return as if it were executed on the current block, ignoring overrides.

Currently, geth does the third option, since the block context's `GetHash` function is set before any block overrides:

```go
vmctx := core.NewEVMBlockContext(block.Header(), api.chainContext(ctx), nil)
// Apply the customization rules if required.
if config != nil {
	if err := config.StateOverrides.Apply(statedb); err != nil {
		return nil, err
	}
	config.BlockOverrides.Apply(&vmctx)
}
```

.... I think there was one more thing I meant to write, but I've forgotten now....
> I've also been thinking about this some more, and I reached a different conclusion :) In fact, kind of the opposite. I was thinking that we could make this call very much up to the caller. We would not do any form of sanity checks. If the user wants to do blocks `1, 500, 498, 1M, 3` in sequence, while letting `timestamp` go backwards, then fine. It's up to the caller to use this thing "correctly".

My concern with this strategy is that some clients (or possibly future clients) may be architected such that disabling basic validation checks like "block numbers go up" is harder to do during calls. This is, of course, speculation on my part, but it aligns with my general preference toward keeping the multicall as close to actual block building as possible. I also can't think of any good use cases where having the block number or time go backwards would help someone, so it feels like unnecessary leniency.

> When the EVM invokes the `BLOCKHASH(number)` opcode, how should we 'resolve' the blockhash when the block number is overridden? Possible semantics: always return the empty hash; always return `keccak256(num)`; return as if it were executed on the current block, ignoring overrides.

I think there is a fourth option: include it in the potential overrides, so the caller would say "when you execute this block and `BLOCKHASH(n)` is called, return this value". The caller could provide a map of `n` to `blockhash` values (they presumably know what set they need). We could then fall back to one of the "reasonable defaults" that you have listed.
```go
Value: (*hexutil.Big)(big.NewInt(1000)),
},
expectErr: core.ErrInsufficientFunds,
want:      21000,
```
Nitpick: we don't need this `want` field since an error is already expected.
```go
randomAccounts[0].addr: OverrideAccount{Balance: newRPCBalance(big.NewInt(1000))},
},
Calls: []BatchCallArgs{{
	TransactionArgs: TransactionArgs{
```
`randomAccounts[0].addr` and `randomAccounts[1].addr` both already have funds to transfer (allocated in genesis); you should use a new address here.
```go
//
// Note, this function doesn't make any changes in the state/blockchain and is
// useful to execute and retrieve values.
func (s *BlockChainAPI) BatchCall(ctx context.Context, config BatchCallConfig) ([]CallResult, error) {
```
It feels weird that we put all arguments in a config object. I get the point that it's way more flexible and can be easily extended in the future.
Maybe we can put `Block rpc.BlockNumberOrHash` and `Calls []BatchCallArgs` as standalone parameters, with a config object for specifying the additional configurations (state overrides, etc.)? Just a braindump though.
Is there any advantage of using eth_batchCall over just using a multicall contract? Like this: https://github.com/zhiqiangxu/multicall/blob/master/multicall_test.go#L38
If I'm understanding correctly, you mean the makerdao/multicall kind of way; then we can't set
Does each eth_call inside a batchCall execute sequentially, keeping the previous call's end state? This would allow finding the result at the end of n sequential calls.
That is the idea, yes.
That's awesome, I would love to use this API. Can someone tell me how I can try this out before it's merged with master?
There are still some unresolved questions. For me, this one: #25743 (comment)
Closing in favor of #27720.
This is a successor PR to #25743. This PR is based on a new iteration of the spec: ethereum/execution-apis#484.

`eth_multicall` takes in a list of blocks, each optionally overriding fields like number, timestamp, etc. of a base block. Each block can include calls. At each block users can override the state. There are extra features, such as:

- Include ether transfers as part of the logs
- Overriding precompile codes with evm bytecode
- Redirecting accounts to another address

## Breaking changes

This PR includes the following breaking changes:

- Block override fields of eth_call and debug_traceCall have had the following fields renamed:
  - `coinbase` -> `feeRecipient`
  - `random` -> `prevRandao`
  - `baseFee` -> `baseFeePerGas`

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Adds eth_batchCall as per #24089. The main characteristics are: