internal: add eth_batchCall method #25743
I really like that this is finally being implemented in the core protocol since it makes it very easy to do a number of things.
It'd be really great if this also included the gas consumed by each call. That would help set more appropriate gas limits when someone needs to sign the next transactions while the initial state-changing transactions are still pending or not yet broadcast.
If it's useful we can easily return the gas used here. But can you please elaborate on your use-case? I didn't quite understand.
An example of a simple use-case: an `erc20 approve` + `defi interaction`.

Problem: an `eth_estimateGas` done before the `erc20 approve` is confirmed would usually revert. Hence the normal UX flow is to send the `approve` tx, wait for it to confirm, then do the `defi interaction` tx. This is well known to be bad UX because of increased user waiting times. Using a huge input gas limit is not a good solution either, because wallets show the max eth fee (gas price * gas limit), which looks costly, and some users might not have enough eth.

Solution: if `eth_batchCall` includes `gasUsed`, then it can be used instead of `eth_estimateGas`: the `erc20 approve` tx is given as the first call, the `defi interaction` as the second, and its `gasUsed` can be used to estimate gas very accurately even before the previous transaction is confirmed. The improved UX becomes: click a button in the dapp once, hit confirm in metamask/wallet twice, and check back in a few minutes whether both txs are confirmed. Similarly, if the user had to do a lot of steps like approve + deposit + stake + what not, a dapp could use `eth_batchCall` to simulate the UI state after a user interaction, create the list of txs to submit, get them all signed at once, and accurately estimate the gas limits (similar to GitHub PR reviews, where we can add a lot of comments while scrolling at our convenience and they get submitted all at once). This saves a lot of the user's waiting time, and hence has the potential to improve the UX considerably.

TL;DR: including gasUsed basically enables estimating gas on a state updated after a series of calls. I hope the use-case makes sense.

Edit: I just came across a project (created by a Uniswap engineer) that exposes an endpoint for batch estimateGas using a mainnet fork (link); the use-case mentioned at the beginning of their README is exactly what I am trying to explain above.
Ok, now I understand the use-case, thanks for the extensive description. I think adding `gasUsed` to the result is not a good solution. If you look, the logic of `eth_estimateGas` is more complicated than simply doing an `eth_call` and reporting the gasUsed. This AFAIK is because the gas limit provided to the tx can change the flow of the tx itself (`GAS` opcode).

But the use-case is valid IMO and warrants an `eth_batchEstimateGas`
or something of the sort.
@s1na I had an implementation for `batchEstimateGas` way back in #21268. But I think your approach is much better; I'd appreciate it if you could also take this over with the same design (e.g. state overrides, block overrides, etc.).
Yeah, it makes sense. So if `gas` is not specified in a call object it'd assume a large value, in order to ensure complex state-changing calls follow a successful execution path if one exists. Wouldn't users then need to use `eth_batchEstimateGas` (or something like that) to set the `gas` field in the calls prior to using `eth_batchCall`?
Gas used through a wrapper contract is not accurate with Multicall due to EIP-2929, so it should be avoided, FYI (this is why Uniswap made this endpoint, I think).
It feels weird that we put all the arguments in a config object. I get the point that it's way more flexible and can easily be extended in the future.

Maybe we can put `Block rpc.BlockNumberOrHash` and `Calls []BatchCallArgs` as standalone parameters, with a config object for specifying the additional configuration (state overrides, etc.)? Just a braindump though.
I don't quite understand why we have call-level block overrides. In practice these calls usually share the same block context, if the intention is to put them in a single block?
Because users can simulate the case that transactions are included in different blocks? If so I think this design makes sense.
Micah also asked a similar question here: ethereum/execution-apis#312 (comment)
On one hand, it makes sense. For example, if you want to experiment with a time-locked contract. First you create it, then two years pass, now you want to interact again.
It opens up a lot of potential uses which does not fit inside a single block.
However, it might also be a footgun. Suppose you want to simulate a sequence where in block `n` a contract `X` is selfdestructed, and in block `n+1` contract `X` is resurrected. The two steps can never happen in one block. The question is: what happens in the batch-call? Is it possible to make the two calls execute correctly, or will it be some form of "time-shifted single block", where you can override the time and number, but state-processing-wise it's still the same block...?
I am also concerned that not all clients will have an easy time implementing this as currently specified. I suspect all clients could execute two distinct blocks against some existing block's post-state, but not all clients may be able to simulate a series of transactions against a cohesive state when the transactions don't share block fields.
It would be great to get other client feedback on this to verify, but without any feedback I would assume the worst: that this will be "hard" to implement in some clients. Having written some Nethermind plugins, my gut suggests this would be challenging to do with Nethermind, for example.
I was thinking about this some more, and I think it would be better to just allow the user to multicall with multiple blocks, each with different transactions. The model may look something like:
We would still require normal rules to be respected between blocks (like block numbers incrementing, timestamps in future blocks being higher than in previous blocks, etc.).
This is a good point. As the implementation stands, there are differences in how a sequence of blocks is executed (one being the coinbase fee). As I mentioned here ethereum/execution-apis#312 (comment), I would like to proceed with "only" the single-block-multi-call variant. This would already be a big improvement for users, and I would prefer not to delay it for something more complicated at the moment.
I think it is only notably more complicated if you try to do different overrides of transaction details within a single block. My proposal is to actually have multiple blocks, each which would follow most consensus rules (like timestamps must increase, block number must increase, etc.). I believe the complexity that Martin is referring to is specifically related to how the original proposal was designed where you have one "block" but each transaction had different block properties reflected in it.
I've also been thinking about this some more, and I reached a different conclusion :) In fact, kind of the opposite. I was thinking that we could make this call very much up to the caller. We would not do any form of sanity checks. If the user wants to do blocks `1, 500, 498, 1M, 3` in sequence, while letting `timestamp` go backwards, then fine. It's up to the caller to use this thing "correctly".

In that sense, I don't see any need to enforce "separate blocks". (To be concrete, I think that only means shipping the fees to the coinbase, so that is not a biggie really.)
I do foresee a couple of problems that should maybe be agreed upon with the other clients:

The `BLOCKHASH(number)` opcode: how should we 'resolve' blockhash when the block number is overridden? One possible semantic is `keccak256(num)`. Currently, geth does the third option, since the `blockcontext.GetHash` function is set before any block overrides.

.... I think there was one more thing I meant to write, but I've forgotten now....
My concern with this strategy is that some clients (or possibly future clients) may be architected such that basic validation checks like "block number goes up" live in an area that is harder to bypass during calls. This is, of course, speculation on my part, but it aligns with my general preference toward keeping the multicall as close to actual block building as possible. I also can't think of any good use cases where having the block number or time go backwards would help someone, so it feels like unnecessary leniency.

I think there is a fourth option: include it in the potential overrides, so the caller would say "when you execute this block and `BLOCKHASH(n)` is called, return this value". The caller could provide an array of `n` to `blockhash` values (they presumably know what set they need). We could then fall back to one of the "reasonable defaults" you have listed.