txIndexer.report() method uses too much cpu on sepolia network #28907
Comments
Wow, not often we see such detailed reports with flamegraphs and all. Thanks!
The indexing progress is only queried if the transaction/receipt is not found. Are you sending lots of queries for non-existent receipts?
@cryptocifer Can you please try this branch by any chance? #28908
Yep, in our scenario wallets send transactions through our node and then poll for the receipts. Let me try #28908, thanks
@rjl493456442 thanks! The branch does work! I'm trying to understand the real cause, could you help me confirm whether my guess is right? Our mainnet node is also upgraded to v1.13.11 and serves the same RPC queries as the sepolia node. The only difference between these two nodes is that the mainnet node is a full node and keeps only the latest 90000 blocks' transaction indices, while the sepolia node is an archive node that keeps all of the transaction indices. So the "TxIndexTail" in the sepolia node's db constantly points to block 0 and gets no chance to be modified, and the "TxIndexTail" entry in pebbledb gradually sinks to a deeper level. While in the mainnet node, the "TxIndexTail" gets updated frequently, so it stays at a shallow level? So without #28908's fix, repeatedly reading "TxIndexTail" from a deep level costs a lot more than reading it from a shallow level? Actually I doubt this suspicion, because I believe pebbledb has a cache to eliminate the high cost of frequently reading from a deep level. I just can't think of other causes..
I do agree with your analysis.
Pebble does have a cache for recently loaded block data, but a read still needs to traverse the levels from top to bottom, and in particular all the level-0 files need to be checked one by one. Although we do configure a bloom filter to mitigate the cost, I guess it's still an expensive operation (it's way more expensive than I imagined, tbh).
Will close it as the fix is merged. Feel free to reopen if anything unexpected occurs again.
System information
Geth version: v1.13.11
CL client & version: prysm@v4.2.1
OS & Version: Linux
Steps to reproduce the behaviour
We have been running an archive node for the sepolia network. The node also serves `eth_getTransactionReceipt` calls at a rate of 150 QPS. This call served very well until we upgraded to v1.13.11, after which `eth_getTransactionReceipt` responds at a very high latency (p95 >5 seconds).
I dug a bit into the `eth_getTransactionReceipt` method and found that in v1.13.11, if the transaction is not found at the moment, it triggers a progress query via `txIndexer.txIndexProgress()`. The `txIndexProgress()` method then triggers the `txIndexer.report()` method and gets the progress from the txIndexer main loop through a channel. (Well, I don't think it's suitable to have a channel on the path of serving an RPC call, but let's hold on that for now.)
The real cause of the high latency is inside `txIndexer.report()`: it can be traced into the deep internals of pebble, finally stopping at the `crc32.Update()` method and `Syscall6()`. As you can see, `txIndexer.report()` takes 81% of the sampled CPU time.
node startup options
Update
Things have not ended there, though: we also run a mainnet ethereum node that was also upgraded to v1.13.11, but the latency of `eth_getTransactionReceipt` there is just as low as before (p95 ~1ms). So I also captured a cpu profile for the mainnet node. Btw, the mainnet node and the sepolia node roughly serve the same kinds and the same amount of RPC calls, but as we can see in the mainnet cpu profile, `txIndexer.report()` only takes 0.4% of the sampled CPU time.
We can also see in the mainnet node's pprof that it doesn't run into the `crc32.Update` and `Syscall6` methods. Maybe it's quite related to pebbledb or the db hierarchy difference between sepolia and mainnet...