EPIC: Optimizing transaction processing #14929
Comments
This is going to have a very large variance depending on how/what the application does with transactions. My suggestion is to consider utilizing the simulations for benchmarking
How would I utilize the simulations? Isn't |
I think this is a fairly large scope that may make sense to break down into phases. There is the ante handler, which checks transactions, and then the execution, storage, and commitment phases. It might make sense to start with those instead of benchmarking a whole chain. Most of these items can be done without running a chain: benchmark the components that take part in the execution path rather than the chain itself. Tx processing is also up to applications, so benchmarking modules may not need to be part of the first phase here.
Sounds good to me, in particular the part about leaving out application-specific processing for now. How would I go about running the ante handler?
No, |
With some changes like this, I'm able to profile block delivery on production data using tendermint block replay:
In my test run most of the blocks are empty; the profile result looks like this:
What's interesting is tendermint |
Even though the applications built on Cosmos may be very different from "regular" applications, it may be worth looking into classical benchmarks to gather extra data points, such as in https://arxiv.org/pdf/2210.04484.pdf
@yihuang that sounds like exactly what I want. Can you please explain to me how I acquire a snapshot? My only concern is that any snapshot may not have any outlier transactions: unusual transactions taking a disproportionate amount of processing. They're juicy targets for DoS attacks, yet presumably rarely seen in normal transaction logs.
On startup if tendermint has newer blocks than
It was convenient for me because I'm developing this "versiondb" feature, for which I have developed a set of tools to replay the change set to any target version and dump the IAVL snapshot; it's also able to restore
Yeah, that's hard to detect in benchmarks; you can't cover all the cases. You probably need to monitor each block's processing time for abnormal numbers.
How do you do it without having an existing node running? I don't have one locally, but more importantly I think it's crucial to be able to run benchmarks continuously. Otherwise, performance will surely regress over time.
I was just trying to get a feel for the production behavior. For benchmarks that need to run continuously, we'll need a more isolated environment.
Closing this for now, as the work is part of a simulator rewrite that is getting started.
Summary
As brought up in a recent team meeting, optimizing the transaction processing of Cosmos is a top priority. As a point of comparison, Cosmos is described as an order (or two?) of magnitude slower than Tendermint itself.
Problem Definition
Performance is important for keeping the resource requirements of Cosmos chains in check and for mitigating the effect of denial-of-service attacks.
Work Breakdown
As usual for performance optimization,
CC @odeke-em for reference.
CC @tac0turtle to get the ball rolling. What are the most realistic benchmarks to focus on? Are there other issues relevant to this work?
I've played around with the benchmarks and tests in order to find something relevant. `make test-sim-benchmarks` seems relevant. Running it gives me some result, but is it a realistic load?