FRAME: weight ref time accounts for both parachain block production time and relay chain block validation time. But storage cost and machine can be different. #6420
Comments
We have four-ish execution models: alone/mempool, block building, block import, and block validation. In theory, all of these could differ depending upon the many possible technologies used by the parachain. A priori, alone/mempool should be the most expensive if one uses batch verification properly, but we do not use batch verification yet, so yes, block building is the most expensive, and Sassafras could also reduce aggregate costs here dramatically. Block building could also change wildly depending upon snark batch verification choices, or STARKs for storage, or similar.
Anyways, at least in theory we want to measure validator resources fairly precisely, because more accurate measurement means the validator resources can be divvied up among more parachains. A parachain does not require too many nodes, and they only track two chains, so their hardware specs could easily be overkill. We thus should not need weights for the parachain side that run in production, no? It's true there may be diagnostic weights that would help a parachain set its specs, do multithreading work, etc.
Agreed, the constraint is then block validation. I don't know yet whether the read and write ref time weights for parachains are correctly benchmarked for the situation of reading and writing in the context of PoV validation. I will look into it.
Anyways, we do want collator benchmarking tools for parachain teams that do not run in production. I'm not sure how this should be handled, but overall the expectation should be coarser estimates, because the questions parachain teams should be answering are things like 4 CPU cores vs 8 CPU cores, or asking treasury for a grant to multi-thread dalek or multi-thread something else. It's likely that a lot of the existing node micro-benchmarking stuff can be leveraged here.
No, it is not benchmarked this way. We are currently still benchmarking under the assumption of having a disk DB.
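To make the disk-DB assumption concrete, here is a minimal sketch of why it matters. All constants are hypothetical illustration values, not real benchmark results: a disk-backed read during block production is dominated by I/O, while a read during PoV validation is an in-memory trie lookup, so charging both at the disk rate overestimates validation cost.

```rust
// Hypothetical per-read costs in nanoseconds (illustration only; real
// numbers would come from benchmarking, e.g. `frame-benchmarking` runs).
const DISK_DB_READ_NS: u64 = 25_000; // disk-backed DB point read (assumed)
const POV_READ_NS: u64 = 8_000; // in-memory read from the PoV (assumed)

/// Ref time charged for `n` storage reads during block production,
/// where state lives in a disk-backed database.
fn production_read_cost(n: u64) -> u64 {
    n.saturating_mul(DISK_DB_READ_NS)
}

/// Ref time charged for `n` storage reads during block validation,
/// where state is read out of the in-memory PoV.
fn validation_read_cost(n: u64) -> u64 {
    n.saturating_mul(POV_READ_NS)
}

fn main() {
    // Benchmarking everything with the disk-DB assumption charges
    // validation at the production rate, wasting relay chain capacity.
    let reads = 100;
    println!(
        "production: {} ns, validation: {} ns",
        production_read_cost(reads),
        validation_read_cost(reads)
    );
}
```

With these (assumed) numbers, 100 reads cost 2.5 ms of production ref time but only 0.8 ms of validation ref time, which is exactly the discrepancy the single ref time dimension cannot express.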
paritytech/cumulus#2579 — there also exists a benchmark for comparing import/production/validation. Back then, validation was slower than import.
In the context of a parachain, the ref time weight is used AFAICT both to limit the time to produce a block and to ensure that the time to execute block validation on the relay chain fits into the 2 seconds.
But those executions are different because:
1 - storage access during block production reads from and writes to the whole database, while block validation only reads from the PoV, and its writes only serve to check the storage root. So reads and writes should cost more in block production than in block validation.
2 - a parachain collator can decide to use faster hardware than the relay chain, so the execution time for wasm instructions can be shorter in block production than in block validation.
For (1), if the storage costs are considerably different, then a parachain could increase its hardware requirements to get better throughput. But then we would need a 3-dimensional weight: `production_ref_time`, `validation_ref_time`, and `proof_size`.
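The 3-dimensional weight proposed above could look like the following sketch. Note this is a hypothetical type: FRAME's actual `Weight` carries only two axes (`ref_time` and `proof_size`); the split into production and validation ref time is the suggestion being discussed, not existing API, and the type name `Weight3` is invented here.

```rust
/// Hypothetical 3-dimensional weight (illustration of the proposal;
/// not FRAME's real `Weight`, which has only `ref_time` and `proof_size`).
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct Weight3 {
    /// Time to execute on collator hardware during block production.
    pub production_ref_time: u64,
    /// Time to execute inside the PVF on relay chain validators.
    pub validation_ref_time: u64,
    /// Bytes contributed to the proof of validity.
    pub proof_size: u64,
}

impl Weight3 {
    pub const fn zero() -> Self {
        Self { production_ref_time: 0, validation_ref_time: 0, proof_size: 0 }
    }

    /// Component-wise saturating addition, mirroring `Weight::saturating_add`.
    pub fn saturating_add(self, other: Self) -> Self {
        Self {
            production_ref_time: self
                .production_ref_time
                .saturating_add(other.production_ref_time),
            validation_ref_time: self
                .validation_ref_time
                .saturating_add(other.validation_ref_time),
            proof_size: self.proof_size.saturating_add(other.proof_size),
        }
    }

    /// A block is full as soon as ANY axis exceeds its limit: production
    /// time on the collator, validation time on the relay chain, or PoV size.
    pub fn any_gt(self, limit: Self) -> bool {
        self.production_ref_time > limit.production_ref_time
            || self.validation_ref_time > limit.validation_ref_time
            || self.proof_size > limit.proof_size
    }
}
```

The point of the extra axis is that block fullness becomes the maximum over three independent budgets, so a collator with faster hardware raises only its `production_ref_time` budget without overcommitting the relay chain's 2-second validation window.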