[Tracking issue] State Witness size limit #10259
Note from the onboarding discussion: another approach is to add state witness size to the compute costs. It should work well enough for the short term and be fairly close to what we want in the long term.
It seems that there are three kinds of objects that contribute to state witness size:

1) Incoming and delayed receipts that a chunk has to process.
2) New transactions added to the chunk.
3) The PartialState (trie nodes) touched while executing receipts.

We can't really do anything about 1) because there's no global congestion control, which means that the queue of incoming and delayed receipts is unbounded, so the size of this part of the witness can grow without limit.

With 2) the situation is better. We control which transactions get added to a chunk, so we could add a size limit for new transactions when preparing the chunk (sketched below).

We can limit 3) by executing receipts only until the recorded PartialState exceeds some size limit, similar to how the existing limit is enforced here:

nearcore/runtime/runtime/src/lib.rs Line 1485 in 33b5bd7
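As an illustration of 2), here is a minimal sketch of size-aware transaction selection when producing a chunk. The `SignedTransaction` shape, the `serialized_size` field, and the limit value are assumptions for illustration, not the actual nearcore types:

```rust
/// Hypothetical signed transaction with a known serialized size (illustrative).
struct SignedTransaction {
    serialized_size: usize,
}

/// Pick transactions for a chunk until their cumulative serialized size
/// would exceed the size limit; remaining transactions stay in the pool.
fn select_transactions(
    pool: impl Iterator<Item = SignedTransaction>,
    size_limit: usize,
) -> Vec<SignedTransaction> {
    let mut selected = Vec::new();
    let mut total_size = 0usize;
    for tx in pool {
        if total_size + tx.serialized_size > size_limit {
            break;
        }
        total_size += tx.serialized_size;
        selected.push(tx);
    }
    selected
}
```

Because transactions are chosen by the chunk producer before execution, this limit can be a hard one: a transaction that doesn't fit is simply not included.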
I think this would be good enough for normal non-malicious traffic, but this kind of limit isn't enough by itself. In Jakob's analysis he found that a single receipt can access as many as 36 million trie nodes, which would produce hundreds of megabytes of PartialState. This means that we also need a per-receipt limit: if executing a receipt produces more than X MB of PartialState, then the receipt is invalid and its execution fails, just like with the 300 TGas limit (see the sketch below). This will be a breaking change - some contracts that worked before could break after introducing this limit - but I think it's necessary, and I don't see any way around it. There's also the question of what the size limit itself should be. In Jakob's analysis he proposed 45 MB, but that requires a significant amount of bandwidth - sending 45 MB to every chunk validator in time is expensive. My rough plan of action would be to add the soft limit first, then the per-receipt hard limit, and metrics to see how close real traffic gets to both.
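A sketch of what the per-receipt hard limit could look like, assuming a hypothetical trie recorder that can report how many bytes it has recorded so far; the error type and function names are illustrative:

```rust
/// Hypothetical error returned when a receipt touches too much state.
#[derive(Debug)]
struct ReceiptProofSizeExceeded {
    recorded_bytes: usize,
    limit_bytes: usize,
}

/// Check the PartialState recorded for a single receipt against a hard limit,
/// analogous to how exceeding the 300 TGas limit fails the receipt.
fn check_per_receipt_limit(
    recorded_before: usize,
    recorded_after: usize,
    limit_bytes: usize,
) -> Result<(), ReceiptProofSizeExceeded> {
    let recorded_for_receipt = recorded_after - recorded_before;
    if recorded_for_receipt > limit_bytes {
        // The receipt would be treated as failed and its state changes rolled back.
        return Err(ReceiptProofSizeExceeded {
            recorded_bytes: recorded_for_receipt,
            limit_bytes,
        });
    }
    Ok(())
}
```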
A quick and hacky size limit example: it stops applying receipts when the size of the recorded PartialState exceeds a fixed limit.
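A minimal sketch of that kind of soft limit in the receipt-processing loop; the `recorded_storage_size` hook and the receipt/queue handling are assumptions for illustration, not the actual nearcore code:

```rust
/// Apply receipts until the recorded storage proof grows past the soft limit.
/// Receipts that were not applied are returned so the caller can push them
/// into the delayed receipts queue.
fn apply_receipts_with_soft_limit<R>(
    receipts: Vec<R>,
    soft_limit_bytes: usize,
    recorded_storage_size: impl Fn() -> usize, // assumed hook into the trie recorder
    mut apply_receipt: impl FnMut(R),
) -> Vec<R> {
    let mut delayed = Vec::new();
    for receipt in receipts {
        // "Soft" limit: the check runs before each receipt, so the recorded size
        // can overshoot the limit by up to one receipt's worth of state.
        if recorded_storage_size() > soft_limit_bytes {
            delayed.push(receipt);
            continue;
        }
        apply_receipt(receipt);
    }
    delayed
}
```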
This PR adds a new runtime config `state_witness_size_soft_limit`, set to about 16 MB to begin with, along with an implementation to enforce it in the runtime. This is the first step of #10259.

What is the state witness size soft limit? In order to limit the size of the state witness, as a first step we are adding a limit on the maximum size of the state witness partial trie (proof). In the runtime, we record all the trie nodes touched by the chunk execution and include them in the state witness. With the limit in place, if the size of the state witness exceeds 16 MB, we stop applying further receipts and push all remaining receipts into the delayed queue. The reason we call this a soft limit is that we stop the execution of receipts only AFTER the size of the state witness has exceeded 16 MB. We are including this as part of the new protocol version 83; a rough sketch of the config shape and version gate is shown after the list below.

Future steps:
- Introduce limits on other parts of the state witness, like new transactions
- Introduce a hard size limit for individual contract executions
- Monitor the size of the state witness
- Add metrics in a separate PR
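A rough sketch of how such a config parameter and its protocol-version gate might look; the struct, constant names, and default value here are illustrative, not the real nearcore definitions:

```rust
/// Illustrative subset of a runtime limit config (not the real nearcore struct).
pub struct WitnessLimitConfig {
    /// Soft limit on the size of the recorded storage proof, in bytes.
    pub state_witness_size_soft_limit: u64,
}

impl Default for WitnessLimitConfig {
    fn default() -> Self {
        Self {
            // Roughly 16 MB, the initial value mentioned in the PR description.
            state_witness_size_soft_limit: 16 * 1024 * 1024,
        }
    }
}

/// The limit only takes effect from the protocol version that introduces it
/// (version 83 per the PR description; the constant name is illustrative).
fn soft_limit_enabled(protocol_version: u32) -> bool {
    const STATE_WITNESS_SOFT_LIMIT_PROTOCOL_VERSION: u32 = 83;
    protocol_version >= STATE_WITNESS_SOFT_LIMIT_PROTOCOL_VERSION
}
```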
Updating the project thread. I've merged PR #10703, which adds a soft limit for storage proof size as highlighted in point 3 of @jancionear's comment.

The next step I was thinking of pursuing is the hard limit for each contract, as per the research work done by Jakob. Based on that I had a conversation with Simonas. Simonas suggested that while this is totally doable, we should definitely consider the consequences of adding this restriction on contracts. Historically we've maintained the stance of keeping contracts backward compatible, and adding this restriction could cause some contracts to fail. We should probably get some statistics on the size of data touched by contracts and find out (1) whether there are any existing contracts running on mainnet that may break, and (2) whether there are any historic/dormant contracts that may break. (1) is easily doable, as we can just add metrics to the mirrored mainnet traffic; Marcelo is the right point of contact for this. (2), on the other hand, is quite a bit of work, though it has been done in the past; I'm not personally sure whether that work is worth it for our case.

At the end of the day this also boils down to decisions by upper management, and we should definitely keep Bowen in the loop and let him know about the proposed changes. That said, we should do our research before going to him. As next steps, I propose we add some metrics like P50, P99, P999, P100 to figure out the size of data touched by contracts and whether any contracts would break (probably not).

Technical side of things
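A minimal sketch of what such a per-receipt size metric could look like, assuming the `prometheus` and `once_cell` crates; the metric name, bucket choice, and the `record_receipt_proof_size` hook are illustrative rather than the actual nearcore metrics, and registry registration is omitted:

```rust
use once_cell::sync::Lazy;
use prometheus::{exponential_buckets, Histogram, HistogramOpts};

// Histogram of the PartialState recorded per receipt, in bytes.
// Exponential buckets from 4 KB upward make it easy to read off P50/P99/P999.
static RECEIPT_PROOF_SIZE: Lazy<Histogram> = Lazy::new(|| {
    let opts = HistogramOpts::new(
        "near_receipt_recorded_proof_size_bytes",
        "Size of PartialState recorded while executing a single receipt",
    )
    .buckets(exponential_buckets(4096.0, 2.0, 20).unwrap());
    Histogram::with_opts(opts).unwrap()
});

/// Called after each receipt is applied (hook name is illustrative).
fn record_receipt_proof_size(recorded_bytes: usize) {
    RECEIPT_PROOF_SIZE.observe(recorded_bytes as f64);
}
```

Running this against mirrored mainnet traffic would give the P50/P99/P999/P100 distribution needed to pick a hard limit that existing contracts stay safely under.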
cc @jancionear
Background
Currently the State Witness size is implicitly limited by gas. In some cases, large contributors to the State Witness size are not charged enough gas, which might result in the State Witness being too big for the network to distribute to all validators in time.
Proposed solution
MVP
Limiting the State Witness size is not required for the Stateless Validation MVP/prototype.
Also, (1) shows that current mainnet receipts result in a reasonable State Witness size, so this won't be an issue for prototyping.
Short Term
In the short term (before launching Stateless Validation on mainnet) we need to implement a soft limit for the State Witness size on the runtime side (similar to compute costs). See this comment for more details. This would help protect against bringing down the network with receipts specifically crafted to produce a large State Witness.
Long Term
I believe that in the long term we need to adjust our gas costs to reflect contributions to the State Witness size. This means reintroducing the TTN (touching trie node) cost for reads, charging for contract code size on function calls, etc.
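As a rough illustration of a size-aware cost, here is a sketch that converts bytes of recorded PartialState into additional compute cost; the per-byte rate and all names are assumptions, not proposed parameters:

```rust
/// Assumed per-byte compute cost for data that ends up in the state witness
/// (purely illustrative, not a proposed parameter value).
const COMPUTE_COST_PER_WITNESS_BYTE: u64 = 100;

/// Extra compute cost charged for an operation that records `recorded_bytes`
/// of PartialState, on top of the regular gas costs for the operation itself.
fn witness_compute_cost(recorded_bytes: u64) -> u64 {
    recorded_bytes * COMPUTE_COST_PER_WITNESS_BYTE
}

fn main() {
    // Example: a read touching 50 trie nodes of ~500 bytes each would add
    // 50 * 500 * 100 = 2_500_000 compute units under these assumed numbers.
    assert_eq!(witness_compute_cost(50 * 500), 2_500_000);
}
```

Charging compute (rather than gas) for witness bytes would throttle chunk capacity without changing what users pay, mirroring how compute costs already work for undercharged storage operations.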
Resources
(1) zulip thread with current witness size analysis
(2) #9378