[resharding] ChatGPT modulation #581
Conversation
Overall 👍
Left a few minor comments
### PromiseYield Receipt Handling

- Promise Yield were introduced as part of NEP-519 to enable defer replying to caller function while response is being prepared. As part of Promise Yield implementation, it introduced three new trie keys, `PromiseYieldIndices`, `PromiseYieldTimeout` and `PromiseYieldReceipt`.
+ Promise Yield was introduced as part of NEP-519 to enable deferring replies to the caller function while the response is being prepared. As part of the Promise Yield implementation, it introduced three new trie keys: `PromiseYieldIndices`, `PromiseYieldTimeout`, and `PromiseYieldReceipt`.
nit: not sure why it put a comma before and
That's grammatically correct. We usually have phrases like "a, b, and c are..."
is NEP-519 the thing that introduced the new trie keys?
Yes, promise yield was part of NEP 519
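For readers skimming the thread, here is a minimal Rust sketch of the three trie entries named in the hunk above. The types and field names are illustrative assumptions for this conversation, not the actual nearcore definitions.

```rust
// Hypothetical, simplified models of the three promise-yield trie entries
// discussed above; field names are illustrative only.
type AccountId = String;
type CryptoHash = [u8; 32];
type BlockHeight = u64;

/// Global counters delimiting the queue of pending yield timeouts.
struct PromiseYieldIndices {
    first_index: u64,
    next_available_index: u64,
}

/// Queue entry recording when a yielded receipt should be force-resolved.
struct PromiseYieldTimeout {
    account_id: AccountId,
    data_id: CryptoHash,
    expires_at: BlockHeight,
}

/// Marker for the receipt created by the yielding account, keyed by
/// `(receiver_id, data_id)`.
struct PromiseYieldReceipt {
    receiver_id: AccountId,
    data_id: CryptoHash,
}

fn main() {
    // A freshly created yield bumps the indices and enqueues one timeout entry.
    let indices = PromiseYieldIndices { first_index: 0, next_available_index: 1 };
    println!("pending yields: {}", indices.next_available_index - indices.first_index);
}
```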
* `PromiseYieldReceipt { receiver_id, data_id }`: This is the receipt created by the account.

An account can call the `promise_yield_create` host function that increments the `PromiseYieldIndices` along with adding a new entry into the `PromiseYieldTimeout` and `PromiseYieldReceipt`.

- The `PromiseYieldTimeout` is sorted as per the time of creation and has an increasing value of expires_at block height. In the runtime, we iterate over all the expired receipts and create a blank receipt to resolve the entry in `PromiseYieldReceipt`.
+ The `PromiseYieldTimeout` is sorted by time of creation and has an increasing value of `expires_at` block height. In the runtime, we iterate over all the expired receipts and create a blank receipt to resolve the entry in `PromiseYieldReceipt`.
I guess all timeouts are sorted among each other?
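A hedged sketch of the timeout sweep described in the hunk above, assuming the queue is ordered by creation time with a monotonically increasing `expires_at` (which is what the text implies). The types are simplified stand-ins, not nearcore's.

```rust
use std::collections::VecDeque;

// Re-declared minimally so the sketch is self-contained.
type AccountId = String;
type CryptoHash = [u8; 32];
type BlockHeight = u64;

struct PromiseYieldTimeout {
    account_id: AccountId,
    data_id: CryptoHash,
    expires_at: BlockHeight,
}

/// Pop expired entries from the front of the queue. In the real runtime each
/// expired entry leads to a blank receipt that resolves the matching
/// `PromiseYieldReceipt`; here we just collect the keys.
fn resolve_expired_yields(
    queue: &mut VecDeque<PromiseYieldTimeout>,
    current_height: BlockHeight,
) -> Vec<(AccountId, CryptoHash)> {
    let mut resolved = Vec::new();
    while let Some(front) = queue.front() {
        if front.expires_at > current_height {
            break; // everything behind this entry expires even later
        }
        let entry = queue.pop_front().unwrap();
        resolved.push((entry.account_id, entry.data_id));
    }
    resolved
}

fn main() {
    let mut queue = VecDeque::from(vec![
        PromiseYieldTimeout { account_id: "alice.near".into(), data_id: [1; 32], expires_at: 100 },
        PromiseYieldTimeout { account_id: "bob.near".into(), data_id: [2; 32], expires_at: 105 },
    ]);
    let resolved = resolve_expired_yields(&mut queue, 102);
    assert_eq!(resolved.len(), 1); // only the first entry has expired at height 102
}
```

The early `break` is exactly where the ordering assumption matters: if the entries were not sorted by `expires_at`, the sweep would have to scan the whole queue.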
neps/nep-0568.md
Outdated
- * For child with lower index, receipt_bytes is the sum of both delayed receipts bytes and congestion control bytes, hence `receipt_bytes = parent.receipt_bytes`
- * For child with upper index, receipt_bytes is just the bytes from delayed receipts, hence `receipt_bytes = parent.receipt_bytes - buffered_receipt_bytes`
+ * For the child with the lower index, `receipt_bytes` is the sum of both delayed receipts bytes and buffered receipts bytes, hence `receipt_bytes = parent.receipt_bytes`.
+ * For the child with the higher index, `receipt_bytes` is just the bytes from delayed receipts, hence `receipt_bytes = parent.receipt_bytes - buffered_receipt_bytes`.
is it more accurate to say `parent.buffered_receipt_bytes`?
Yes that's right. Thanks!
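To make the split concrete, a small sketch of the arithmetic from the bullets above, using illustrative field names rather than the exact nearcore congestion-info structs.

```rust
/// Illustrative congestion-info fields involved in the resharding split;
/// the names follow the NEP text, not necessarily the real structs.
#[derive(Clone, Copy, Debug, PartialEq)]
struct CongestionInfo {
    receipt_bytes: u64,
    buffered_receipt_bytes: u64,
}

/// The lower-index child keeps both delayed and buffered receipt bytes,
/// while the higher-index child keeps only the delayed part.
fn split_receipt_bytes(parent: CongestionInfo) -> (u64, u64) {
    let lower_child_receipt_bytes = parent.receipt_bytes;
    let higher_child_receipt_bytes = parent.receipt_bytes - parent.buffered_receipt_bytes;
    (lower_child_receipt_bytes, higher_child_receipt_bytes)
}

fn main() {
    let parent = CongestionInfo { receipt_bytes: 1_000, buffered_receipt_bytes: 300 };
    let (lower, higher) = split_receipt_bytes(parent);
    assert_eq!(lower, 1_000); // delayed + buffered
    assert_eq!(higher, 700);  // delayed only
}
```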
neps/nep-0568.md
Outdated
- While Frozen MemTries provide the benefits of being compatible with instant resharding, they come at the cost of memory consumption. Once a MemTrie is frozen, since it doesn't support deallocation of memory, it continues to consume as much memory as it did at the time of freezing. In case a node is tracking only one of the child shards, a Frozen MemTrie would continue to use the same amount of memory as the parent trie. Due to this, Hybrid MemTries are only a temporary solution and we rebuild the MemTrie for the children once the post-processing step for Flat Storage is completed.
+ While Frozen MemTries facilitate instant resharding, they come at the cost of memory consumption. Once a MemTrie is frozen, it continues to consume the same amount of memory as it did at the time of freezing, as it does not support memory deallocation. If a node tracks only one of the child shards, a Frozen MemTrie would continue to use the same amount of memory as the parent trie. Therefore, Hybrid MemTries are only a temporary solution, and we rebuild the MemTrie for the children once the post-processing step for Flat Storage is completed.
> we rebuild the MemTrie for the children once the post-processing step for Flat Storage is completed.

Not sure if we are going to do it in the end; is it worth putting this sentence in the NEP?
This is a part of the implementation section but yeah, probably too much information. Let me change it to "we rebuild the MemTrie for the children after resharding is completed".
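As an aside for readers, here is a minimal sketch of why a frozen, reference-counted trie keeps its full memory footprint, assuming `Arc`-style sharing; `FrozenArena` below is a stand-in, not the real nearcore type.

```rust
use std::sync::Arc;

/// Stand-in for a frozen, read-only arena: once wrapped in `Arc`, cloning it
/// only bumps a reference count and never copies or frees the underlying
/// memory, which is why the frozen parent keeps its full footprint.
struct FrozenArena {
    memory: Vec<u8>, // placeholder for the parent trie's node storage
}

fn main() {
    let parent = Arc::new(FrozenArena { memory: vec![0u8; 1024] });

    // Cheap clones for the two children: no memory is duplicated.
    let child_lower = Arc::clone(&parent);
    let child_higher = Arc::clone(&parent);
    assert_eq!(Arc::strong_count(&parent), 3);

    // The 1 KiB held by the frozen parent stays allocated until every handle
    // (both children and the parent itself) is dropped.
    drop(child_lower);
    drop(child_higher);
    assert_eq!(Arc::strong_count(&parent), 1);
    println!("still holding {} bytes", parent.memory.len());
}
```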
neps/nep-0568.md
Outdated
1. **Freeze the Parent MemTrie**: Create a read-only frozen arena representing a snapshot of the state at the time of freezing (after post-processing the last block of the epoch). The parent MemTrie is no longer required in runtime going forward.
2. **Clone the Frozen MemTrie**: Clone the Frozen MemTrie cheaply for both child MemTries to use. This does not clone the parent arena's memory but merely increases the reference count.
3. **Create Hybrid MemTries for Each Child**: Create a new MemTrie with `HybridArena` for each child. The base of the MemTrie is the read-only `FrozenArena`, while all new node allocations occur in a dedicated `STArena` memory pool for each child MemTrie. This temporary MemTrie is used while Flat Storage is being built in the background.
4. **Rebuild MemTrie from Flat Storage**: Once the Flat Storage is constructed in the post-processing step of resharding, we use it to load a new MemTrie and catch up to the latest block.
same point as line 354
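A compact sketch of the four steps above, with placeholder types standing in for `STArena`, `FrozenArena`, and `HybridArena`; this illustrates the flow only and is not the actual nearcore implementation.

```rust
use std::sync::Arc;

// Illustrative stand-ins for the arena types named in the steps above.
struct STArena;             // mutable allocation pool
struct FrozenArena;         // read-only snapshot of the parent's memory
struct HybridArena {
    base: Arc<FrozenArena>, // shared, frozen parent nodes
    overlay: STArena,       // fresh allocations made by this child
}

/// Step 1: freeze the parent arena into a read-only snapshot.
fn freeze_parent(_parent: STArena) -> Arc<FrozenArena> {
    Arc::new(FrozenArena)
}

/// Steps 2–3: clone the frozen base cheaply and attach a private overlay,
/// yielding one hybrid arena per child shard.
fn make_child(base: &Arc<FrozenArena>) -> HybridArena {
    HybridArena { base: Arc::clone(base), overlay: STArena }
}

/// Step 4 (placeholder): once flat storage for the child is built, a fresh
/// MemTrie would be loaded from it and caught up to the chain head.
fn rebuild_from_flat_storage(_child: HybridArena) { /* elided */ }

fn main() {
    let parent = STArena;
    let frozen = freeze_parent(parent);
    let left = make_child(&frozen);
    let right = make_child(&frozen);
    rebuild_from_flat_storage(left);
    rebuild_from_flat_storage(right);
}
```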
neps/nep-0568.md
Outdated
### State Sync

- The state sync algorithm defines a `sync_hash` that is used in many parts of the implementation. This is always the first block of the current epoch, which the node should be aware of once it has synced headers to the current point in the chain. A node performing state sync first makes a request (currently to centralized storage on GCS, but in the future to other nodes in the network) for a `ShardStateSyncResponseHeader` corresponding to that `sync_hash` and the Shard ID of the shard it's interested in. Among other things, this header includes the last new chunk before `sync_hash` in the shard, and a `StateRootNode` with hash equal to that chunk's `prev_state_root` field. Then the node downloads (again from GCS, but in the future it'll be from other nodes) the nodes of the trie with that `StateRootNode` as its root. Afterwards, it applies new chunks in the shard until it's caught up.
+ The state sync algorithm defines a `sync_hash` used in many parts of the implementation. This is always the first block of the current epoch, which the node should be aware of once it has synced headers to the current point in the chain. A node performing state sync first makes a request (currently to centralized storage on GCS, but in the future to other nodes in the network) for a `ShardStateSyncResponseHeader` corresponding to that `sync_hash` and the Shard ID of the shard it's interested in. Among other things, this header includes the last new chunk before `sync_hash` in the shard and a `StateRootNode` with a hash equal to that chunk's `prev_state_root` field. Then the node downloads (again from GCS, but in the future from other nodes) the nodes of the trie with that `StateRootNode` as its root. Afterwards, it applies new chunks in the shard until it's caught up.
nit: we could omit the details about GCS and decentralized state sync; I don't think they have any relevance in this context
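For orientation, a heavily simplified sketch of the flow described in the paragraph above; all message types and helper functions here are hypothetical stand-ins for the real state sync code.

```rust
// Hypothetical, heavily simplified types standing in for the real state sync
// messages (`ShardStateSyncResponseHeader`, `StateRootNode`, ...).
type CryptoHash = [u8; 32];
type ShardId = u64;

struct StateRootNode { hash: CryptoHash }
struct SyncHeader { last_new_chunk_prev_state_root: StateRootNode }

// Every function body below is a placeholder for a network request or a
// local apply step.
fn request_header(_sync_hash: CryptoHash, _shard_id: ShardId) -> SyncHeader {
    SyncHeader { last_new_chunk_prev_state_root: StateRootNode { hash: [0u8; 32] } }
}
fn download_trie(_root: &StateRootNode) { /* fetch trie nodes under this root */ }
fn apply_new_chunks_until_caught_up(_shard_id: ShardId) { /* replay chunks */ }

fn state_sync(sync_hash: CryptoHash, shard_id: ShardId) {
    // 1. Ask for the header tied to (sync_hash, shard_id).
    let header = request_header(sync_hash, shard_id);
    // 2. Download the trie rooted at the chunk's prev_state_root.
    download_trie(&header.last_new_chunk_prev_state_root);
    // 3. Apply new chunks in the shard until caught up with the chain.
    apply_new_chunks_until_caught_up(shard_id);
}

fn main() {
    state_sync([0u8; 32], 3);
}
```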
Some beautification, courtesy of ChatGPT.
I double-checked everything; we aren't changing any meaning.