docs: Spec on current cachekv implementation #13977
Conversation
Thank you for this document!!
In a follow-up PR, we should add the concurrency assumptions of the cachekv store. Currently it seems like the memdb used has a mutex and the cachekv store has its own mutex, which could lead to unforeseen issues.
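To make the layering concrete, here is a minimal Go sketch of the double-locking pattern described above. The types (`memDB`, `cacheStore`) are hypothetical stand-ins, not the SDK's actual code: the wrapper takes its own mutex and then calls into a backend that locks again.

```go
package main

import (
	"fmt"
	"sync"
)

// memDB stands in for the in-memory backend, which guards itself with a mutex.
type memDB struct {
	mu   sync.Mutex
	data map[string][]byte
}

func (m *memDB) Get(key string) []byte {
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.data[key]
}

func (m *memDB) Set(key string, value []byte) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.data[key] = value
}

// cacheStore stands in for the wrapping store: it takes its own mutex before
// calling into the backend, so every write acquires two locks in sequence.
type cacheStore struct {
	mu      sync.Mutex
	backend *memDB
}

func (c *cacheStore) Set(key string, value []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.backend.Set(key, value) // second lock acquired inside the backend
}

func main() {
	s := &cacheStore{backend: &memDB{data: map[string][]byte{}}}
	s.Set("k", []byte("v"))
	fmt.Println(string(s.backend.Get("k")))
}
```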
Hi @dangush!
Cool work, especially on describing the iterator! CacheKV is probably the most complex part of the store module.
I've left some comments below, where I think the current explanation can be improved.
In general, @angbrav and I have also been diving into the store module and started writing down our understanding. If you intend to add more content, it would be good to sync!
* Allow iteration over contiguous spans of keys
* Act as a cache, so we don't repeat I/O to disk for reads we've already done
  * Note: We actually fail to achieve this for iteration right now
  * Note: Need to consider this getting too large and dropping some cached reads
Could you explain what you mean here? In contrast to the inter-block cache, there is no upper bound on the cache size in a CacheKV.
I believe @ValarDragon (who wrote this part) is referring to runtime issues that could be mitigated by bounding the cache. For example, the complexity of iterating over a range of keys is currently tied to the overall size of the cache rather than to the size of the range. Ideally iteration would run in time proportional to the range size, but bounding the cache size may also need to be considered.
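To illustrate the complexity argument, here is a minimal sketch (a hypothetical helper, not the real iterator) of why iterating a small range can still cost time proportional to the whole cache if the iterator first has to collect and sort every cached key.

```go
package main

import (
	"fmt"
	"sort"
)

// rangeFromCache collects the cached keys that fall inside [start, end).
// It first walks and sorts the *entire* cache, so the cost grows with the
// total number of cached entries rather than with the size of the range.
func rangeFromCache(cache map[string]string, start, end string) []string {
	keys := make([]string, 0, len(cache))
	for k := range cache {
		keys = append(keys, k) // O(total cache size)
	}
	sort.Strings(keys) // O(n log n) in the cache size, not the range size

	var out []string
	for _, k := range keys {
		if k >= start && k < end {
			out = append(out, k)
		}
	}
	return out
}

func main() {
	cache := map[string]string{"a": "1", "b": "2", "m": "3", "z": "4"}
	fmt.Println(rangeFromCache(cache, "a", "c")) // [a b]
}
```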
Hmmm, but the current use of CacheKV to scope transactions in memory until writing back to the underlying IAVL doesn't really allow for bounding cache size, right?
Not that I'm aware of, no.
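For context on this exchange, here is a minimal sketch of the buffer-until-`Write` pattern under discussion, using hypothetical types rather than the SDK's actual `cachekv` code. Transaction writes stay in memory until `Write` flushes them to the parent, which is why the buffer cannot easily be bounded mid-transaction.

```go
package main

import "fmt"

// parentStore is a minimal stand-in for the underlying (e.g. IAVL-backed) store.
type parentStore map[string]string

// txCache buffers every write of a transaction in memory. Nothing reaches the
// parent until Write is called, so the buffer has to hold the whole transaction.
type txCache struct {
	parent parentStore
	dirty  map[string]string
}

func newTxCache(parent parentStore) *txCache {
	return &txCache{parent: parent, dirty: map[string]string{}}
}

func (c *txCache) Get(key string) string {
	if v, ok := c.dirty[key]; ok {
		return v // read-your-writes from the in-memory buffer
	}
	return c.parent[key]
}

func (c *txCache) Set(key, value string) { c.dirty[key] = value }

// Write flushes the buffered writes to the parent; discarding c instead aborts.
func (c *txCache) Write() {
	for k, v := range c.dirty {
		c.parent[k] = v
	}
}

func main() {
	parent := parentStore{}
	tx := newTxCache(parent)
	tx.Set("balance/alice", "100")
	fmt.Println(parent["balance/alice"] == "") // true: not yet committed
	tx.Write()
	fmt.Println(parent["balance/alice"]) // 100
}
```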
## Iteration

Efficient iteration over keys in `KVStore` is important for generating Merkle range proofs. Iteration over `CacheKVStore` requires producing all key-value pairs from the underlying `KVStore` while taking into account updated values from the cache.
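As a rough illustration of the merge this paragraph describes (a hypothetical helper, with deletions omitted; not the store's real iterator), the parent's pairs and the cache's updated pairs are combined in key order, with cached values shadowing committed ones.

```go
package main

import (
	"fmt"
	"sort"
)

// mergedPairs returns all key-value pairs in sorted key order, letting a value
// that was updated in the cache shadow the value from the parent store.
func mergedPairs(parent, cache map[string]string) []string {
	seen := map[string]bool{}
	keys := []string{}
	for k := range parent {
		seen[k] = true
		keys = append(keys, k)
	}
	for k := range cache {
		if !seen[k] {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys)

	out := make([]string, 0, len(keys))
	for _, k := range keys {
		v, ok := cache[k]
		if !ok {
			v = parent[k] // fall back to the committed value
		}
		out = append(out, k+"="+v)
	}
	return out
}

func main() {
	parent := map[string]string{"a": "old", "b": "1"}
	cache := map[string]string{"a": "new", "c": "2"} // "a" was updated in the cache
	fmt.Println(mergedPairs(parent, cache))          // [a=new b=1 c=2]
}
```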
We should say here that iterators range over a key interval [start, end), as it becomes important below.
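A tiny sketch of what the half-open `[start, end)` convention means in practice (a hypothetical helper, not an SDK API):

```go
package main

import "fmt"

// inRange mirrors the half-open convention: start is inclusive, end is exclusive.
func inRange(key, start, end string) bool {
	return key >= start && key < end
}

func main() {
	fmt.Println(inRange("a", "a", "c")) // true:  start key is included
	fmt.Println(inRange("c", "a", "c")) // false: end key is excluded
}
```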
@tac0turtle I would like to resolve #13977 (comment) first; it might cause confusion if it's merged as-is. The remaining comments should be easy to address right here, though a follow-up PR would also work. Also, keep in mind that we need to sync this with the changes of #13881.
@tac0turtle I'd rather resolve the simple issues in this PR and the more complex ones in a different one. But I am also fine with the alternative. By simple I mean:
More complex ones (a different PR):
FYI, I just did a relatively big refactoring on cachekv: #14350
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
merging this and let's handle the changes in a follow-up PR
Description
Contributes to: #12986
Adds documentation of the current CacheKVStore implementation, as per phase 1 of the plan outlined in #12986 to improve the SDK's storage layer.
Author Checklist
All items are required. Please add a note to the item if the item is not applicable and please add links to any relevant follow-up issues.

I have...

* included the correct `docs:` prefix in the PR title

Reviewers Checklist

All items are required. Please add a note if the item is not applicable and please add your handle next to the items reviewed if you only reviewed selected items.

I have...

* confirmed the correct `docs:` prefix in the PR title