refactor: trie cache configuration #7566

Merged: 4 commits merged into near:master on Sep 6, 2022

Conversation

@jakmeier (Contributor) commented Sep 6, 2022

This is a pure refactor, consisting of two main changes:

- replace `TrieCacheFactory` with `TrieConfig`
- move config constants into `TrieConfig`

The motivation is to facilitate wider configuration and changes to the
config format in follow-up PRs.

See also #7564.
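For a sense of the direction, here is a rough sketch of what a `TrieConfig` carrying the former factory's parameters might look like; the field and method names below are illustrative assumptions, not necessarily the exact ones in this PR:

```rust
// Illustrative sketch only: field and method names are assumptions,
// not copied from the PR.
pub struct TrieConfig {
    pub shard_cache_config: ShardCacheConfig,
    pub view_shard_cache_config: ShardCacheConfig,
}

pub struct ShardCacheConfig {
    /// Maximum number of trie nodes kept in the per-shard cache.
    pub default_max_entries: u64,
    /// Upper bound for the total size (in bytes) of cached values.
    pub default_max_total_bytes: u64,
}

impl TrieConfig {
    /// Returns the cache capacity for a shard, distinguishing view tries
    /// (used for queries) from the tries used for chunk application.
    pub fn shard_cache_capacity(&self, is_view: bool) -> u64 {
        if is_view {
            self.view_shard_cache_config.default_max_entries
        } else {
            self.shard_cache_config.default_max_entries
        }
    }
}
```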

@jakmeier jakmeier requested a review from a team as a code owner September 6, 2022 13:48
@jakmeier (Contributor, Author) commented Sep 6, 2022

@matklad this should partially address the points you raised in #7022 (comment)

I will change the config format and add more config options in a follow-up.

@jakmeier jakmeier requested review from Longarithm and matklad and removed request for mzhangmzz September 6, 2022 15:42
core/store/src/trie/config.rs (review thread, outdated, resolved)
```rust
pub view_shard_cache_config: ShardCacheConfig,
}

pub struct ShardCacheConfig {
```
A reviewer (Contributor) commented:
FWIW, I now understand the terminology around shards/chunks, and it still doesn't make sense to me that we call our caches "shard cache" and "chunk cache" :) Unrelated to the PR.

@jakmeier (Contributor, Author) replied Sep 6, 2022:
Well, I think it makes some sense. Both are trie node caches for a single shard, but the lifetime of the chunk cache is a single chunk, while the shard cache lives for as long as the client runs.

But it is very confusing indeed. The cache naming scheme that I am used to would call them something like L0TrieNodeCache for the chunk cache and L1TrieNodeCache for the shard cache. This makes it clear what is being cached and what the hierarchy is. When a cache is cleared doesn't really belong in the name, IMO. But I don't care enough to change it...
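To make the hierarchy concrete, here is a minimal sketch of how a node lookup could flow through the two layers; the type and method names are made up for illustration and do not reflect nearcore's actual implementation:

```rust
// Hypothetical illustration of the two cache layers discussed above;
// names and structure are simplified and not taken from the PR.
use std::collections::HashMap;

type NodeHash = [u8; 32];
type NodeBytes = Vec<u8>;

struct TrieNodeCaches {
    /// Lives only for the duration of applying one chunk ("chunk cache").
    chunk_cache: HashMap<NodeHash, NodeBytes>,
    /// Lives as long as the client runs ("shard cache").
    shard_cache: HashMap<NodeHash, NodeBytes>,
}

impl TrieNodeCaches {
    /// L0: check the per-chunk cache first, then the long-lived per-shard cache.
    fn get(&self, hash: &NodeHash) -> Option<NodeBytes> {
        if let Some(node) = self.chunk_cache.get(hash) {
            return Some(node.clone());
        }
        self.shard_cache.get(hash).cloned()
    }

    /// Called when processing of a chunk finishes; only the chunk cache is cleared.
    fn finish_chunk(&mut self) {
        self.chunk_cache.clear();
    }
}
```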

core/store/src/trie/config.rs (review thread, outdated, resolved)
core/store/src/trie/shard_tries.rs (review thread, outdated, resolved)
core/store/src/trie/shard_tries.rs (review thread, resolved)
```diff
@@ -72,7 +71,7 @@ pub struct TrieCacheInner {
     /// Upper bound for the total size.
     total_size_limit: u64,
     /// Shard id of the nodes being cached.
-    shard_id: u32,
+    shard_id: ShardId,
```
A reviewer (Contributor) commented:
👍 👍 👍

@jakmeier (Contributor, Author) replied:

Yeah, this one is weirder than it might look. ShardId is u64, but in NEP-241 we specified shard_id inside ShardUId to be u32, such that the whole ShardUId fits in 8 bytes. Changing everything to u32 doesn't work, as there are places where we deserialize it from u64.
I considered creating an issue on GH for a general cleanup, but I'm not sure there is anything worthwhile doing at this point. Probably it's best to just use ShardId = u64 for shard_id everywhere except inside ShardUId.
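For context, the types in question look roughly like this; a simplified sketch, not the exact definitions in nearcore:

```rust
// Simplified sketch of the types discussed above.
pub type ShardId = u64;

/// Per NEP-241, ShardUId is kept at 8 bytes total (e.g. so it can serve as a
/// compact storage key prefix), which is why its shard_id field is a u32
/// rather than the u64-wide ShardId used elsewhere.
pub struct ShardUId {
    pub version: u32,
    pub shard_id: u32,
}

impl ShardUId {
    /// Widens the internal u32 back to the ShardId type used everywhere else.
    pub fn shard_id(&self) -> ShardId {
        ShardId::from(self.shard_id)
    }
}
```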

Comment on lines +210 to +218
```diff
 pub fn new(config: &TrieConfig, shard_uid: ShardUId, is_view: bool) -> Self {
     let capacity = config.shard_cache_capacity(shard_uid, is_view);
     let total_size_limit = config.shard_cache_total_size_limit(shard_uid, is_view);
     let queue_capacity = config.deletions_queue_capacity();
     Self(Arc::new(Mutex::new(TrieCacheInner::new(
-        cap,
-        DEFAULT_SHARD_CACHE_DELETIONS_QUEUE_CAPACITY,
-        DEFAULT_SHARD_CACHE_TOTAL_SIZE_LIMIT,
-        shard_id,
+        capacity as usize,
+        queue_capacity,
+        total_size_limit,
+        shard_uid.shard_id(),
```
A reviewer (Contributor) commented:
Consider adding a TrieCacheConfig with capacity, total_size_limit, queue_capacity, passing in just that, and removing the shard_id and is_view parameters altogether (we only need them for log_assert, and I am fairly sure we can live without this info there).

Ahhh, no, we need those to create metrics as well. 🤔 Yeah, I think this is ok then.
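For reference, the grouping suggested above could look roughly like this; a hypothetical sketch only, and not what the PR ended up doing, since shard_uid and is_view are still needed for metrics:

```rust
// Hypothetical sketch of the suggested parameter grouping; not adopted here,
// because shard_uid and is_view are still needed for per-shard metrics.
pub struct TrieCacheConfig {
    /// Maximum number of cached trie nodes.
    pub capacity: usize,
    /// Capacity of the deletions queue.
    pub deletions_queue_capacity: usize,
    /// Upper bound for the total size of cached values, in bytes.
    pub total_size_limit: u64,
}
```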

- `cfg!() if else` instead of two `#[cfg]` (see the sketch after this list)
- derive default for `TrieConfig`
- converge `ShardTries` constructors
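The `cfg!()` pattern named in the first commit above replaces two attribute-gated code paths with a single runtime-constant branch. A minimal illustration, assuming a hypothetical `no_cache` feature flag and made-up numbers:

```rust
// Instead of duplicating the function under #[cfg(feature = "no_cache")]
// and #[cfg(not(feature = "no_cache"))], branch on cfg!() inside one body.
// The condition is still a compile-time constant, so the dead branch is
// optimized away, but both branches must type-check.
fn default_shard_cache_capacity() -> usize {
    if cfg!(feature = "no_cache") {
        1
    } else {
        50_000
    }
}
```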
@near-bulldozer near-bulldozer bot merged commit 90fd9fb into near:master Sep 6, 2022
@jakmeier jakmeier deleted the refactor-trie-config-7564 branch September 6, 2022 18:49
nikurt pushed a commit that referenced this pull request Sep 7, 2022
jakmeier added a commit to jakmeier/nearcore that referenced this pull request Sep 15, 2022
nikurt pushed a commit that referenced this pull request Nov 9, 2022