Break up AccountsIndex lock #12787
Conversation
Force-pushed from daa7fe7 to ba3311d
runtime/benches/accounts.rs (Outdated)

```rust
// Write to a different slot than the one being read from. Because
// there's a new account pubkey being written to every time, will
// compete for the accounts index lock on every store
```
good comment!
runtime/src/accounts_index.rs (Outdated)

```rust
#[derive(Clone, Debug)]
pub struct AccountMapEntry<T> {
    ref_count: Arc<AtomicU64>,
    pub slot_list: Arc<RwLock<SlotList<T>>>,
}
```
I wonder if we could reduce the Arcs from 2 to 1 by inlining like this. It's a bit hideous, but maybe worth the effort? We should be keen on the size of the AccountsIndex memory footprint (each Arc costs 2 words of memory, so we're increasing the entry size by 4 * usize here per account), and this would also reduce the general update cost of bumping two Arc refcounts every time the entry is accessed.

```rust
pub struct AccountMapEntryInner<T> {
    ref_count: AtomicU64,
    pub slot_list: RwLock<SlotList<T>>,
}
type AccountMapEntry<T> = Arc<AccountMapEntryInner<T>>;
```
I'm really happy to see this report. :) How about any perf changes?
First pass of review done. :)
Force-pushed from c9e2367 to 256a009

Force-pushed from a3c18af to 00fbb85
Codecov Report
```
@@           Coverage Diff            @@
##           master   #12787   +/-   ##
========================================
  Coverage    82.1%    82.1%
========================================
  Files         366      366
  Lines       86103    86237    +134
========================================
+ Hits        70739    70872    +133
- Misses      15364    15365      +1
```
runtime/src/accounts_index.rs (Outdated)

```rust
// already present, then the function will return back Some(account_info) which
// the caller can then take the write lock and do an 'insert' with the item.
// It returns None if the item is already present and thus successfully updated.
```
outdated comment?
Cool. :) Also, had the validator been slammed by
As a general wrap-up of your various past AccountsDB-related improvements, how about writing up a report like this? Too much hassle? This one was done for the AccountsIndex. You can peek around these build jobs to learn how to create these PDFs, or ask me :) https://buildkite.com/solana-labs/system-performance-tests/builds?branch=pull%2F9527%2Fhead Also, please take a look at my previous similar reports for inspiration:
```rust
pub fn handle_dead_keys(&self, dead_keys: Vec<Pubkey>) {
    if !dead_keys.is_empty() {
        for key in &dead_keys {
            let mut w_index = self.account_maps.write().unwrap();
```
Just making sure: `w_index` is moved inside the loop body to mitigate holding the write lock longer?
hmm yeah, it should be a pretty fast loop though, so maybe it's not necessary?
Hmm, I'm not sure about the perf difference either. Anyway, I just wanted to confirm this was intentional and not just a typo.
runtime/src/accounts_index.rs (Outdated)

```rust
    (w_account_entry.unwrap(), is_newly_inserted)
}

pub fn handle_dead_keys(&self, dead_keys: Vec<Pubkey>) {
```
Nits: how about taking a `&[Pubkey]`? I dunno why clippy didn't warn about this. ;)
runtime/src/accounts_index.rs (Outdated)

```rust
if let Some(account_entry) = w_index.get(key) {
    if account_entry.slot_list.read().unwrap().is_empty() {
        w_index.remove(key);
    }
```
Nits: I think we could reduce the lookups from two to one like this:

```rust
if let Some(index_entry) = w_index.entry(key) {
    if (...) {
        index_entry.remove();
    }
}
```
Slight variation on this using `Occupied` instead of `Some`, but updated!
2nd review pass is done. I think this is really getting close to the lgtm. Thanks for patiently weathering my wave of nit attacks. Well, I tend to get interrupted by other concurrent tasks..
Force-pushed from f79529f to 66e67f3
@ryoqun, I couldn't get the PDFs to scale properly, but I was able to collect some metrics.

With the DashMap + Index change: https://metrics.solana.com:3000/d/monitor-edge/cluster-telemetry-edge?orgId=2&from=1602901242387&to=1602902152277&panelId=35&var-datasource=Solana%20Metrics%20(read-only)&var-testnet=testnet-dev-carl&var-hostid=All

Without the DashMap + Index change: https://metrics.solana.com:3000/d/monitor-edge/cluster-telemetry-edge?orgId=2&from=1602902319061&to=1602903305200&panelId=35&var-datasource=Solana%20Metrics%20(read-only)&var-testnet=testnet-dev-carl&var-hostid=All

The confirmation + TPS are about the same; the biggest difference seems to be that replay has fewer spikes with the changes: https://metrics.solana.com:3000/d/monitor-edge/cluster-telemetry-edge?orgId=2&from=1602901207646&to=1602903332757&panelId=35&fullscreen&var-datasource=Solana%20Metrics%20(read-only)&var-testnet=testnet-dev-carl&var-hostid=All&refresh=1m
Thanks for running these perf tests, anyway! I created these for future reference: break-up-accounts-index-before.pdf. I'm still looking at the results, but in general there should be no bad perf degradation, and spikes are reduced a lot for some reason. Maybe the root contributor is the reduction in squashing, which I didn't expect but is a good outcome. :)
runtime/src/accounts_db.rs (Outdated)

```rust
// Assertion enforced by `accounts_index.get()`, the latest slot
// will not be greater than the given `max_clean_root`
if let Some(max_clean_root) = max_clean_root {
    assert!(*slot <= max_clean_root);
}
```
nits: this assert! is no longer needed?
oh bad merge thanks! re-added!
LGTM with nits!
Thanks for working so hard on AccountsDB.
Force-pushed from 206f2c3 to 10bce7c
Co-authored-by: Carl Lin <carl@solana.com>
(cherry picked from commit e6b821c)

# Conflicts:
#	runtime/src/accounts.rs
#	runtime/src/accounts_db.rs
Co-authored-by: Carl Lin <carl@solana.com>
Co-authored-by: Carl Lin <carl@solana.com>
Problem
Holding the AccountsIndex lock during scans blocks account stores.
Summary of Changes
Builds on top of #12126
Benchmark

The `bench_concurrent_scan_write` benchmark here shows 1000 inserts of new pubkeys into the storage taking about 9-10ms (80% of which is still waiting on the AccountsIndex write lock), even as the index/accountsdb storage size grows. Before this change, the inserts started taking hundreds of ms.

Maybe we could use something like https://github.com/crossbeam-rs/crossbeam/tree/master/crossbeam-skiplist to further reduce contention on the core lock, but there don't seem to be any stable alternatives yet.
Fixes #