Recently rejected cache for transaction queue #9005
Conversation

Commits 385dddf to d3fe51e
/// Minimal size of rejection cache, by default it's equal to queue size.
const MIN_REJECTED_CACHE_SIZE: usize = 2048;
Maybe use this to instantiate the hashmap with_capacity()? Not sure how hot this cache is expected to be.
Good idea, the cache is used quite extensively, especially when you are running with a small transaction pool.
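The suggestion above can be sketched as follows. MIN_REJECTED_CACHE_SIZE and the field layout are taken from the diff in this thread; the constructor, the simplified hash/error types, and the min-size clamping are hypothetical illustration, not the actual patch:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

/// Minimal size of rejection cache (value from the diff above).
const MIN_REJECTED_CACHE_SIZE: usize = 2048;

struct RecentlyRejected {
    // hash and error types simplified for the sketch (the real code
    // uses H256 and transaction::Error)
    inner: RwLock<HashMap<u64, String>>,
    limit: usize,
}

impl RecentlyRejected {
    fn new(limit: usize) -> Self {
        // never shrink below the minimal cache size
        let limit = limit.max(MIN_REJECTED_CACHE_SIZE);
        RecentlyRejected {
            // pre-allocate so the hot path never needs to grow the map
            inner: RwLock::new(HashMap::with_capacity(limit)),
            limit,
        }
    }
}
```

Since the cache is hot, pre-allocating avoids repeated rehashing while entries accumulate between blocks.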
@@ -138,6 +138,50 @@ impl CachedPending {
	}
}

#[derive(Debug)]
struct RecentlyRejected {
	inner: RwLock<HashMap<H256, transaction::Error>>,
Curious about why you're storing the error here as opposed to using a HashSet. Do you plan to use the error count stats to refine the logic further down?
The error is stored here so that we can return it later (we could return a generic RecentlyRejected error, but it's more descriptive to have the actual values). Remember that Error can look like this: InsufficientGasPrice { got, minimal }, so for every transaction the values might be completely different.

I'm planning to refine the logic a bit as well, especially with regard to cache invalidation (like: "clear all InsufficientBalance errors efficiently"), and was considering approaches like Vec<(Error, HashSet)> or Vec<(ErrorCode, HashMap<Hash, Error>)>, but haven't decided yet which path to follow. Will run some performance tests, because maybe HashMap<Hash, Error> and iterating over all entries (or using retain) will be efficient enough.
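The retain-based option mentioned above can be sketched like this; the Error enum here is a simplified stand-in for the pool's transaction::Error, and the helper function is hypothetical:

```rust
use std::collections::HashMap;

// Simplified stand-in for the pool's transaction::Error.
#[derive(Debug, Clone, PartialEq)]
enum Error {
    InsufficientBalance,
    InsufficientGasPrice { got: u64, minimal: u64 },
}

/// Drop all entries rejected for a balance-related reason, keeping the rest.
/// Runs in O(n) over the cache, but avoids a per-variant index structure.
fn clear_balance_errors(cache: &mut HashMap<u64, Error>) {
    cache.retain(|_hash, err| !matches!(err, Error::InsufficientBalance));
}
```

Whether a single map plus retain is fast enough, versus bucketing entries per error code, is exactly the trade-off the performance tests mentioned above would settle.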
Thank you for the explanation! :)
inner.insert(hash, err.clone());

// clean up
if inner.len() > self.limit {
If we never go above the limit, the cache entries are never expired; is that ok?
The cache is completely cleared on Pool::cull, which in turn should be run on every block.
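Putting the lifecycle discussed in this thread together (insert, over-limit cleanup, wholesale clear on cull), a rough sketch could look as follows. The types are simplified, and the "drop everything when over the limit" policy is an assumption for illustration; the diff above only shows the condition, not the cleanup body:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

struct RecentlyRejected {
    // simplified hash/error types (the real code uses H256 and transaction::Error)
    inner: RwLock<HashMap<u64, String>>,
    limit: usize,
}

impl RecentlyRejected {
    fn insert(&self, hash: u64, err: &str) {
        let mut inner = self.inner.write().unwrap();
        inner.insert(hash, err.to_string());
        // clean up: if the limit is exceeded, drop everything; the cache is
        // only an optimization, so losing entries just costs a re-verification
        if inner.len() > self.limit {
            inner.clear();
        }
    }

    /// Invoked from the cull path, i.e. on every new block, so entries
    /// never outlive the block they were rejected in.
    fn clear(&self) {
        self.inner.write().unwrap().clear();
    }
}
```

This answers the expiry question above: even if the limit is never hit, every entry is dropped at the next block boundary.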
@dvdplm or @andresilva for a final review?
* Store recently rejected transactions.
* Don't cache AlreadyImported rejections.
* Make the size of transaction verification queue dependent on pool size.
* Add a test for recently rejected.
* Fix logging for recently rejected.
* Make rejection cache smaller.
* obsolete test removed
* obsolete test removed
* Construct cache with_capacity.
Tests show that we receive the same transactions multiple times (especially when running with a large number of peers). This causes excessive verification overhead: the same transaction is verified over and over again and then rejected for exactly the same reason. This PR introduces a cache for rejections, which we clear after every block.

Also, because of the large volume of transactions, the sync -> IoWorker queue was easily saturated and a lot of transactions were simply dropped; this PR addresses that as well by making the queue size depend on the transaction pool size.

In a subsequent PR I plan to optimize cache eviction, because initial tests indicate that the most common rejection reasons don't need to be cleared at every block.

Related #8696
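To illustrate the fast path the description refers to, here is a hedged sketch of how a rejection cache short-circuits repeated verification. All names and signatures here are hypothetical stand-ins, not the actual pool API:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the pool's hash and error types.
type Hash = u64;
#[derive(Debug, Clone, PartialEq)]
struct RejectionError(String);

fn verify(
    cache: &mut HashMap<Hash, RejectionError>,
    hash: Hash,
    expensive_verify: impl Fn(Hash) -> Result<(), RejectionError>,
) -> Result<(), RejectionError> {
    // fast path: this transaction was already rejected since the last block
    if let Some(err) = cache.get(&hash) {
        return Err(err.clone());
    }
    match expensive_verify(hash) {
        Ok(()) => Ok(()),
        Err(err) => {
            // remember the rejection so repeated broadcasts of the same
            // transaction skip the expensive verification entirely
            cache.insert(hash, err.clone());
            Err(err)
        }
    }
}
```

On a cache hit the caller still receives the original, fully descriptive error (e.g. an InsufficientGasPrice with concrete values), which is why the cache stores errors rather than bare hashes.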