ChainDB: let the BlockFetch client add blocks asynchronously #2721
Conversation
Draft because the impact on bulk sync speed should be measured.
Force-pushed from 6e98685 to aa1413e
return $ \pt ->
  case pointToWithOriginRealPoint pt of
    Origin        -> False
    NotOrigin pt' -> checkBlocksToAdd pt' || checkVolDb (realPointHash pt')
Race condition: when a block has been removed from the queue but not yet written to the VolatileDB, this will return False. Possible solution: in the background thread, peek the block, process it, and then remove it from the queue. Add a comment that these two checks are not disjoint.
cc: @edsko.
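A minimal sketch of that peek-process-remove order, assuming a TBQueue-style queue and stand-in names for the processing step (the real queue is `cdbBlocksToAdd`):

```haskell
import Control.Concurrent.STM
         (TBQueue, atomically, peekTBQueue, readTBQueue)
import Control.Monad (forever, void)

-- Background thread sketch: peek the next block, process it fully (write it
-- to the VolatileDB and run chain selection), and only then pop it. While the
-- block is being processed it is still visible in the queue, so a check of
-- the form "in queue || in VolatileDB" never sees it momentarily vanish.
backgroundThread
  :: TBQueue blk       -- ^ stand-in for the ChainDB's blocks-to-add queue
  -> (blk -> IO ())    -- ^ stand-in for "write to VolatileDB + chain selection"
  -> IO ()
backgroundThread queue processBlock = forever $ do
  blk <- atomically $ peekTBQueue queue  -- look at the next block, keep it queued
  processBlock blk                       -- block is now (also) in the VolatileDB
  void $ atomically $ readTBQueue queue  -- only now remove it from the queue
```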
Good thing someone is paying attention 🙄
Force-pushed from c5b3a28 to 2defb1f
Fixes #2487. Currently, the effective queue size when adding blocks to the ChainDB is 1 (for why, see #2487). In this commit, we let the BlockFetch client add blocks fully asynchronously to the ChainDB, which restores the effective queue size to the configured value again, e.g., 10.

The BlockFetch client will no longer wait until the block has been written to the VolatileDB (and thus also not until the block has been processed by chain selection). The BlockFetch client can just hand over the block and continue downloading with minimum delay. To make this possible, we change the behaviour of `getIsFetched` and `getMaxSlotNo` to account for the blocks in the queue, otherwise the BlockFetch client might try to redownload already-fetched blocks.

This is an alternative to #2489, which let the BlockFetch client write blocks to the VolatileDB synchronously. The problem with that approach is that multiple threads are writing to the VolatileDB, instead of a single background thread. We have relied on the latter to simplify the VolatileDB w.r.t. consistency after incomplete writes.
See the comment on `cdbBlocksToAdd`.
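As a rough illustration of the hand-over (assuming a bounded `TBQueue` in place of the actual ChainDB queue type), the BlockFetch client now only enqueues the block and returns; the bound on the queue, e.g. 10, is what provides back-pressure when chain selection falls behind:

```haskell
import Control.Concurrent.STM (TBQueue, atomically, writeTBQueue)

-- BlockFetch-side sketch: adding a block is just a bounded enqueue; the
-- client does not wait for the VolatileDB write or for chain selection.
-- writeTBQueue blocks only when the queue is full, giving back-pressure.
addBlockAsync :: TBQueue blk -> blk -> IO ()
addBlockAsync queue blk = atomically $ writeTBQueue queue blk
```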
Force-pushed from 2defb1f to 5299069
This is a port of PR IntersectMBO/ouroboros-network#2721 to the new ChainSelQueue. Co-authored-by: mrBliss <thomas@well-typed.com>
Port of IntersectMBO/ouroboros-network#2721 Co-Authored-By: Thomas Winant <thomas@well-typed.com>
Routine cleanup of stale PRs targeting Consensus components (which nowadays live in https://github.com/IntersectMBO/ouroboros-consensus). No work is lost, see https://stackoverflow.com/a/17954767.
Port of IntersectMBO/ouroboros-network#2721 Co-authored-by: Thomas Winant <thomas@well-typed.com> Co-authored-by: Alexander Esgen <alexander.esgen@iohk.io>
Fixes IntersectMBO/ouroboros-consensus#655.
Currently, the effective queue size when adding blocks to the ChainDB is 1 (for
why, see IntersectMBO/ouroboros-consensus#655). In this commit, we let the BlockFetch client add blocks fully
asynchronously to the ChainDB, which restores the effective queue size to the
configured value again, e.g., 10.
The BlockFetch client will no longer wait until the block has been written to
the VolatileDB (and thus also not until the block has been processed by chain
selection). The BlockFetch client can just hand over the block and continue
downloading with minimum delay. To make this possible, we change the behaviour
of `getIsFetched` and `getMaxSlotNo` to account for the blocks in the queue,
otherwise the BlockFetch client might try to redownload already-fetched blocks.
This is an alternative to #2489, which let the BlockFetch client write blocks to
the VolatileDB synchronously. The problem with that approach is that multiple
threads are writing to the VolatileDB, instead of a single background thread. We
have relied on the latter to simplify the VolatileDB w.r.t. consistency after
incomplete writes.
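As a hedged sketch of what "account for the blocks in the queue" means for `getMaxSlotNo` (types and accessors here are assumptions, not the actual API): the reported maximum slot is taken over both the VolatileDB and the still-queued blocks, so BlockFetch never re-requests a block it has already handed over.

```haskell
import Control.Concurrent.STM (STM)

-- Sketch only: the real MaxSlotNo type and queue representation live in the
-- ChainDB. The point is that both sources are consulted.
getMaxSlotNo
  :: Ord maxSlot
  => STM maxSlot  -- ^ max slot according to the VolatileDB
  -> STM maxSlot  -- ^ max slot of the queued blocks (hypothetical accessor)
  -> STM maxSlot
getMaxSlotNo volDbMax queuedMax = max <$> volDbMax <*> queuedMax
```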