Consider giving each networking subsystem its own network-bridge and subprotocol #810
The network bridge was initially designed and implemented in late 2020, before we'd fully implemented all of the necessary networking protocols for parachains. At that time, we weren't sure whether the gossip protocols would need to share state in some way or if they'd be completely separate. Furthermore, we combined them for the purpose of sharing a View.

As we've gone on, we've found that the subsystems' protocols are largely independent of each other, and that the main cost of separating them would be giving up the shared View and finding some other mechanism to distribute it.

With that in mind, we should experiment with extracting network-bridge into a generic utility and embedding it into the subsystems directly, with each subsystem having its own subprotocol. If it doesn't negatively affect performance overall, we should create an upgrade path to support old peers on the standalone network bridge as well as new peers on the divided subprotocols. After that is deployed and most nodes have upgraded, we can remove the standalone network-bridge entirely.

As an added bonus, we can release our generic bridge as a crate alongside or as part of orchestra, which will make writing networked parachains even easier.
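To illustrate the direction, here is a minimal, hypothetical Rust sketch of such a generic bridge, parameterized over a per-subsystem subprotocol. None of these names (`Subprotocol`, `Bridge`, `PROTOCOL_NAME`) are the real polkadot or orchestra API; this is only one shape the extracted utility could take.

```rust
// A minimal sketch, assuming the direction proposed above: a generic bridge
// parameterized over a per-subsystem subprotocol. All names here are
// illustrative, not the real polkadot/orchestra API.
use std::collections::HashMap;
use std::marker::PhantomData;

type PeerId = u64;

// Each subsystem defines its own wire protocol: a name for negotiation and
// a message type with its own encoding.
trait Subprotocol {
    type Message;
    const PROTOCOL_NAME: &'static str;

    fn encode(msg: &Self::Message) -> Vec<u8>;
    fn decode(bytes: &[u8]) -> Option<Self::Message>;
}

// The generic bridge each subsystem embeds: it tracks its own peers and
// (de)multiplexes only its own subprotocol, instead of one shared bridge
// fanning out to every subsystem.
struct Bridge<P: Subprotocol> {
    peers: HashMap<PeerId, ()>,
    _marker: PhantomData<P>,
}

impl<P: Subprotocol> Bridge<P> {
    fn new() -> Self {
        Self { peers: HashMap::new(), _marker: PhantomData }
    }

    fn on_peer_connected(&mut self, peer: PeerId) {
        self.peers.insert(peer, ());
    }

    // Outgoing messages are encoded with this subsystem's own codec.
    fn send(&self, _peer: PeerId, msg: &P::Message) -> Vec<u8> {
        P::encode(msg)
    }

    // Incoming bytes are decoded locally; no central bridge in the path.
    fn on_bytes(&self, bytes: &[u8]) -> Option<P::Message> {
        P::decode(bytes)
    }
}

// Example instantiation for a hypothetical approval-distribution subprotocol.
struct ApprovalProtocol;

impl Subprotocol for ApprovalProtocol {
    type Message = Vec<u8>; // stand-in for a real approval message type
    const PROTOCOL_NAME: &'static str = "/polkadot/approval/1";

    fn encode(msg: &Self::Message) -> Vec<u8> { msg.clone() }
    fn decode(bytes: &[u8]) -> Option<Self::Message> { Some(bytes.to_vec()) }
}

fn main() {
    let mut bridge: Bridge<ApprovalProtocol> = Bridge::new();
    bridge.on_peer_connected(7);
    let wire = bridge.send(7, &vec![1, 2, 3]);
    assert_eq!(bridge.on_bytes(&wire), Some(vec![1, 2, 3]));
}
```

Since each instantiation negotiates its own protocol name, the upgrade path described above could run the standalone bridge and the per-subsystem bridges side by side until old peers have upgraded.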
Comments

Yes, although we need to be careful with back pressure. I realized recently that our current architecture has some nice properties by accident; in particular, our combined, non-specific back pressure is actually a good thing. Consider the case where approval-voting can't keep up. Right now this will lead to high ToFs at approval-voting and, by extension, at approval-distribution; more importantly, it will fill up the incoming channel, which leads to the network-bridge getting blocked and slowed down, which in turn slows down the backing pipeline. Therefore less stuff will get successfully backed, approval-voting can recover (as it gets less work to do), and finality can proceed. Now consider a world where approval-distribution will really only back-pressure on receiving approval votes. Depending on low-level details (in the end everything is a single TCP connection), this could lead to approval-distribution getting constantly overwhelmed while backing continues operating at full speed. Especially with escalation strategies (but even without), things might easily go south in that scenario, resulting in an ever-increasing finality lag.
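To make the coupling concrete, here is a minimal sketch of the current behavior, assuming one bounded queue shared by all subsystems. The event type, queue bound, and timing below are invented for illustration; this is not the real subsystem API.

```rust
// A minimal sketch of combined back pressure: all traffic flows through one
// bounded channel, so a slow consumer throttles every producer.
use std::sync::mpsc::sync_channel;
use std::thread;
use std::time::Duration;

enum NetworkEvent {
    ApprovalVote(u32),
    BackingStatement(u32),
}

fn main() {
    // One bounded queue shared by all subsystems, as in the current bridge.
    let (bridge_tx, subsystems_rx) = sync_channel::<NetworkEvent>(16);

    // A slow consumer stands in for approval-voting falling behind.
    let consumer = thread::spawn(move || {
        for event in subsystems_rx {
            if let NetworkEvent::ApprovalVote(_) = event {
                thread::sleep(Duration::from_millis(10)); // simulate expensive work
            }
        }
    });

    // The bridge forwards everything through the same channel, so once the
    // queue is full, `send` blocks and backing traffic is throttled too.
    for i in 0..100 {
        bridge_tx.send(NetworkEvent::ApprovalVote(i)).unwrap();
        bridge_tx.send(NetworkEvent::BackingStatement(i)).unwrap();
    }
    drop(bridge_tx);
    consumer.join().unwrap();
}
```

With a separate channel per subprotocol, the `send` of backing statements would never block on a slow approval consumer, which is exactly the failure mode described above.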
We should maybe have the back pressure come from candidate validation instead, with some priority levels: approval gets top priority, and then, if we can't validate any more candidates, we'd get back pressure on backing that way. There's always the risk that some node can't keep up with the network, though. We could also have some logic in backing that slows it down according to some monotonically increasing …
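A minimal sketch of what that could look like, assuming priority levels at candidate validation as suggested: approval requests are always accepted and served first, while backing requests go through a bounded queue whose fullness is the back-pressure signal. All names here (`ValidationRequest`, `PrioritizedValidator`, the capacity) are hypothetical.

```rust
// A sketch of priority-aware back pressure at candidate validation.
use std::collections::VecDeque;

#[derive(Debug)]
enum ValidationRequest {
    Approval { candidate: u32 },
    Backing { candidate: u32 },
}

struct PrioritizedValidator {
    approval: VecDeque<ValidationRequest>,
    backing: VecDeque<ValidationRequest>,
    backing_capacity: usize,
}

impl PrioritizedValidator {
    fn new(backing_capacity: usize) -> Self {
        Self { approval: VecDeque::new(), backing: VecDeque::new(), backing_capacity }
    }

    // Approval work is always accepted; it must not be starved.
    fn push_approval(&mut self, candidate: u32) {
        self.approval.push_back(ValidationRequest::Approval { candidate });
    }

    // Backing work is rejected once its bounded queue is full; the caller
    // (the backing subsystem) experiences this as back pressure.
    fn try_push_backing(&mut self, candidate: u32) -> Result<(), u32> {
        if self.backing.len() >= self.backing_capacity {
            return Err(candidate);
        }
        self.backing.push_back(ValidationRequest::Backing { candidate });
        Ok(())
    }

    // The validation worker always prefers approval requests.
    fn next(&mut self) -> Option<ValidationRequest> {
        self.approval.pop_front().or_else(|| self.backing.pop_front())
    }
}

fn main() {
    let mut validator = PrioritizedValidator::new(2);
    validator.try_push_backing(10).unwrap();
    validator.try_push_backing(11).unwrap();
    assert!(validator.try_push_backing(12).is_err()); // backing sees back pressure
    validator.push_approval(1);
    // Approval work is served before the earlier backing work.
    assert!(matches!(validator.next(), Some(ValidationRequest::Approval { .. })));
}
```

This keeps the desirable property from the previous comment (an overloaded validator slows backing down) while guaranteeing that approval work is never starved by backing work.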