[multistream-select] Reduce roundtrips in protocol negotiation. #1212
Conversation
In preparation for the eventual switch from tokio to std futures. Includes some initial refactoring in preparation for further work in the context of libp2p#659.
go-libp2p recently started doing this and it significantly reduces connection negotiation times.
So, go-libp2p does this iff the dialer knows that the responder supports the protocol. However, it's technically unsound: multiformats/go-multistream#20. This was one of my original motivations for multistream 2.0.
As an aside, the test
This is the case I'm talking about.
Yes, however, it may be too late by that point. By the time peer A sees the

By example:

Peer A sends:
Peer B will read:
Then send (1):
Then read:
And send another (2):
And finally read:
Send:
And finally process the request/send the response. The issue is that we can get all the way down here before the other side receives (1).
(I accidentally deleted my earlier comment when I instead wanted to edit it.) I think I see now what you are pointing at, which is rather what happens on peer B. Indeed, this implementation would then have peer B thinking it agreed with peer A on protocol B, processing "other data" accordingly, as you say. Thanks again for pointing that out.
So, to avoid the mentioned problem, it seems that in multistream-select 1.0 there must always be a request/response boundary between upgrades. It would still seem safe, however, to allow the last upgrade on a connection or substream to be "lazy", which may still be very beneficial for substream upgrades in the substream-per-request scenario, allowing 0-RTT negotiation on these substreams. Does that sound right, or am I overlooking something again?
Technically, the final request may look like an upgrade. Unfortunately, a multistream message is just

I'm pretty sure the only completely sound way to do this is to only use the 0-RTT case when we know that the other side speaks the protocol. Any other solutions are shades of gray (that may be fine in practice until we get multistream-2.0 and fix this once and for all).
The test implicitly relied on "slow" connection establishment in order to have a sufficient probability of passing. With the removal of roundtrips in multistream-select, it is now more likely that within the up to 50ms duration between swarm1 and swarm2 dialing, the connection is already established, causing the expectation of step == 1 to fail when receiving a Connected event, since the step may then still be 0.

This commit aims to avoid these spurious errors by detecting runs during which a connection is established "too quickly", repeating the test run.

It still seems theoretically possible that, if connections are always established "too quickly", the test runs forever. However, given that the delta between swarm1 and swarm2 dialing is 0-50ms and that the TCP transport is used, that seems probabilistically unlikely. Nevertheless, the purpose of the artificial dialing delay between swarm1 and swarm2 should be re-evaluated and possibly at least the maximum delay further reduced.
While multistream-select, as a standalone library and providing an API at the granularity of a single negotiation, supports lazy negotiation (and in particular 0-RTT negotiation), in the context of libp2p-core where any number of negotiations are composed generically within the concept of composable "upgrades", it is necessary to wait for protocol negotiation between upgrades to complete.
Since reading from a Negotiated I/O stream implicitly flushes any pending negotiation data, there is no pitfall involved in not waiting for completion.
```rust
impl fmt::Display for Protocol {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", String::from_utf8_lossy(&self.0))
    }
}
```
I think this is ok for now (although I don't like the usage of the `Display` trait), but if multistream-select forces you to use UTF-8, I think we should open an issue to change the design to use `String` and `&str`.

We need to look at the specs of multistream-select 2.0 first, however, in order not to have to potentially revert that later.
```rust
///
/// This limit is necessary in order to be able to unambiguously parse
/// response messages without knowledge of the corresponding request.
/// 140 comes about from 3 * 47 = 141, where 47 is the ascii/utf8
```
I don't really understand what the ASCII encoding of `/` has to do with the length.
The issue was that if a "protocols list response" happens to contain 47 protocols, then its encoding starts with a uvi-length byte that resembles `/`, and hence the decoder interprets the entire thing as a single protocol message upon seeing the leading `/`. With the length restriction, decoding is unambiguous, because a "protocols list response" with 47 protocols is guaranteed to exceed that length and hence cannot be misinterpreted as a single protocol.
This is certainly not pretty but simply due to the fact that we have Sink/Stream abstractions over messages, so that decoding a message is unaware of the request in whose context decoding happens, i.e. there is no "expected" message when decoding. The protocol messages of multistream-select are unfortunately not designed for unambiguous decoding. We may want to go with a different approach of context-aware decoding in a future implementation, though I'm not sure if it can achieve the "elegance" of the Sink/Stream abstraction.
Let me know if my wording in the code comments can be made clearer.
> We may want to go with a different approach of context-aware decoding in a future implementation, though I'm not sure if it can achieve the "elegance" of the Sink/Stream abstraction.
I think an approach using async/await can be extremely straight-forward, but for now it's ok.
```rust
if let State::Completed { remaining, .. } = &mut self.state {
```
Nit-pick: this block could be moved to line 82 (`Ok(Async::Ready(())) => {}`) for clarity.
Hm, I don't think so, because this should also happen if `poll_flush` returned an `Err` with `WriteZero`.
```rust
    return Ok(Async::NotReady)
}
SeqState::SendProtocol { mut io, protocol } => {
    let p = Protocol::try_from(protocol.as_ref())?;
```
It is a little saddening that this PR reverses the direction of #800.
I was not completely unaware of that prior work. It is my understanding though (and you're probably aware) that `Bytes` / `BytesMut` do not even allocate for <= 31 bytes, which probably covers the majority of protocol names.

Together with the fact that this PR also generally reduces I/O roundtrips and the number of `send` / `recv` (sys)calls on the underlying transport, my hope is that there is overall certainly no severe performance regression (indeed, there should be an improvement, though I have no benchmarks to back up that claim, except for the fact that some tests started failing because of implicit assumptions on the "delay" caused by protocol negotiation over TCP transports). It may also be seen as a minor advantage for the implementation that cloning a `Protocol` is known to be cheap, whereas that is an implicit assumption for `N: Clone`.
I admit though that I'm generally not too concerned with performance on the level of counting allocations, unless there are benchmarks in place that call out major regressions.
As I noted in some inline comments, personally I would actually like to eventually replace the `N` type parameter in the API of this crate with the `Protocol` type, which this PR only does internally for now. Given that protocol names are actually supposed to be utf8-encoded strings, a related (vague) thought is that the internal representation of `Protocol` could possibly be changed later to `Cow<'static, str>` or something similar, which could then avoid additional intermediate allocations again also for larger, `'static` protocol names.
So overall, does that sound reasonable or what is your verdict on the PR? Do you generally approve of the changes or should more time be invested into analysing the performance and allocations, possibly by adding benchmarks (which could also be done as a follow-up, of course)?
> So overall, does that sound reasonable or what is your verdict on the PR? Do you generally approve of the changes or should more time be invested into analysing the performance and allocations, possibly by adding benchmarks (which could also be done as a follow-up, of course)?
Overall I am in favour of this PR and I would not analyse the allocations any further. I have no doubts that, due to the reduced number of round trips, the overall performance is better. The creation of intermediate `Bytes` is just something I noticed while reading the diff.
Let's merge this? 🎉
@tomaka Sure, I'm just wondering if you'd like to make a
I don't think it's needed. If we had an easy way to do releases, I'd be in favour of publishing 0.11.1. However that's not the case, and releasing a version is slow and painful at the moment.
Overview & Motivation
This is another attempt at #659, superseding #1121 and sitting on top of #1203. The following are the main conceptual changes:
The multistream-select header is always sent together with the first negotiation message.
Enable 0-RTT: If the dialer only supports a single protocol, it can send protocol data (e.g. the actual application request) together with the multistream-select header and protocol proposal. Similarly, if the listener supports a proposed protocol, it can send protocol data (e.g. the actual application response) together with the multistream-select header and protocol confirmation.
In general, the dialer "settles on" an expected protocol as soon as it runs out of alternatives. Furthermore, both dialer and listener do not immediately flush the final protocol confirmation, allowing it to be sent together with application protocol data. Attempts to read from an incompletely negotiated I/O stream implicitly flushes any pending data.
A clean / graceful shutdown of an I/O stream always tries to complete protocol negotiation, so even if a request does not expect a response, as long as the I/O stream is shut down cleanly, an error will occur if the remote did not support the expected protocol after all.
If desired in specific cases, it is possible to explicitly wait for full completion of the negotiation before sending or receiving data by waiting on the future returned by `Negotiated::complete()`.

Overall, the hope is that this especially improves the situation for the substream-per-request scenario.
The public API of multistream-select changed slightly, requiring both `AsyncRead` and `AsyncWrite` bounds for async reading and writing due to the implicit buffering and "lazy" negotiation. The error types have also been changed, but they were not previously fully exported anyway.

This implementation comes with some general refactoring and simplifications and some more tests, e.g. there was an edge case relating to a possible ambiguity when parsing multistream-select protocol messages.
Examples
0-RTT Negotiation
Logs of a 0-RTT negotiation; the dialer supports a single protocol that is also supported by the listener:
A corresponding packet exchange excerpt (Wireshark) of a 0-RTT negotiation, where `ping` and `pong` are "application protocol data" and e:

Other Examples
Logs from a 1-RTT negotiation; the dialer supports two protocols, the second of which is supported by the listener.
Since protocol negotiation "nests", the following may also be observed as a single packet (again a simplified Wireshark excerpt):
Multistream-select 2.0
Given the status of the spec and the apparent lack of any implementations that I'm aware of, I didn't actually bother trying to implement it. Instead, these changes incorporate minimal compatibility with the currently proposed upgrade path by having the listener (aka responder) always answer `/multistream/2.0.0` with `na`. The implementation could be continued once the spec becomes "actionable".