Reuse capacity when possible in <BytesMut as Buf>::advance impl #698

Merged: 1 commit into tokio-rs:master on Apr 25, 2024

Conversation

paolobarbolini (Contributor):

While thinking about how I use BytesMut in my own code, I realized that the following pattern (shown in pseudocode) sometimes comes up:

use bytes::{Buf, BytesMut}; // `advance` comes from the `Buf` trait

let mut buf = BytesMut::with_capacity(8196);

loop {
    read_into_bufmut(&mut buf);

    if some_condition {
        // consume the data and then discard it
        let consumed = process_data(&buf);
        buf.advance(consumed);
        // in most cases the above `advance` ends-up consuming the entire length of `buf`
    } else {
        // consume the data as `Bytes`
        ship_data(buf.split().freeze());

        // (this is just to explain that `VecDeque` wouldn't be efficient here)
    }
}

In this particular example it would be more efficient to call buf.clear() when consumed == buf.len(), instead of calling advance and eventually going through the allocation machinery again.
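
In caller code, that manual workaround would look roughly like this (a sketch against the pseudocode above, reusing the `consumed` and `buf` names from it):

if consumed == buf.len() {
    // The whole buffer was processed: reset the length to 0 so the next
    // read can reuse the existing capacity from the start.
    buf.clear();
} else {
    // Only a prefix was processed: fall back to advancing past it.
    buf.advance(consumed);
}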

This PR makes advance do it automatically in the cases where it is safe to do so (which is why I didn't put it inside advance_unchecked). I'm not sure whether the Buf API contract allows it; the only related hint I've found is the #[must_use = "consider BytesMut::advance(len()) if you don't need the other half"] attribute on BytesMut::split().
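
Concretely, the change adds a fast path to the Buf::advance implementation on BytesMut, roughly along these lines (a sketch of the idea, not the verbatim diff):

fn advance(&mut self, cnt: usize) {
    assert!(
        cnt <= self.remaining(),
        "cannot advance past `remaining`: {:?} <= {:?}",
        cnt,
        self.remaining(),
    );
    // Advancing by the entire length is equivalent to clearing, except
    // `clear` resets the start of the buffer, so the full capacity stays
    // available for reuse instead of being carved off the front.
    if cnt == self.len() {
        self.clear();
        return;
    }
    unsafe {
        // SAFETY: `cnt <= self.remaining()` was asserted above.
        self.advance_unchecked(cnt);
    }
}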

@Darksonn (Contributor) left a comment:

Would it make more sense to move this check to advance_unchecked?

(Review comment on src/bytes_mut.rs: outdated, resolved.)

@paolobarbolini (Contributor, Author):

Would it make more sense to move this check to advance_unchecked?

I'm not quite sure how to structure this, because advance_unchecked also gets called by split_to and split_off. These functions expect advance_unchecked to actually advance the position, otherwise they would end up with the two instances of BytesMut having overlapping ranges.
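
For illustration, a simplified sketch of that dependency (the `shallow_copy` helper here is hypothetical, standing in for the crate's internal machinery for creating a second handle to the same allocation):

fn split_to(&mut self, at: usize) -> BytesMut {
    assert!(at <= self.len());
    let mut front = self.shallow_copy(); // hypothetical: shares the allocation
    front.truncate(at);                  // front keeps bytes [0, at)
    unsafe {
        // This must truly move the start of `self` forward to `at`. If it
        // quietly did `self.clear()` instead, `self` and `front` would both
        // cover [0, at) and BytesMut's non-overlapping guarantee would break.
        self.advance_unchecked(at);
    }
    front
}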

@Darksonn (Contributor):

The conflict with split_to and split_off is a good reason to not put it in advance_unchecked.

@braddunbar (Contributor) left a comment:

This is nice, thanks! I only have two small requests:

  1. Can we use self.remaining() or self.len consistently in this function? Either one is fine with me, but it's not obvious that those values are synonymous.
  2. Can we move this conditional before the assert! above? If cnt == self.len, we already know that cnt <= self.remaining() so we don't need to check it twice.
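
Folding both requests in, the advance implementation would end up roughly like this (a sketch of the reviewed shape, not the verbatim merged code):

fn advance(&mut self, cnt: usize) {
    // Consuming everything that remains is the same as clearing, and `clear`
    // lets the buffer reuse its full capacity. Checking this first also
    // means the bounds check below never runs for this case.
    if cnt == self.remaining() {
        self.clear();
        return;
    }
    assert!(
        cnt <= self.remaining(),
        "cannot advance past `remaining`: {:?} <= {:?}",
        cnt,
        self.remaining(),
    );
    unsafe {
        // SAFETY: `cnt` is strictly less than `self.remaining()` here.
        self.advance_unchecked(cnt);
    }
}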

@paolobarbolini (Contributor, Author):

Good idea. Fixed!

@Darksonn merged commit baa5053 into tokio-rs:master on Apr 25, 2024
15 checks passed
@mox692 mentioned this pull request on Jul 30, 2024

@camshaft (Contributor) commented on Aug 1, 2024:

This change broke our usage of BytesMut in s2n-quic, where we have a similar expectation:

These functions expect advance_unchecked to actually advance the position, otherwise they would end up with the two instances of BytesMut having overlapping ranges.

In our case, we expect that advance actually reduces the overall capacity of the chunk. There's also no efficient way to work around this, since the only option is to call split_to and discard the result.
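
A sketch of that workaround (the helper name is illustrative, not s2n-quic's actual code):

// Advance `buf` while guaranteeing that its capacity shrinks, even when
// `cnt == buf.len()`: `split_to` always moves the start of the buffer, so
// the bytes in front of the new start are no longer reachable through `buf`.
fn advance_and_shrink(buf: &mut BytesMut, cnt: usize) {
    let _discarded = buf.split_to(cnt);
}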

@Darksonn (Contributor) commented on Aug 1, 2024:

That's unfortunate; we may have to revert this change. Can you open a bug report with details about this regression?

@camshaft (Contributor) commented on Aug 1, 2024:

Opened #725
