[ipfs/go-bitswap] Bitswap servers should reuse streams for sending responses #80

Open
aschmahmann opened this issue Feb 24, 2022 · 0 comments
Labels
kind/enhancement A net-new feature or improvement to an existing feature P1 High: Likely tackled by core team if no one steps up


aschmahmann commented Feb 24, 2022

Problem

Sending wantlists to peers reuses streams when possible: https://github.com/ipfs/go-bitswap/blob/2b51297a0b68198b6c4bcacdd8868a6df8dcd182/network/ipfs_impl.go#L105-L108

Responding to messages, however, opens a new stream for every response: https://github.com/ipfs/go-bitswap/blob/2b51297a0b68198b6c4bcacdd8868a6df8dcd182/network/ipfs_impl.go#L326-L334

The main issues with this, AFAICT, are:

  1. Reduced utility of backpressure: instead of messages getting queued up and handled one at a time, the multiplexer has to deal with all the responses at once
  2. Extra resource consumption from opening new streams. Opening a stream isn't terribly expensive, but it's a waste
  3. Multiplexer window resetting:
    • Similar to TCP, multiplexers like yamux have window sizes limiting how many bytes they're willing to send at a time without acknowledgement
    • The yamux multiplexer has a maximum window size of 16MiB https://github.com/libp2p/go-libp2p-yamux/blob/90fad39d3b2797e48d3a400d077f3cf282ed25d1/transport.go#L22
    • But an initial window of only 256KiB https://github.com/libp2p/go-yamux/blob/e7b88075cff76b8c8372efdc93884a0d1c3cfe9d/const.go#L118
    • Because each response gets a fresh stream, the window is reset for every block
    • An example of where this would be brutal is sequentially (i.e. strictly verifiably) downloading a linear DAG of 1MiB blocks, since each block requires multiple round trips to grow the window
    • This also affects default go-ipfs UnixFS DAG transfer: the default chunker creates 256KiB blocks (with protobuf wrapping pushing their size above 256KiB), and extra Bitswap message framing pushes the response size further above 256KiB, meaning each block requires two round trips to transfer
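The window-ramp-up cost described above can be sketched with a toy model. This is not yamux's actual flow-control algorithm (yamux grows the window based on observed receive rate); the sketch just assumes the window starts small and can at best double per acknowledgement round trip, to show why a fresh stream per block is expensive:

```go
package main

import "fmt"

// roundTripsForBlock is a toy model, not yamux's actual flow-control
// algorithm: assume the stream's receive window starts at initialWindow
// and can at best double after each acknowledgement round trip.
func roundTripsForBlock(blockSize, initialWindow int) int {
	trips := 0
	window := initialWindow
	for sent := 0; sent < blockSize; {
		sent += window
		window *= 2
		trips++
	}
	return trips
}

func main() {
	const KiB = 1024
	// With a fresh stream per response the window starts at 256KiB every
	// time, so a 1MiB block pays the ramp-up cost on every transfer.
	fmt.Println(roundTripsForBlock(1024*KiB, 256*KiB)) // prints 3
	// A reused stream whose window has already grown (up to 16MiB max)
	// needs only one trip per block.
	fmt.Println(roundTripsForBlock(1024*KiB, 16*1024*KiB)) // prints 1
}
```

Under this model, a long-lived reused stream amortizes the ramp-up once, while per-response streams pay it for every block.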

Discovered while doing some performance benchmarking in ipfs/test-plans#4.

Proposed Solution

It turns out that the client-side code is more than happy to continuously process multiple messages per stream:
https://github.com/ipfs/go-bitswap/blob/dbfc6a1d986e18402d8301c7535a0c39b2461cb7/network/ipfs_impl.go#L406

Some basic inspection suggests the code supporting this has been around since 2016 (ipfs/go-bitswap@58d1750), so it should be safe to rely on.

Reusing the request stream for responses is not supported and would be a protocol-breaking change, so we need two long-lived streams per peer, one per direction.

We should be able to reuse or copy most of the stream-reuse code from the streamMessageSender used by the wantlists. However, some care will be needed to make sure backpressure is handled correctly when the stream is backed up, so that we don't pull tasks off the task queue and load blocks from disk when we have nowhere to send them.
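The backpressure property described above falls out naturally if a single goroutine owns the long-lived response stream and only dequeues a task once the previous write completes. A minimal sketch, with hypothetical names (this is not go-bitswap's actual API):

```go
package main

import "fmt"

// task stands in for a queued response waiting to be sent.
type task struct{ payload []byte }

// sendLoop owns the long-lived response stream. Because the write is
// blocking, a backed-up stream stalls this loop; the bounded task
// queue then fills, and upstream code stops loading blocks from disk
// for responses it has nowhere to send.
func sendLoop(tasks <-chan task, write func([]byte) error) error {
	for t := range tasks {
		// Blocks while the multiplexer window is full: no further
		// tasks are dequeued until this message is accepted.
		if err := write(t.payload); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	tasks := make(chan task, 2) // bounded queue => backpressure upstream
	tasks <- task{[]byte("a")}
	tasks <- task{[]byte("b")}
	close(tasks)

	var sent int
	sendLoop(tasks, func(b []byte) error {
		sent += len(b) // stand-in for writing to the stream
		return nil
	})
	fmt.Println(sent) // prints 2
}
```

The bounded channel is doing the real work here: producers block when it is full, which is exactly the "don't load blocks you can't send" behavior we want.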

  • Advantages:
    • Not a protocol-breaking change
    • On the surface doesn't seem too hard to implement (although the backpressure code might end up being more involved)
    • Solves the problem
  • Disadvantages:
    • Clients can't safely assume this is happening everywhere until all servers have upgraded
    • Without a protocol-break signal, or hacks like inspecting UserAgent strings, clients can't prioritize fetching from upgraded peers (although, to be fair, there are no codepaths for this anyway, so probably not much is lost here, at least in go-bitswap)
@aschmahmann aschmahmann added the need/triage Needs initial labeling and prioritization label Feb 24, 2022
@aschmahmann aschmahmann added kind/enhancement A net-new feature or improvement to an existing feature P1 High: Likely tackled by core team if no one steps up and removed need/triage Needs initial labeling and prioritization labels Feb 25, 2022
@Jorropo Jorropo changed the title Bitswap servers should reuse streams for sending responses [ipfs/go-bitswap] Bitswap servers should reuse streams for sending responses Jan 27, 2023
@Jorropo Jorropo transferred this issue from ipfs/go-bitswap Jan 27, 2023
guseggert pushed a commit that referenced this issue Mar 15, 2023
* feat(block options): add store codec
* refactor: BlockPutSettings.CidPrefix

Removes duplicated fields and replaces them with cid.Prefix

Codec, MhType and MhLength were already in the prefix, and we already return
the prefix. A lot of duplicated values, and the code responsible for syncing
them, did not really need to exist.

* test: CIDv1 raw and dag-pb cases
* chore: release 0.7.0

Co-authored-by: Marcin Rataj <lidel@lidel.org>

This commit was moved from ipfs/interface-go-ipfs-core@a3374d9
Jorropo pushed a commit to ipfs/go-libipfs-rapide that referenced this issue Mar 23, 2023