fix: improve sessions implementation #495
Conversation
Moves most common session code into an abstract superclass to remove duplication.

- Sessions are created synchronously
- The root CID of a session is filled in on the first CID retrieval
- Providers are found and queried for the root block directly; any that have it are added to the session
- Providers that have errored (e.g. protocol selection failure) are excluded from the session
- Bitswap only queries provider peers, not directly connected peers
- HTTP Gateways are loaded from the routing
- When providers are returned without multiaddrs, we try to load them without blocking the yielding of other providers
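The last point (yielding providers immediately while resolving address-less ones in the background) could be sketched roughly like this. `Provider`, `yieldProviders` and the `resolve` callback are simplified stand-ins for illustration, not the actual Helia API:

```typescript
interface Provider {
  id: string
  multiaddrs: string[]
}

// Yield providers that already have multiaddrs immediately; start
// resolving address-less providers without awaiting them inline, then
// drain those resolutions after the fast path.
async function * yieldProviders (
  providers: Provider[],
  resolve: (id: string) => Promise<string[]>
): AsyncGenerator<Provider> {
  const pending: Array<Promise<Provider | undefined>> = []

  for (const provider of providers) {
    if (provider.multiaddrs.length > 0) {
      yield provider
    } else {
      // kick off resolution but do not block the generator on it
      pending.push(
        resolve(provider.id)
          .then(multiaddrs => ({ ...provider, multiaddrs }))
          .catch(() => undefined) // ignore resolution failures
      )
    }
  }

  for (const result of await Promise.all(pending)) {
    if (result != null && result.multiaddrs.length > 0) {
      yield result
    }
  }
}
```

Providers with known addresses are never delayed by a slow peer-routing lookup for the others.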
whew. I'm a big fan of all the listed improvements. Various comments for things I didn't understand and some minor improvements we could make, but overall lgtm
```diff
 return {
   announce: async (cid, block, options) => {
     await this.bitswap.notify(cid, block, options)
   },

 retrieve: async (cid, options) => {
-  return session.want(cid, options)
+  return session.retrieve(cid, options)
```
nice. I like the idea of normalizing session usage to blockBroker terminology 🎉
```diff
-async createSession (root: CID, options?: CreateSessionOptions<BitswapWantBlockProgressEvents>): Promise<BlockBroker<BitswapWantBlockProgressEvents, BitswapNotifyProgressEvents>> {
-  const session = await this.bitswap.createSession(root, options)
+createSession (options?: CreateSessionOptions<BitswapWantBlockProgressEvents>): BlockBroker<BitswapWantBlockProgressEvents, BitswapNotifyProgressEvents> {
+  const session = this.bitswap.createSession(options)

 return {
   announce: async (cid, block, options) => {
     await this.bitswap.notify(cid, block, options)
```
I was thinking maybe this should be something like `session.announce`, but... we want all bitswap peers to know about blocks we have now, right?

It feels like there may be something to optimize here. If we've got a session of bitswap negotiations going on, we want any peers we sent a WANT to receive a HAVE/DONTWANT from us, but if we didn't send a WANT/WANTHAVE to a peer, and they didn't send a WANT/WANTHAVE to us, we don't necessarily need to notify that peer that we have the block, right?
> we want all bitswap peers to know about blocks we have now right?

Yes, if any peers want the block we are announcing (e.g. it's in the want list they've previously sent us), the protocol says we are to send it to them.

> we want any peers we sent a WANT, to receive a HAVE/DONTWANT from us

If we've sent them a `WANT-BLOCK` or `WANT-HAVE`, and we acquire the block by other means before they reply with `HAVE` or `DONT-HAVE`, we send them an updated wantlist cancelling that particular `WANT-*`.

> if we didn't send a WANT/WANTHAVE a want to a peer, and they didn't send a WANT/WANTHAVE to us, we don't necessarily need to notify that peer that we have the block, right?

We'll only notify them if they've previously sent us a `WANT-BLOCK` or a `WANT-HAVE` and they've not subsequently cancelled that `WANT-*`.
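The notify rules described above could be modeled roughly like this. `WantTracker` and its method names are hypothetical, not the bitswap implementation:

```typescript
type WantType = 'WANT-BLOCK' | 'WANT-HAVE'

// Track each peer's outstanding, uncancelled wants so that when we
// acquire a block we only notify peers that still want it.
class WantTracker {
  // peerId -> (cid -> want type)
  private readonly wants = new Map<string, Map<string, WantType>>()

  receiveWant (peer: string, cid: string, type: WantType): void {
    const peerWants = this.wants.get(peer) ?? new Map<string, WantType>()
    peerWants.set(cid, type)
    this.wants.set(peer, peerWants)
  }

  receiveCancel (peer: string, cid: string): void {
    this.wants.get(peer)?.delete(cid)
  }

  // peers that should be notified when we acquire the block for `cid`
  peersToNotify (cid: string): string[] {
    const peers: string[] = []
    for (const [peer, peerWants] of this.wants) {
      if (peerWants.has(cid)) {
        peers.push(peer)
      }
    }
    return peers
  }
}
```

A peer that cancelled its `WANT-*` (or never sent one) is simply absent from the result, so no notification is sent to it.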
It sounds like the `this.bitswap` instance will handle this for us then, and we don't need to use a scoped-down `session.notify`.

Thanks for the explainer!
```diff
   }
 }

 function filterMultiaddrs (multiaddrs: Multiaddr[], allowInsecure: boolean, allowLocal: boolean): Multiaddr[] {
```
Allow for future customization with a config object?
```diff
-function filterMultiaddrs (multiaddrs: Multiaddr[], allowInsecure: boolean, allowLocal: boolean): Multiaddr[] {
+function filterMultiaddrs (multiaddrs: Multiaddr[], { allowInsecure, allowLocal }: { allowInsecure: boolean, allowLocal: boolean }): Multiaddr[] {
```
🤷 I usually think if the final arg is an object, then it's an options object, and all properties should be optional.
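A sketch of that convention applied to the suggestion, with both properties optional and defaulted. `Multiaddr` is simplified to a plain string here and the filtering rules are illustrative only, not the PR's actual logic:

```typescript
interface FilterOptions {
  allowInsecure?: boolean
  allowLocal?: boolean
}

// Options-object style: all properties optional, defaults applied
// inside the function, so callers can pass {} or omit the argument.
function filterMultiaddrs (multiaddrs: string[], options: FilterOptions = {}): string[] {
  const allowInsecure = options.allowInsecure ?? false
  const allowLocal = options.allowLocal ?? false

  return multiaddrs.filter(ma => {
    // illustrative rule: treat addrs without a /tls component as insecure
    if (!allowInsecure && !ma.includes('/tls')) {
      return false
    }
    // illustrative rule: treat loopback addrs as local
    if (!allowLocal && ma.includes('/127.0.0.1/')) {
      return false
    }
    return true
  })
}
```

Compared with positional booleans, call sites read unambiguously (`filterMultiaddrs(addrs, { allowLocal: true })`) and new flags can be added without breaking existing callers.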
`packages/bitswap/src/session.ts` (outdated)
```ts
// increase max providers so we can find another more suitable peer
this.maxProviders++
```
We should provide a mechanism to cap how much this can increase. If a user customizes `maxProviders` and we increase it based on failed queries to certain providers, we're not respecting that config.

Maybe we can increase a `validProviders` count and leave `maxProviders` at the provided config value?
The reason this increases is to allow adding extra session peers after the existing peers prove bad or unreliable.
When we add new peers we query the routing for new providers - if this is deterministic we'd get the same set of providers back, so we need a method to exclude known bad peers, which is why this sets a "failed" flag on the peer and allows the session size to increase.
This probably isn't very scalable since we end up storing an unbounded list of bad peers. I've refactored the code to use a bloom filter to exclude bad peers, this will only ever occupy the memory used by the filter, which is fixed based on the number of hashes it is expected to contain.
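A minimal Bloom filter along those lines might look like this. This is an illustrative sketch with a toy hash function, not the implementation the PR uses; the point is that membership queries over an unbounded set of bad peers cost only a fixed-size bit array, at the price of occasional false positives (but never false negatives):

```typescript
class BloomFilter {
  private readonly bits: Uint8Array
  private readonly size: number

  constructor (sizeInBits: number, private readonly hashCount: number) {
    this.size = sizeInBits
    this.bits = new Uint8Array(Math.ceil(sizeInBits / 8))
  }

  // simple FNV-1a style hash, seeded per hash function
  private hash (value: string, seed: number): number {
    let h = 2166136261 ^ seed
    for (let i = 0; i < value.length; i++) {
      h ^= value.charCodeAt(i)
      h = Math.imul(h, 16777619)
    }
    return (h >>> 0) % this.size
  }

  add (value: string): void {
    for (let i = 0; i < this.hashCount; i++) {
      const bit = this.hash(value, i)
      this.bits[bit >> 3] |= 1 << (bit & 7)
    }
  }

  has (value: string): boolean {
    for (let i = 0; i < this.hashCount; i++) {
      const bit = this.hash(value, i)
      if ((this.bits[bit >> 3] & (1 << (bit & 7))) === 0) {
        return false
      }
    }
    return true
  }
}
```

For excluding bad peers a false positive just means we skip a provider we could have used, which is a safe failure mode.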
```ts
if (options.signal?.aborted === true) {
  return
}
```
```diff
 if (options.signal?.aborted === true) {
+  // not throwing the aborted signal because xxxxx
   return
 }
```
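For contrast, the two ways of reacting to an aborted signal (returning quietly, as the code above does, versus throwing so callers see the cancellation) can be sketched as follows; the helper names are hypothetical:

```typescript
// Quiet variant: signal to the caller that work should stop,
// without raising an error.
function shouldStop (signal?: AbortSignal): boolean {
  return signal?.aborted === true
}

// Throwing variant: surface the cancellation so callers can
// distinguish "aborted" from "completed with nothing to do".
function throwIfAborted (signal?: AbortSignal): void {
  if (signal?.aborted === true) {
    throw new Error('operation aborted')
  }
}
```

The quiet variant is appropriate when abort means "the result is no longer needed", but it deserves a comment (as the suggestion asks) because silently swallowing an abort can hide bugs in callers that expect an error.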
Co-authored-by: Russell Dempsey <1173416+SgtPooki@users.noreply.github.com>
Change checklist