gossipsub v1.1: prune peer exchange #234
Conversation
Summoning @Stebalien @raulk @Kubuxu; this is ready for review.
initial quick pass
Looks pretty good to me, left a few comments to address.
Also, noting for posterity that this PR should resolve libp2p/specs#215 🎉
@@ -239,6 +264,90 @@ func (gs *GossipSubRouter) handlePrune(p peer.ID, ctl *pb.ControlMessage) {
gs.tracer.Prune(p, topic)
delete(peers, p)
gs.untagPeer(p, topic)
gs.addBackoff(p, topic)
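For context on the line above, here is a standalone sketch of the backoff bookkeeping that `addBackoff` implies, assuming a topic -> peer -> expiry map and the `GossipSubPruneBackoff` constant that appears later in this PR; the type and helper names are illustrative, not the PR's actual code.

```go
// Standalone sketch of the backoff bookkeeping implied by addBackoff;
// the real router keeps an equivalent map directly on GossipSubRouter.
package pubsubsketch

import (
	"time"

	"github.com/libp2p/go-libp2p-core/peer"
)

type backoffTracker struct {
	backoff map[string]map[peer.ID]time.Time // topic -> peer -> expiry
	d       time.Duration                    // e.g. GossipSubPruneBackoff
}

func newBackoffTracker(d time.Duration) *backoffTracker {
	return &backoffTracker{
		backoff: make(map[string]map[peer.ID]time.Time),
		d:       d,
	}
}

// addBackoff records that p was just pruned from topic.
func (b *backoffTracker) addBackoff(p peer.ID, topic string) {
	peers, ok := b.backoff[topic]
	if !ok {
		peers = make(map[peer.ID]time.Time)
		b.backoff[topic] = peers
	}
	peers[p] = time.Now().Add(b.d)
}

// inBackoff reports whether p is still backed off for topic
// (e.g. so the heartbeat does not immediately re-GRAFT it).
func (b *backoffTracker) inBackoff(p peer.ID, topic string) bool {
	peers, ok := b.backoff[topic]
	if !ok {
		return false
	}
	expire, ok := peers[p]
	return ok && time.Now().Before(expire)
}
```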
👍 great!
However, we're not currently doing anything about a peer continuously chasing away other peers by sending GRAFTs. If during a GRAFT we could check whether we're already over the limit and send back a PRUNE, that would largely resolve this case.
We don't want to do that because we will reject all new peers and they may not be able to form a connected mesh.
That's why the router accepts the peer and resolves the overflow (with randomization) during the heartbeat.
Just to be clear, sending a PRUNE if we are over the limit is the wrong thing to do, because then the mesh can become full and fail to accept new peers.
(I considered it when designing the algorithm)
I get that it used to be that way, but is that still true now that we have peer exchange?
If I send back a PRUNE only when I have more than D_hi peers, and I prune (D_hi - D) peers and tell them about each other, won't they always be able to join the mesh?
They probably will be able to do that, but I'd rather reshuffle the mesh in its entirety instead of rejecting new GRAFTs.
Think about a fully connected mesh where all peers are at D_hi (unlikely as it is) -- it won't accept new peers then, while GRAFTing and reshuffling will resolve the situation.
Ah, so IIUC this is really a timing problem, right? Because if B is connected to C and then A tries to connect to B, it's possible that, when B PRUNEs A and C, A will send a GRAFT to C before C receives the PRUNE from B.
My concern with the current approach is that if A wants to GRAFT to B, it can always keep spamming B and eventually it will get through, even though B has given A plenty of other peers to connect to. This is annoying, since if a peer is "better" in some way (e.g. they are the primary publisher) then nodes might be selfish; however, it's certainly not a deal-breaker and wasn't even resolvable until the peer exchange changes.
Since fixing this would likely require some protocol thought and is less important than the rest of the episub work, seems reasonable to push this further down the road. Would you like me to create an issue?
It could be an attack -- I don't think it can happen naturally.
Sure, we can create an issue to discuss it further, instead of it being lost in oblivion after the PR merges.
There's a recent win here! The decision of which peers to retain and which to eject is now made by evaluating the peer's score! 🎉
LGTM
Mostly LGTM but I want to make sure we don't introduce a DoS vector.
gs.addBackoff(p, topic)
px := prune.GetPeers()
if len(px) > 0 {
gs.pxConnect(px)
I have two concerns:
1. What if a single peer sends a bunch of PRUNEs? Could they cause us to launch a ton of goroutines (and crash)?
2. Can't a peer use this to trick us into connecting to a ton of (potentially non-useful) peers? We should only try connecting to the number of peers we actually need (one?).
Ideally, we'd stick these peers in a set of known peers for the topic, connecting to them as-needed whenever we're pruned or we disconnect from a peer.
1. We must already be grafted; we ignore PRUNEs that don't correspond to a peer in our mesh. So a peer can't make us launch an arbitrary number of goroutines by sending us PRUNEs; at most it can make us launch a single goroutine for each topic in whose mesh it participates with us. Not much of a vector, I think.
2. We could get a (legitimate) PRUNE listing an arbitrary number of useless peers; we could limit the number of peers we connect to.
For 2, I added a check that limits the number of connections to at most GossipSubPrunePeers, so this should address the concern.
We do want more than one peer in general, to expand our pool of potential peers as much as possible.
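As an illustration, that cap could be a small standalone helper that shuffles the PX candidates and keeps at most GossipSubPrunePeers of them before handing them to the connect pipeline; the shuffle-before-truncate choice and the helper name are assumptions, not necessarily what the PR does.

```go
// Illustrative helper: bound how many PX-supplied peers we will try to
// dial for a single PRUNE. Shuffling first avoids always favouring the
// peers listed at the front of the PRUNE.
package pubsubsketch

import (
	"math/rand"

	"github.com/libp2p/go-libp2p-core/peer"
)

func selectPXCandidates(candidates []peer.ID, limit int) []peer.ID {
	if len(candidates) <= limit {
		return candidates
	}
	out := make([]peer.ID, len(candidates))
	copy(out, candidates)
	rand.Shuffle(len(out), func(i, j int) {
		out[i], out[j] = out[j], out[i]
	})
	return out[:limit]
}
```

Something like selectPXCandidates(px, GossipSubPrunePeers) would then be the set actually handed to the connectors.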
Got it. SGTM.
Ok, I slept on it and there is a DoS vector: a malicious peer could send us GRAFT/PRUNE in sequence and cause us to spawn a goroutine; it could simply sit there sending GRAFT/PRUNE ad infinitum, causing us to spawn a goroutine for each pair.
Granted, the effect is limited by how many goroutines fit inside the 30s connection timeout window, but it's still nasty.
I will rewrite the connection logic to use a limited set of pending connections and goroutine-free scheduling.
Implemented a connection scheduler, which also limits the maximum number of pending connections.
@@ -441,16 +553,21 @@ func (gs *GossipSubRouter) heartbeat() {
tograft := make(map[peer.ID][]string)
toprune := make(map[peer.ID][]string)

// clean up expired backoffs
gs.clearBackoff()
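Continuing the backoffTracker sketch from earlier, the clearBackoff sweep being discussed here can be a plain linear walk over every topic and every backed-off peer, which is also why the reviewers flag its cost below (illustrative, not the PR's code).

```go
// Continuation of the backoffTracker sketch above: clearBackoff walks
// every topic and every backed-off peer, dropping expired entries.
// The PR calls this once per heartbeat.
func (b *backoffTracker) clearBackoff() {
	now := time.Now()
	for topic, peers := range b.backoff {
		for p, expire := range peers {
			if now.After(expire) {
				delete(peers, p)
			}
		}
		if len(peers) == 0 {
			delete(b.backoff, topic)
		}
	}
}
```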
This is probably fine, but we should monitor this.
That is, walking through each of these every heartbeat should be fine, but could get expensive in some edge cases.
We could do it every few heartbeats instead of every heartbeat.
Every heartbeat is probably fine, I'm just calling it out so we keep it in mind.
A way more efficient approach would be a ring buffer of peer slices, with nslots = BackoffPeriod/HeartbeatInterval. Then you track a tick count, incremented by 1 every heartbeat, and when clearing the backoff you simply do currentTick mod nslots and clear that slot. That's pretty efficient. The backoff data would turn into map[string][][]peer.ID.
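A rough sketch of that suggestion, assuming BackoffPeriod divides evenly by HeartbeatInterval; all names besides the map type are illustrative.

```go
// Illustrative ring-buffer backoff: slot i holds the peers whose backoff
// started at a tick t with t % nslots == i, so clearing on each heartbeat
// is O(1) per topic. The trade-off (raised further down in the thread) is
// that checking whether a specific peer is backed off means scanning the
// whole ring.
package pubsubsketch

import (
	"time"

	"github.com/libp2p/go-libp2p-core/peer"
)

type ringBackoff struct {
	nslots  int
	tick    uint64
	backoff map[string][][]peer.ID // topic -> slot -> peers
}

func newRingBackoff(backoffPeriod, heartbeatInterval time.Duration) *ringBackoff {
	return &ringBackoff{
		nslots:  int(backoffPeriod / heartbeatInterval),
		backoff: make(map[string][][]peer.ID),
	}
}

// addBackoff records p in the slot for the current tick; the entry is
// cleared nslots heartbeats (roughly BackoffPeriod) later.
func (r *ringBackoff) addBackoff(p peer.ID, topic string) {
	slots, ok := r.backoff[topic]
	if !ok {
		slots = make([][]peer.ID, r.nslots)
		r.backoff[topic] = slots
	}
	i := int(r.tick % uint64(r.nslots))
	slots[i] = append(slots[i], p)
}

// heartbeat advances the tick and clears the slot whose entries have just
// expired (currentTick mod nslots), exactly as described above.
func (r *ringBackoff) heartbeat() {
	r.tick++
	i := int(r.tick % uint64(r.nslots))
	for _, slots := range r.backoff {
		slots[i] = nil
	}
}
```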
This works, but can't we just have a buffered channel (selecting with default when writing to it)? Technically, this works slightly better (it deduplicates, etc.), but I'm not sure the complexity is worth it.
Actually you are right; we don't need this complexity, we can just use a sufficient buffer in the connect channel.
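A minimal sketch of the simplified scheme agreed on here: a buffered connect channel drained by a fixed pool of connector goroutines, with a non-blocking send so a flood of PRUNEs can at worst fill the buffer rather than spawn unbounded goroutines. The connector count mirrors GossipSubConnectors (discussed below); the queue size, request type, and helper names are assumptions.

```go
// Illustrative sketch of a bounded connect pipeline for PX candidates.
package pubsubsketch

import (
	"context"
	"log"
	"time"

	"github.com/libp2p/go-libp2p-core/host"
	"github.com/libp2p/go-libp2p-core/peer"
)

type connectReq struct {
	p peer.ID // the real PR may also carry addresses learned via PX
}

// startConnectors spins up a bounded number of dialer goroutines and
// returns the producer side of the queue.
func startConnectors(ctx context.Context, h host.Host, connectors, queueSize int) chan<- connectReq {
	ch := make(chan connectReq, queueSize)
	for i := 0; i < connectors; i++ {
		go func() {
			for {
				select {
				case req := <-ch:
					// assumes addresses for req.p are already in the peerstore
					cctx, cancel := context.WithTimeout(ctx, 30*time.Second)
					if err := h.Connect(cctx, peer.AddrInfo{ID: req.p}); err != nil {
						log.Printf("px connect to %s failed: %s", req.p, err)
					}
					cancel()
				case <-ctx.Done():
					return
				}
			}
		}()
	}
	return ch
}

// enqueueConnect is the non-blocking producer side: if the queue is full
// we drop the candidate instead of blocking or spawning a goroutine.
func enqueueConnect(ch chan<- connectReq, p peer.ID) {
	select {
	case ch <- connectReq{p: p}:
	default:
	}
}
```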
This looks pretty great. Approving to avoid stalling any longer, but I'd really appreciate these two items being addressed before merging (see comments):
- Backoff data structure for more efficient clearing.
- Buffered channel for connect attempts, and fewer connector goroutines.
Thanks, @vyzo!
We need to enhance the specs ASAP.
p := peer.ID(pi.PeerID)

_, connected := gs.peers[p]
if connected {
So we'll connect to PEX peers, but we'll wait until the next heartbeat to rebalance the mesh. That's why we can safely skip topic members that we're already connected to: we'll consider them in the next heartbeat anyway (as long as we remain connected to them). I think that's correct.
yup, do you want a comment here?
Ideally ;-)
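For the record, the requested comment might read roughly like this (same check as the excerpt above; the surrounding loop and the continue are assumed from context):

```go
p := peer.ID(pi.PeerID)

// If we are already connected to this peer, skip it: the mesh is only
// rebalanced at the next heartbeat, and that heartbeat will consider the
// peer anyway, as long as we remain connected to it.
_, connected := gs.peers[p]
if connected {
	continue
}
```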
GossipSubPruneBackoff = time.Minute

// number of active connection attempts for peers obtained through px
GossipSubConnectors = 16
Isn't this too much? 16 connection attempts at once, i.e. we'll try to connect to all peers recommended by the pruning peer (as this value is equal to GossipSubPrunePeers).
I'd prefer if connect were a buffered channel, and we had fewer concurrent goroutines. Right now it's a bit hit or miss: e.g. if we get pruned from two topics at once, the connector goroutines will be saturated with the first batch, and all peers from the second batch will be dropped.
It is a buffered channel! But yes, we can certainly lower it from 16; how about 8 or 4?
The max inflight dials FD limit is 160 by default. Assuming each peer has an average of 3 addresses, 16 connectors could use up to 30% of our FD allowance. I'd scale this down to 8 by default, but add an option so the app can increase/decrease it (it may also touch the swarm limits!).
Also, we will need some fairness heuristic here. If we're pruned from two or three topics at once, the first topic will get all slots, and the other two will be queued. Instead, we might want to balance peers from topics 1, 2, and 3 for quicker healing. Some form of priority queue could work well.
the hardening branch reduces this to 8.
We need to merge the pending PRs in go-libp2p-core and go-libp2p-peerstore before we merge this one. Those commit hashes in go.mod are nasty.
Re: the backoff data structure, I am not entirely convinced the circular buffer is more efficient, as we'd have to iterate through the entire ring to find whether a peer is being backed off.
rebased on master
alright, ready to merge.
Adds support for peer exchange on prune; see #233
Depends on:
TBD: