Add Discovery #184
Conversation
Let's move the discovery logic to its own component/file.
Just to clarify the change request: I would like the discovery logic in its own component instead of piling up on the pubsub struct (and file).
Seems reasonable; however, it likely makes fixing #184 (comment) even more difficult (i.e. without looking like a mess of private channels between the PubSub and discover structs). You ok if we punt on that for now and just re-add the waitgroup?
It's great progress to see the discovery stuff in its own module, but I am totally confused about this new peer notification stuff you are adding.
That looks completely redundant, as we already have logic to handle this with connection notifications.
This is needed in order to solve #184 (comment). In particular, if we initialize pubsub and the first thing we do is Publish to a topic, then we'll start bootstrapping that topic before we actually send the message. If we send the message too soon it'll just be dropped, since there won't be anyone in the network (or a large enough portion of the network) to hear it. We used to just wait a little bit (250ms) between the end of
The pubsub mainloop should have no details of discovery leaking into it.
This is almost ready to merge, I would say; let's get another pass from @Stebalien as well.
```go
}

select {
case <-disc.done:
```
We're going to spawn a bunch of goroutines. We really do need to listen on peer join/leave events to determine when we're done bootstrapping. At the moment:
- This is going to fire once the DHT query completes, regardless of whether or not we're done connecting to peers.
- We're going to hit that 100ms sleep.
- We're then going to restart the loop, see that we aren't "ready" because we haven't finished connecting yet.
- We're going to spawn another discovery process.
- We're going to spawn a bunch of goroutines to connect to the same peers.
The right way to do this is:
- When we want more peers, signal that we want to discover more peers.
- When we don't need any more peers, signal that we're happy with the number of peers we have.
Note: We can probably also find a way to hack in a fix but, at the very least, we can't spawn a bunch of goroutines connecting to the same peers.
Emitting an "I don't need more peers" signal is likely much more invasive than just checking from the outside. However, if we're certain that the "how many peers" function we care about is static per PubSub instance, then we can use the PeerJoin events instead of time.Sleep(100ms)
(although if the number of peers we wait on can change over time, this won't work).
> We're going to spawn another discovery process.

This won't be very expensive, since it should be backed by a backoff cache of some sort anyhow.

> We're going to spawn a bunch of goroutines to connect to the same peers.

I'm fine replacing this with storing a local cache of queried peers. I only left it this way because I thought, when we talked about this previously, that you didn't want to store that state locally.
We discussed this on a call. It's going to be fine as-is as long as we remember which peers we're currently dialing and/or have recently dialed.
🚀
This looks good; just a minor bug and an open question about the MinTopicSize-returned function potentially being called outside the event loop.
The history is nasty enough that we should probably rebase and squash (or at least rebase to get rid of the merges).
Added PubSub.Join(topic) that returns a Topic object. This object can be Subscribed and Published to as well as to get a TopicEventHandler for topic events. This means that the Subscription object will no longer handle PeerEvents and that PubSub's Publish and Subscribe functions are deprecated.
When the WithDiscovery option is passed to PubSub then PubSub will be able to search for more peers that are interested in our topics. This includes the ability for Publishes (via Topic.Publish()) to block until the router is ready to publish. When a router is ready is currently defined by a combination of a user defined MinTopicSize function (passed into topic.Publish via the WithReadiness publish option) and the properties of the pubsub router used. The discovery tests show example usage.
🚀
This adds bootstrapping and the ability to rediscover peers to the PubSub system, as described in libp2p/go-libp2p-pubsub-router#28.
Once we go forward with this, we can modify go-libp2p-pubsub-router to use an arbitrary discovery mechanism instead of the built-in content-routing-based discovery. We will likely also need to implement a number of encapsulating discovery wrappers (e.g. tiered, parallel, etc.) like we have for content routing.
@raulk @vyzo @Stebalien