
[Low prio] Add Watch() method to watch pubsub updates #3

Open · wants to merge 1 commit into master
Conversation

@hsanjuan (Contributor)

Playing around here a bit: I would like a tool I can use to watch IPNS-pubsub updates without having to run ipfs, just by running a libp2p node connected to the swarm. This seemed like it would help.

This method returns a channel on which all pubsub updates for a key are sent.

Channels are closed when Cancel() is called on a key.

Updates are sent to all channels which have registered for a key (one per call
to Watch()).

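A minimal usage sketch of the proposed API (PubsubValueStore and the Cancel() behavior are as described above; the exact signatures are assumptions, and "fmt" is assumed imported):

	// watchKey drains updates for key until Cancel() closes the channel.
	func watchKey(p *PubsubValueStore, key string) {
		updates := p.Watch(key) // one channel per Watch() call

		for data := range updates { // closed when Cancel() is called on the key
			fmt.Printf("update for %s: %d bytes\n", key, len(data))
		}
	}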
@hsanjuan self-assigned this Jun 26, 2018
@hsanjuan requested a review from Stebalien on June 26, 2018 18:55
@hsanjuan changed the title from "[Low prioAdd Watch() method to watch pubsub updates" to "[Low prio] Add Watch() method to watch pubsub updates" on Jun 26, 2018
	watchChannels.mux.RLock()
	p.watchMux.RUnlock()

	defer watchChannels.mux.RUnlock()


I could be wrong, but is there any difference between deferring the RUnlock here as opposed to right under the call to RLock? It would be clearer if the defer came right after the lock. For example:

	watchChannels.mux.RLock()
	defer watchChannels.mux.RUnlock()

	p.watchMux.RUnlock()
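(For the record, a deferred call runs when the surrounding function returns, not where the defer statement appears, so moving it up changes nothing about when the lock is released. A toy illustration:)

	func main() {
		defer fmt.Println("released") // registered first, but runs at return
		fmt.Println("critical section")
		// prints: critical section, then released
	}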

@@ -138,6 +147,23 @@ func (p *PubsubValueStore) Subscribe(key string) error {
return nil
}

func (p *PubsubValueStore) Watch(key string) <-chan []byte {


godoc?
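(Something along the lines of the commit message would do; a sketch:)

	// Watch returns a channel on which all pubsub updates for the given
	// key are sent. One channel is registered per call to Watch(); every
	// channel registered for a key is closed when Cancel() is called on it.
	func (p *PubsubValueStore) Watch(key string) <-chan []byte {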

@Stebalien (Member)

This looks related to:

https://github.com/libp2p/go-libp2p-routing/blob/29f1089de6a0826ecf62d20cddd8bb0930d54fb2/routing.go#L54-L62

That's my attempt at a value-store independent version of this. I started implementing it for the DHT but decided to punt as we had more pressing issues.

  1. What do you think of the interface?
  2. Would that interface make sense here?
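(For context, the linked lines sketch a SearchValue-style method on the routing ValueStore interface; paraphrased from memory here, so treat the names and doc text as an approximation:)

	// SearchValue searches for better and better values for the given
	// key, sending each improvement on the returned channel until the
	// context is cancelled.
	SearchValue(ctx context.Context, key string, opts ...ropts.Option) (<-chan []byte, error)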

	defer watchChannels.mux.RUnlock()
	for _, ch := range watchChannels.channels {
		select {
		case ch <- data:

I'd consider:

  1. Adding a buffer size of 1.
  2. Replacing the buffered value in this case.

That is,

	select {
	case ch <- data:
	case <-ch:
		ch <- data
	}

That way, the user always gets the latest value.
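(Put together, a sketch of the notify path with a 1-buffered channel and the drop-oldest select; notify and the variable names are illustrative, not this PR's code. With a single notifying goroutine per key this never blocks; with several concurrent notifiers the replacement send could still briefly block:)

	// notify delivers data to a subscriber channel created with
	// make(chan []byte, 1), discarding a stale buffered value so a slow
	// reader always sees the most recent update.
	func notify(ch chan []byte, data []byte) {
		select {
		case ch <- data: // buffer had room
		case <-ch: // buffer full: drop the stale value...
			ch <- data // ...and replace it with the latest
		}
	}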

@hsanjuan (Contributor, Author)

What do you think of the interface? Would that interface make sense here?

Well yeah, this is basically an implementation of that interface. However, it seems odd that the interface gives no way to control subscriptions, so a GetValue() leaves a goroutine running that can't be removed. The same goes for SearchValue() (unless we cancel the subscription on context cancellation, but that might interfere with later GetValue() calls).

Also, the options seem unused, but we'd have to think about what happens when GetValue is called with some options and SearchValue with different ones (do we keep separate subscriptions for each, or just filter?).

All that said, the interface makes sense for what it is (a KeyValueStore user should not have to worry about how things work underneath, whether that's subscribing or something else). And we can always adopt the interface signature for SearchValue now and improve later.
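(Adopting that signature on top of Watch() could look roughly like this; the context-driven teardown is exactly the open question above, so this is a sketch under the assumption that cancelling only stops forwarding, not the underlying subscription. Assumes "context" is imported:)

	func (p *PubsubValueStore) SearchValue(ctx context.Context, key string) (<-chan []byte, error) {
		out := make(chan []byte, 1)
		updates := p.Watch(key)
		go func() {
			defer close(out)
			for {
				select {
				case data, ok := <-updates:
					if !ok { // Cancel(key) closed the watch channel
						return
					}
					select {
					case out <- data:
					case <-ctx.Done():
						return
					}
				case <-ctx.Done(): // stop forwarding; subscription stays alive
					return
				}
			}
		}()
		return out, nil
	}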

@magik6k mentioned this pull request Aug 31, 2018