feat: set provider manager options #593
Conversation
```diff
@@ -56,22 +86,23 @@ type getProv struct {
 }

 // NewProviderManager constructor
-func NewProviderManager(ctx context.Context, local peer.ID, dstore ds.Batching) *ProviderManager {
+func NewProviderManager(ctx context.Context, local peer.ID, dstore ds.Batching, opts ...Option) (*ProviderManager, error) {
 	pm := new(ProviderManager)
 	pm.getprovs = make(chan *getProv)
 	pm.newprovs = make(chan *addProv)
 	pm.dstore = autobatch.NewAutoBatching(dstore, batchBufferSize)
 	cache, err := lru.NewLRU(lruCacheSize, nil)
```
hydra is probably the only case where there's a worry about this being expensive, so if this optimization doesn't seem important there, it may not be worth the complexity.
consider checking if the cache is unset after applying options, and only creating the fallback/default one if it hasn't.
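A minimal sketch of that suggestion, assuming the functional-options shape implied by the diff above; the `Option` signature, the `cache` field, and the trimmed-down constructor are illustrative, not the actual PR code:

```go
package providers

import (
	lru "github.com/hashicorp/golang-lru/simplelru"
)

const lruCacheSize = 256 // assumed default, just for this sketch

// Option mutates a ProviderManager during construction; the exact
// signature in the PR may differ.
type Option func(*ProviderManager) error

// ProviderManager is trimmed down to the one field this sketch needs.
type ProviderManager struct {
	cache *lru.LRU
}

// Cache lets the caller supply their own cache implementation.
func Cache(c *lru.LRU) Option {
	return func(pm *ProviderManager) error {
		pm.cache = c
		return nil
	}
}

// NewProviderManager applies caller options first, then creates the
// default LRU cache only if no option has already set one.
func NewProviderManager(opts ...Option) (*ProviderManager, error) {
	pm := new(ProviderManager)
	for _, opt := range opts {
		if err := opt(pm); err != nil {
			return nil, err
		}
	}
	if pm.cache == nil {
		// No Cache option was supplied: fall back to the default, so callers
		// with a custom cache never pay for constructing this one.
		cache, err := lru.NewLRU(lruCacheSize, nil)
		if err != nil {
			return nil, err
		}
		pm.cache = cache
	}
	return pm, nil
}
```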
Any reason not to use a single shared provider record instead of constructing one per DHT?
Do you mean a single provider manager? I did originally try that; please read the PR description for why not.
I did not read it... you're right, this seems like the right way to go.
This adds a new flag and env var to allow provider record GC to be disabled: `-disable-prov-gc` / `HYDRA_DISABLE_PROV_GC`. If GC is _enabled_, it automatically disables provider record GC for all Hydra heads other than the first. It also renames the `Relay` option to `EnableRelay` to better communicate that it's a boolean flag.

Depends on:

* [x] libp2p/go-libp2p-kad-dht#593
* [x] sysulq/golang-lru#6

fixes #74
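Not the hydra-booster code, just a hedged sketch of the mechanism described: a flag/env-var pair like `-disable-prov-gc` / `HYDRA_DISABLE_PROV_GC` can be wired so the environment supplies the default and the CLI flag overrides it:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	// The environment variable supplies the default; the CLI flag overrides it.
	// Any non-empty value for HYDRA_DISABLE_PROV_GC counts as "set" here,
	// which may be looser than the real implementation.
	disableProvGC := flag.Bool(
		"disable-prov-gc",
		os.Getenv("HYDRA_DISABLE_PROV_GC") != "",
		"disable provider record garbage collection",
	)
	flag.Parse()
	fmt.Println("provider record GC disabled:", *disableProvGC)
}
```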
Summary
This PR enables the provider manager to accept options and exposes the `cleanupInterval` and `cache` (cache implementation) as options that can be set. This also adds `providersOptions []providers.Option` as a main option to the DHT.

Context
Currently all Hydra heads share a datastore and they all perform GC on that same datastore. We really only need one Hydra head to perform GC. Exposing the `cleanupInterval` option allows us to disable GC on all other heads (by setting it really high).

Originally I opened #591 to expose the `ProviderManager` as a constructor option, the idea being to construct a single provider manager instance and pass it to all DHTs. However, this would have only solved the GC problem for a single Hydra, i.e. we wouldn't be able to stop multiple Hydras sharing the same datastore from GC-ing, and additionally, all heads for a single Hydra would share the same cache. Since Hydra Peer IDs are balanced, the overlap between the caches should be minimal and we'd end up with a thrashy cache not tuned for a particular peer ID.

The cache implementation is exposed as an option because Hydra heads that are not performing GC still need a way to invalidate their caches (the cache is cleared only when GC begins). An LRU cache with expiry will be passed instead (maybe this one: https://github.com/hnlq715/golang-lru).
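To make the above concrete, a hedged usage sketch: only the first head keeps the default GC schedule, while every other head sets an enormous `cleanupInterval`. The option names (`dht.ProvidersOptions`, `providers.CleanupInterval`) follow this PR's description but should be treated as assumptions here:

```go
package hydra

import (
	"time"

	dht "github.com/libp2p/go-libp2p-kad-dht"
	"github.com/libp2p/go-libp2p-kad-dht/providers"
)

// headOpts returns DHT options for the i-th Hydra head: head 0 performs GC
// on the shared datastore, all others effectively disable it by pushing the
// cleanup interval far into the future.
func headOpts(i int) []dht.Option {
	if i == 0 {
		return nil // default cleanupInterval: this head does the GC
	}
	return []dht.Option{
		dht.ProvidersOptions([]providers.Option{
			// An absurdly large interval effectively disables GC on this head.
			providers.CleanupInterval(100 * 365 * 24 * time.Hour),
		}),
	}
}
```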
`NewProviderManager` now returns an error instead of panicking.