peerstore: memoryAddrBook GC interval is hard-coded #1698

Closed
lthibault opened this issue Apr 22, 2020 · 0 comments · Fixed by #2960

@lthibault (Contributor) commented:

I'm writing an application that uses very small TTLs in order to quickly detect unavailable peers. The TTL values are on the order of seconds.

Currently, memoryAddrBook waits 1h between GC cycles, which is unacceptably long. A naive solution would be to make the hard-coded value configurable, but periodic polling is likely to be very costly at low interval values. I think we should use a slightly more sophisticated method.

I've evaluated a few different approaches, which fall into two rough categories:

  1. Remove the need for periodic polling (Approach 1)
  2. Make periodic polling very cheap (Approaches 2 -- 4)

I'm wondering whether any of these are good fits for memoryAddrBook, and I'm happy to contribute a patch if there's any interest.

Approach 1: Let the Go runtime handle timeouts

The Go runtime is surprisingly smart: it maintains a heap of runtime.timer instances, min-ordered by expiration time, and sleeps until the first timer fires. I'm curious to know whether this approach was ever attempted, and if so, whether it was benchmarked. In my experience, deferring to the Go runtime in such matters is almost always the best solution.

Here's a rough sketch of what this would look like. Note that my example doesn't make use of sharding (as with addrSegments), though there is nothing preventing us from doing so. Note also how timers are reused.

// imports assume the current go-libp2p monorepo layout
import (
    "sync"
    "time"

    "github.com/libp2p/go-libp2p/core/peer"
    "github.com/multiformats/go-multiaddr"
)

type addrSet struct {
    sync.Mutex
    as map[peer.ID]*entry
}

type entry struct {
    set *addrSet

    timerLock sync.Mutex
    timer     *time.Timer

    key   peer.ID
    addrs map[string]multiaddr.Multiaddr
}

Any time a peer's addresses or TTLs are updated, you need to run entry.setExpire:

func (e *entry) setExpire(d time.Duration) {
    e.timerLock.Lock()
    defer e.timerLock.Unlock()

    // if this entry was just created, there won't be a TTL job yet
    if e.timer != nil {
        // cancel the existing TTL expiration job
        if !e.timer.Stop() {
            // The timer fired right before this function was called, so we need to
            // re-add the entry. Assumes setExpire's caller holds the addrSet lock.
            e.set.as[e.key] = e
        }
        // reuse the existing timer rather than allocating a new one
        e.timer.Reset(d)
        return
    }

    e.timer = time.AfterFunc(d, e.expire)
}

func (e *entry) expire() {
    e.set.Lock()
    defer e.set.Unlock()
    delete(e.set.as, e.key)
}

There are some low-hanging optimizations we could make here; the above is just meant to paint a "big picture".

The potential drawback I see is that the runtime's timer heap is protected by a mutex. At some scale, I'd expect contention to become an issue, and there's little we can do about it. That said, I'm not sure what that inflection point actually is, and in practice, I've never encountered it.

Approach 2: Heap-based expiration

As mentioned above, we can continue to run a periodic GC task, and focus on making it cheap enough to run often. A 1h interval probably makes sense for most applications, but it would ideally be feasible to lower the interval to the order of seconds (or even tens of milliseconds).

This approach would involve maintaining a heap alongside the map of expiring addrs. The GC loop would then proceed to do the following on each iteration:

func runGC(now time.Time) {
    // pop entries until the earliest deadline lies in the future
    for expiring.Len() > 0 && expiring.Peek().Deadline.Before(now) {
        deleteAddrsFromAddrBook(expiring.Pop().ID)
    }
}
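
For concreteness, here is a minimal sketch of the expiring heap used above, built on the standard container/heap package. The expEntry, expQueue, and expHeap names are hypothetical, not anything in the peerstore today:

package addrgc

import (
    "container/heap"
    "time"

    "github.com/libp2p/go-libp2p/core/peer"
)

// expEntry pairs a peer with its earliest address-expiration deadline.
type expEntry struct {
    ID       peer.ID
    Deadline time.Time
}

// expQueue implements heap.Interface as a min-heap ordered by Deadline.
type expQueue []expEntry

func (q expQueue) Len() int           { return len(q) }
func (q expQueue) Less(i, j int) bool { return q[i].Deadline.Before(q[j].Deadline) }
func (q expQueue) Swap(i, j int)      { q[i], q[j] = q[j], q[i] }

func (q *expQueue) Push(x interface{}) { *q = append(*q, x.(expEntry)) }

func (q *expQueue) Pop() interface{} {
    old := *q
    it := old[len(old)-1]
    *q = old[:len(old)-1]
    return it
}

// expHeap wraps expQueue with the typed helpers used in the GC loop above.
type expHeap struct{ q expQueue }

func (h *expHeap) Len() int        { return h.q.Len() }
func (h *expHeap) Peek() expEntry  { return h.q[0] }
func (h *expHeap) Push(e expEntry) { heap.Push(&h.q, e) }
func (h *expHeap) Pop() expEntry   { return heap.Pop(&h.q).(expEntry) }

var expiring expHeap

Whenever an address's TTL is set or extended, the corresponding entry would be pushed (or its deadline adjusted via heap.Fix), so each GC pass only touches entries that have actually expired.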

Performance improvements here come from avoiding full map iterations on each GC run. Mutex lock/unlock operations are on the order of tens of nanoseconds on modern hardware, so they are unlikely to be problematic. This solution is also amenable to sharding.

Approach 3: Treap-based expiration

We can use a similar approach to the one above, substituting a persistent treap + CAS for the map+heap. The trade-off here is twofold:

  • persistent treaps must allocate each time they are modified
  • although they perform well for uniformly-distributed weights (O(log n)), they degrade to O(n) in the worst case for non-random weights.

In practice, this is likely to be quite slow. I mention it only for completeness, and in case someone thinks of a clever optimization.
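
For illustration, here is a minimal sketch of the persistent-update-plus-CAS pattern. For brevity it uses a plain path-copying tree insert ordered by deadline; a real treap would also carry random priorities and rotate to stay balanced. All names here are hypothetical:

package addrgc

import (
    "sync/atomic"
    "time"

    "github.com/libp2p/go-libp2p/core/peer"
)

type node struct {
    id          peer.ID
    deadline    time.Time
    left, right *node
}

// insert returns a new root that shares all untouched subtrees with the old
// one; note the allocation at every step of the search path.
func insert(n *node, id peer.ID, deadline time.Time) *node {
    if n == nil {
        return &node{id: id, deadline: deadline}
    }
    cp := *n // copy-on-write: readers of the old version are never disturbed
    if deadline.Before(n.deadline) {
        cp.left = insert(n.left, id, deadline)
    } else {
        cp.right = insert(n.right, id, deadline)
    }
    return &cp
}

var root atomic.Pointer[node]

// add publishes a new version of the tree; on contention it retries against
// whichever version won the race.
func add(id peer.ID, deadline time.Time) {
    for {
        old := root.Load()
        if root.CompareAndSwap(old, insert(old, id, deadline)) {
            return
        }
    }
}

The per-modification allocations in insert are exactly the first trade-off listed above.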

Approach 4: Hierarchical-Timing-Wheel-based expiration

Instead of a heap, we can maintain one or more timing wheels. Inserting or updating an entry increments a counter and schedules its decrement some time in the future; when the counter reaches zero, we delete the entry from the map.

Hierarchical timing wheels (HTWs) have an amortized time complexity of O(1).
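
To make the mechanism concrete, here is a minimal single-level wheel sketch (hypothetical names throughout); a hierarchical design would stack several of these at coarser resolutions and cascade entries downward as their deadlines approach:

package addrgc

import "time"

// wheel is a single level of a timing wheel: a ring of slots, each holding
// the callbacks due in that tick.
type wheel struct {
    tick    time.Duration // resolution of one slot
    slots   [][]func()    // expiration callbacks, one bucket per slot
    current int           // slot index corresponding to "now"
}

func newWheel(tick time.Duration, numSlots int) *wheel {
    return &wheel{tick: tick, slots: make([][]func(), numSlots)}
}

// schedule registers fn to fire after d, rounded to the wheel's resolution.
// This sketch assumes d < tick*len(slots); a hierarchical wheel would place
// longer deadlines in a coarser wheel and cascade them down over time.
func (w *wheel) schedule(d time.Duration, fn func()) {
    offset := int(d / w.tick)
    if offset < 1 {
        offset = 1 // never fire in the current slot
    }
    idx := (w.current + offset) % len(w.slots)
    w.slots[idx] = append(w.slots[idx], fn)
}

// advance moves the wheel forward one tick and fires the new slot's
// callbacks. A real implementation would drive this from a time.Ticker and
// guard the slots with a mutex.
func (w *wheel) advance() {
    w.current = (w.current + 1) % len(w.slots)
    for _, fn := range w.slots[w.current] {
        fn()
    }
    w.slots[w.current] = nil
}

In the counter scheme above, schedule would enqueue the decrement of a peer's counter, and advance would run once per tick in place of the hourly GC sweep.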

@marten-seemann marten-seemann transferred this issue from libp2p/go-libp2p-peerstore Aug 19, 2022
@marten-seemann marten-seemann changed the title memoryAddrBook GC interval is hard-coded peerstore: memoryAddrBook GC interval is hard-coded Aug 19, 2022