Increase provider Multiaddress TTL #795
Conversation
Looks good to me, but I'm not very familiar with this code base.
@guseggert, can you have a look at this?
We keep the Multiaddresses of providing peers around for much longer: this means we'll return them alongside the provider records, likely making the second DHT lookup for the peer record obsolete. The assumption is that peers don't change network addresses often. For context: ipfs/kubo#9264
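To make the mechanism concrete, here is a minimal editorial sketch (not the PR's actual diff) of how a DHT node records a provider's addresses with a TTL. `peerstore.Peerstore.AddAddrs` is the real go-libp2p API; `recordProvider` is an illustrative name, and the actual knob this PR raises lives as `ProviderAddrTTL` in the DHT's providers package.

```go
package example

import (
	"time"

	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/peerstore"
)

// providerAddrTTL mirrors the knob this PR raises to 24h; the real
// definition is ProviderAddrTTL in the DHT's providers package.
const providerAddrTTL = 24 * time.Hour

// recordProvider (illustrative name) shows the effect of the change:
// when a node learns about a provider, it keeps the provider's
// multiaddrs in its peerstore for the full TTL. While the addresses
// are still cached, GET_PROVIDERS responses can include them alongside
// the provider record, sparing the requester a second DHT lookup
// (FindPeer) for the peer record.
func recordProvider(ps peerstore.Peerstore, prov peer.AddrInfo) {
	ps.AddAddrs(prov.ID, prov.Addrs, providerAddrTTL)
}
```

The trade-off discussed below is the flip side of the same cache: stale addresses linger for the full TTL if a peer moves.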
Branch updated from 75da96a to 362b64d.
What would be the estimated storage overhead in the peerstore?
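A rough editorial back-of-envelope, with every count assumed rather than measured: if a DHT server caches addresses for ~100,000 distinct provider peers, each with ~4 multiaddrs of ~50 bytes, that is roughly 100,000 × 4 × 50 B ≈ 20 MB of address payload, plus per-entry peerstore bookkeeping (expiry timestamps, map overhead). The overhead grows linearly with the number of distinct providers seen within the TTL window, so a much longer TTL can retain correspondingly more entries at any one time.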
@guseggert what's your take on this change? We just discussed the worst-case scenario of dialing down the hydras. If we changed the multi-address TTL to 24h (as in this PR), it would take longer for the network to pick up that the Hydras are dialable at a different location. I still think this change would be net-positive.
I can't reason about the impact of this change without data here. Do we have addr churn data?
Could we also flip this around and make it the responsibility of hydras to notify peers when the addr changes? E.g., by crawling the DHT at Hydra startup, which has the side effect of telling nodes about the new multiaddr (right?).
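A minimal sketch of that idea (editorial; it relies on the standard go-libp2p behavior that the Identify exchange run on connect shares our listen addresses with the remote peer, and `announceToPeers` is an illustrative name, not an existing API):

```go
package example

import (
	"context"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
)

// announceToPeers (illustrative) dials a set of known peers at startup.
// Establishing a connection triggers libp2p's Identify exchange, which
// carries our current listen addresses, so every successfully dialed
// peer learns the node's new multiaddrs as a side effect.
func announceToPeers(ctx context.Context, h host.Host, peers []peer.AddrInfo) {
	for _, p := range peers {
		// Best-effort: a failed dial just means that peer keeps
		// whatever (possibly stale) addresses it already had.
		_ = h.Connect(ctx, p)
	}
}
```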
There's ongoing work about exactly that in our measurement repo: probe-lab/network-measurements#22 We'll check back here with insights from that study 👍
I also need to double-check if that's actually happening, but theoretically that would work.
@cortze since RFM17.1 is done, could you give an update here? Do we have numbers on Multiaddress churn?
Based on the data of the CID Hoarder (this is from a few weeks ago), the online-ness of the delegated provider record holders stays mostly the same over 48 hours (see pic below). This means they still listen at the same address for those 48 hours, so I wouldn't say that there is much Multiaddress churn. About RFM17.1, I don't know of any blockers to pushing this improvement forward, although I would suggest merging libp2p/go-libp2p-kad-dht#802 before or at the same time as applying these changes.
@guseggert see above, the data seems to suggest that there isn't much multiaddress churn. Can you have another look and schedule this to go out in production? @dennis-tra let's keep this change in mind and monitor the DHT Lookup experiment results before and after to see the impact ;-)
All seems to be in place to get this landed - @guillaumemichel had another look and approved. @guseggert can you cross-check and flag any further issues that our team could/should work on? This has been hanging for a while and we would love to get it into production and see the performance impact in the wild. All measurement instrumentation is there through our DHT Lookup measurement, so we should spot a downward trend, right @dennis-tra? cc: @BigLep @p-shahi to be in the loop and schedule this for one of the upcoming releases.
That's right 👍
Fix multiple ProviderAddrTTL definitions #795