Offline ipns record lives for 1m only #6656
That's due to https://github.com/ipfs/go-ipfs/blob/master/namesys/namesys.go#L51 and this boolean check being reversed https://github.com/ipfs/go-ipfs/blob/master/namesys/namesys.go#L194 |
Nice! |
Should I use |
This is working as intended. If the node is offline, we have no way to know if the IPNS record is up-to-date so we return an error after the TTL expires. Note: The TTL is not the same as the EOL. The EOL is when the record is no longer valid, the TTL is how long we keep the record before we go to the network to look for a new one. Now, that doesn't mean this current model is necessarily correct. We really need to better support offline/local IPNS. However, we need to communicate how "fresh" the response is to the user. |
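To make the TTL/EOL distinction concrete, here is a minimal Go sketch of a cache entry carrying both values; the types and names are illustrative, not the actual go-ipfs API:

```go
package main

import (
	"fmt"
	"time"
)

// cachedRecord is an illustrative cache entry, not the go-ipfs type.
type cachedRecord struct {
	value    string
	eol      time.Time     // record is no longer valid after this point
	cachedAt time.Time
	ttl      time.Duration // how long to trust the cache before re-resolving
}

// fresh reports whether we may answer from cache without going to the network.
func (r cachedRecord) fresh(now time.Time) bool {
	return now.Before(r.cachedAt.Add(r.ttl))
}

// valid reports whether the record is usable at all.
func (r cachedRecord) valid(now time.Time) bool {
	return now.Before(r.eol)
}

func main() {
	now := time.Now()
	rec := cachedRecord{
		value:    "/ipfs/Qm...",
		eol:      now.Add(24 * time.Hour), // EOL default: 24h
		cachedAt: now,
		ttl:      time.Minute, // TTL default: 1m
	}
	later := now.Add(5 * time.Minute)
	// Five minutes in: still valid (EOL) but no longer fresh (TTL),
	// so an online node would go back to the network here.
	fmt.Println(rec.valid(later), rec.fresh(later)) // true false
}
```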
That seems incredibly inefficient. If the record is going to be around for 24 hours by default why is the daemon being forced to re-resolve the record after only 1 minute? If the user published it and it's valid for 24 hours by default they shouldn't be searching the network after only 1 minute. |
1. The EOL is how long the record is "valid". It doesn't mean that it's fresh. The EOL is just there as a cutoff to indicate when the record is so stale it should no longer be considered valid.
2. The TTL is how long we cache the record before trying to find a fresh one.
Someone can publish a new record _long_ before the record expires. If we cache records until the EOL, we won't find these new records.
|
Fair points, but isn't the TTL also defaulted to 24 hours? IPNS resolution takes ages and makes it basically impractical to use for anything. If we were to cache records for the duration of their TTL rather than for a minute, it would go a long way toward improving the usability of IPNS, which for all intents and purposes is unusable. My understanding of TTL was to say "hey, if this record has a TTL of 45 minutes, we'll consider it valid for 45 minutes before searching the network". This setting instead disregards that and only caches the record for 1 minute. What's the point of a TTL if we're just caching it for 1 minute? |
The EOL defaults to 24 hours. The TTL defaults to 1 minute.
I agree. The TTL was set to 1m when IPNS resolution still only took a few seconds (when the network was <1000 nodes and all nodes were connected to each other).
s/TTL/EOL. The EOL limits the amount of time an attacker could withhold a new IPNS record without the user noticing. It also limits the amount of time-travel an attacker can perform on an IPNS record. Without it, an attacker could keep on presenting/publishing an old IPNS record long after newer versions of the record have been published. OT Notes:
|
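As a rough illustration of how the EOL bounds a replay, consider this hedged sketch (illustrative names, not the real go-ipfs record validator): a record past its EOL is rejected outright, and among still-valid records the higher sequence number wins, so an attacker can only keep presenting an old record until its EOL passes.

```go
package main

import (
	"fmt"
	"time"
)

// acceptRecord sketches the two checks that bound a time-travel attack.
func acceptRecord(seq, bestSeq uint64, eol, now time.Time) bool {
	if now.After(eol) {
		return false // the replay window closes at the EOL
	}
	return seq > bestSeq // otherwise prefer the newest sequence number
}

func main() {
	now := time.Now()
	staleEOL := now.Add(-time.Hour) // an old record whose EOL has passed
	fmt.Println(acceptRecord(1, 0, staleEOL, now)) // false: replay rejected
}
```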
That's fine. However, I think #6663 should be reopened. |
Ok, I see what you're saying. In that case, wouldn't it be better to handle the TTL in a Publish request with something like:

```go
ttl := DefaultResolverCacheTTL
if setTTL, ok := checkCtxTTL(ctx); ok {
	ttl = setTTL
}
ns.cacheSet(peer.IDB58Encode(id), value, ttl)
```

That way situations like OP's could be avoided. |
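For reference, one way such a `checkCtxTTL` helper could read an optional TTL off the context; the context-key mechanism below is an assumption for illustration, not necessarily how namesys implements it:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// ttlKey is a hypothetical private context key for the publish TTL.
type ttlKey struct{}

// withTTL attaches a publish TTL to the context (hypothetical helper).
func withTTL(ctx context.Context, d time.Duration) context.Context {
	return context.WithValue(ctx, ttlKey{}, d)
}

// checkCtxTTL returns the TTL from the context, if one was set.
func checkCtxTTL(ctx context.Context) (time.Duration, bool) {
	d, ok := ctx.Value(ttlKey{}).(time.Duration)
	return d, ok
}

func main() {
	ctx := withTTL(context.Background(), 45*time.Minute)
	if ttl, ok := checkCtxTTL(ctx); ok {
		fmt.Println("cache the published record for", ttl)
	}
}
```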
That's a bug. |
How would that be a bug? If the user overrides the TTL value then this would allow overriding the cached TTL in a way that matches the TTL given when publishing a record |
I mean, our current handling of TTLs on publish is a bug. We should do what you suggest in #6656 (comment) and use the same TTL on publish as we do on resolve. However, I'm not sure how this fixes the problem. |
Closing in favor of ipfs/notes#391. |
fixes ipfs/kubo#6656 (comment) This commit was moved from ipfs/go-namesys@ac0ea1a
fixes ipfs/kubo#6656 (comment) This commit was moved from ipfs/go-namesys@3856c6e
Version information:
go-ipfs version: 0.4.22-
Repo version: 7
System version: amd64/linux
Golang version: go1.12.7
Description:
Start `ipfs daemon` with a default config, call `ipfs cat /ipns/...` to resolve and cache the content, then turn off the internet connection. The /ipns/ link will be available on the local machine for 1 minute only. After that minute you will be able to neither cat nor resolve the link via its /ipns path (Error: failed to find any peer in table); however, the data is still available through the resolved and cached /ipfs/hash.