
Offline ipns record lives for 1m only #6656

Closed
kpp opened this issue Sep 21, 2019 · 14 comments · Fixed by #6667
Labels
kind/bug A bug in existing code (including security flaws)

Comments

@kpp
Contributor

kpp commented Sep 21, 2019

Version information:

go-ipfs version: 0.4.22-
Repo version: 7
System version: amd64/linux
Golang version: go1.12.7

Description:

Start ipfs daemon with a default config, call ipfs cat /ipns... to resolve & cache the content, turn off the internet connection.
The /ipns/ link will be available on the local machine for only 1 minute. After that minute you will be able to neither cat nor resolve the link by its /ipns path (Error: failed to find any peer in table), even though the data is still available through the resolved and cached /ipfs/hash.

@kpp kpp added the kind/bug A bug in existing code (including security flaws) label Sep 21, 2019
@bonedaddy
Contributor

bonedaddy commented Sep 21, 2019

That's due to https://github.com/ipfs/go-ipfs/blob/master/namesys/namesys.go#L51 and this boolean check being reversed https://github.com/ipfs/go-ipfs/blob/master/namesys/namesys.go#L194

@kpp
Contributor Author

kpp commented Sep 21, 2019

Nice!

@kpp
Contributor Author

kpp commented Sep 21, 2019

Should I use ipfs name publish with --lifetime or with --ttl? Will ipfs name publish overwrite the existing key on other nodes, even if the key is cached, once they have a connection to the publisher?

@Stebalien
Member

This is working as intended. If the node is offline, we have no way to know if the IPNS record is up-to-date so we return an error after the TTL expires. Note: The TTL is not the same as the EOL. The EOL is when the record is no longer valid, the TTL is how long we keep the record before we go to the network to look for a new one.

Now, that doesn't mean this current model is necessarily correct. We really need to better support offline/local IPNS. However, we need to communicate how "fresh" the response is to the user.

@bonedaddy
Contributor

That seems incredibly inefficient. If the record is going to be around for 24 hours by default, why is the daemon being forced to re-resolve the record after only 1 minute?

If the user published it and it's valid for 24 hours by default they shouldn't be searching the network after only 1 minute.

@Stebalien
Member

Stebalien commented Sep 23, 2019 via email

@bonedaddy
Contributor

Fair points, but isn't the TTL also defaulted to 24 hours? IPNS resolution takes ages, which makes it basically impractical to use for anything. If we cached records for the duration of their TTL instead of for a minute, it would go a long way toward improving the usability of IPNS, which for all intents and purposes is virtually unusable.

My understanding of TTL was to say "hey, if this record has a TTL of 45 minutes, we'll consider it valid for 45 minutes before searching the network".

This setting instead disregards that and only caches it for 1 minute. What's the point of a TTL if we're just caching for 1 minute?

@Stebalien
Member

Fair points, but isn't the TTL also defaulted to 24 hours?

The EOL defaults to 24 hours. The TTL defaults to 1 minute.

IPNS resolution takes ages and makes it basically impractical to use for anything

I agree. The TTL was set to 1m when IPNS resolution still took only a few seconds (when the network was <1000 nodes and all nodes were connected to each other).

What's the point of a TTL if we're just caching it for 1 minute?

s/TTL/EOL.

The EOL limits the amount of time an attacker could withhold a new IPNS record without the user noticing. It also limits the amount of time-travel an attacker can perform on an IPNS record. Without it, an attacker could keep on presenting/publishing an old IPNS record long after newer versions of the record have been published.

OT Notes:

  • we should consider changing the default EOL to infinity as most IPNS users don't really care about this.
  • Blockchains completely side-step this problem as they remember the entire state. Unfortunately, the DHT tends to forget things.

@kpp
Contributor Author

kpp commented Sep 23, 2019

  1. The TTL is how long we cache the record before trying to find a fresh one.

That's fine. However, I think #6663 should be reopened.

@bonedaddy
Contributor

Ok I see what you're saying. In that case, wouldn't it be better to handle the ttl in a Publish request with something like:

	// use the TTL from the publish request's context, if one was set,
	// instead of always falling back to the default
	ttl := DefaultResolverCacheTTL
	if setTTL, ok := checkCtxTTL(ctx); ok {
		ttl = setTTL
	}
	ns.cacheSet(peer.IDB58Encode(id), value, ttl)

that way situations like OP's could be avoided

@Stebalien
Member

That's a bug.

@bonedaddy
Contributor

How would that be a bug? If the user overrides the TTL value then this would allow overriding the cached TTL in a way that matches the TTL given when publishing a record

@Stebalien
Member

I mean, our current handling of TTLs on publish is a bug. We should do what you suggest in #6656 (comment) and use the same TTL on publish as we do on resolve.

However, I'm not sure how this fixes the problem.

@Stebalien
Member

Closing in favor of ipfs/notes#391.

ralendor pushed a commit to ralendor/go-ipfs that referenced this issue Jun 6, 2020
hsanjuan pushed a commit to ipfs/go-namesys that referenced this issue Feb 18, 2021
guseggert pushed a commit to ipfs/boxo that referenced this issue Dec 6, 2022
Jorropo pushed a commit to ipfs/go-libipfs-rapide that referenced this issue Mar 23, 2023