
Propose Naam as naming system powered by IPNI #4

Open · wants to merge 3 commits into base: main
Conversation

@masih masih commented Dec 7, 2022

Define specification for Naming As Advertisement (Naam) protocol, which enables publication, update and resolution of IPNS records over IPNI advertisements.

Rendered

masih commented Dec 7, 2022

PoC implementation will be pushed to this repo shortly.

@willscott (Member)
@guseggert and @lidel are probably good potential reviewers here from a client perspective, though both are likely busy at the moment. It may be worth finding a way to engage them here for early feedback.

masih commented Dec 8, 2022

Also @aschmahmann, I would love your feedback on this please 🙏

@willscott (Member)
A related ongoing effort worth tracking in relation to this is ipfs/specs#351.

@aschmahmann left a comment

Glad to see this moving along 🎉. None of my comments require action, but I wanted to flag some of the available options and optimizations, now and going forward, in case they're helpful to you all.

Comment on lines +216 to +218
* decodes the metadata from the provider info as IPNS record,
* validates the record, and
* if valid returns its value.


A note/thought here. Unlike provider records, IPNS records have a single "best" value. If 100 different peers advertise that they have the block with multihash M, returning them all to the user is very understandable (unless the user has asked for some kind of limiting, like geographic or recently seen online). However, given 100 different peers that have all put up IPNS records corresponding to the key K, there is at most 1 record worth returning (unless you're trying to allow for querying historical information). Recall that even if Alice is the only one with the key K, anyone can store, and therefore store with IPNI, an IPNS record signed by Alice corresponding to the key K (e.g. in case her publisher goes offline and her records are deleted).

Perhaps it's fine to ask the caller to decode all of the returned values and calculate the best themselves so as to not require the IPNI nodes to do any validation. On the other hand, if the IPNI nodes do validation before storing the data in their database they can both only store a single IPNS record per key and save the caller time on validation.

I can see how it's probably easier to shift it to being the caller's responsibility to do the processing, given that it doesn't require any database alterations in storetheindex, and given how the network behaves today the number of nodes advertising a given IPNS record will be very low. In this case perhaps it'd be worthwhile to do the work of selecting the "best" record in between the IPNI database and the HTTP API (i.e. ipfs/specs#337) server responses. Then later, if the savings look substantial, implementers can switch to only storing a single record per key.

@masih masih Dec 9, 2022


there is at most 1 record worth returning

Yep; the Go implementation does exactly that: in the context of naming over advertisement, only the "head" advertisement matters, and because advertisements are signed we can verify that the advertisement containing the IPNS record was indeed published by the peer that corresponds to the IPNS key.

anyone can store and therefore store with IPNI an IPNS record signed by Alice

Yes, I think of this as a good thing, since even if Alice is offline, the IPNS key is resolvable. I think the same is true of the existing DHT-based IPNS resolution?

decode all of the returned values and calculate the best themselves...
if the IPNI nodes do validation before storing the data in their database

Two things to point out here:

  • Because the provider and context ID are identical across all NAAM advertisements, there will only be a single record as a result of processing those ads.
  • The IPNI built-in validation checks the signature of the ad and guarantees that the ad is from the publisher/provider.

do the work of selecting the "best" record in between the IPNI database and the HTTP API

Thank you for mentioning the HTTP API; that seems reasonable, though I must point out that in IPNI there will only be one IPNS record for a given IPNS key. Reading this comment, I am worried that I have not spelled out the fact that only the head advertisement matters: because the provider and context ID are identical across all NAAM advertisements, there will only be a single provider record, and therefore a single IPNS record.

Having pointed out the singularity of IPNS records resolved via IPNI, selecting the best record at the HTTP API would become a matter of comparing EOL values and returning whichever record is newer, I think.

@aschmahmann

and because advertisements are signed we can verify that the advertisement containing the IPNS record was indeed published by the peer that corresponds to the IPNS key.

You can't really do this. There may be many IPNS keys owned by a single peer, Alice. Of course she can also use her libp2p node ID as a key, but this isn't the only one.

though I must point out in IPNI there will only be one IPNS record for a given IPNS key.

I'm not sure if I'm misunderstanding or misexplaining myself. Say the following happens:

  1. On day 1 Alice signs an IPNS record R1 using key K that's valid for 30 days and has a sequence number 1, then advertises it to IPNI from the AliceProvider
  2. On day 2 Bob sees Alice's IPNS record R1 and says "hey, I should try and keep that alive" so advertises it to IPNI from the BobProvider
  3. On day 3 Alice signs an IPNS record R2 using key K that's valid for 30 days and has a sequence number 2, then advertises it to IPNI from the AliceProvider
  4. On day 4 when Charlie tries to look up the record associated with the IPNS identifier derived from K from IPNI there are two records (AliceProvider, R2) and (BobProvider, R1) which will either both need to be returned (for the client to figure out which is valid) or the IPNI node will need to figure out that R2 > R1 and only publish the result from the AliceProvider
    • Note: In theory R2 could be invalid (EOL was on day -7, or even day 3.5, invalid signature, etc.) so just choosing R2 because Alice published R1 and R2 to IPNI isn't sufficient

@masih (Member, Author)

On step 2: this is where I need your help to get my head around this; I suspect I am missing something fundamental about IPNS.
In Naam, BobProvider can only advertise IPNS records with key /ipns/<bob's-key>, because Bob does not have Alice's key. What am I missing?

On step 4: thank you for pointing out the case where R2 could be expired or invalid. As the spec stands, such a publication by Alice would make R1 no longer discoverable. I am curious: why should R1 remain discoverable if Alice has published a new record for the same key that has expired? One could argue that because the key is the same and the value has changed, the value corresponding to the key should be the latest one, i.e. the expired record, and it is Alice's responsibility to publish new records to correct it. As for the case where the signature of R2 is invalid, again one can argue that it is Alice's responsibility to publish correct IPNS records if she wants them to be discoverable.

WDYT?

@aschmahmann

On step 2: this is where I need your help to get my head around this; I suspect I am missing something fundamental about IPNS.

In Naam, BobProvider can only advertise IPNS records with key /ipns/<bob's-key>, because Bob does not have Alice's key. What am I missing?

I thought I was explaining this above, but I'll try to give more examples to see if that helps.

Alice and Bob can each have hundreds of keys (e.g. Ed25519 key pairs) for a variety of purposes. Any of these keys can be used to create IPNS identifiers, such as /ipns/alices-favorite-book-key, /ipns/alices-least-favorite-movie-key, etc. These may/may not be the same as the alices-ipni-provider-node-key or alices-laptop-kubo-node-libp2p-key.

This means that this approach (i.e. one key per IPNI provider) is neither sufficient nor necessary: https://github.com/ipni/go-naam/blob/9002ac91a0e3d606d81748df34d63e5f826e821a/naam.go#L142-L143.

Additionally, once Alice has signed a record R1 for /ipns/alices-favorite-book-key with her private key there's nothing stopping anyone else (e.g. Bob) from publishing R1 as well since that signed record is now public information.

On step 4: thank you for pointing out the case where R2 could have expired or invalid. As the spec stands, such publication by Alice would make R1 no longer discoverable. I am curious why should R1 remain discoverable if Alice has published a new record for the same key that has expired?

Given the above, let's revisit, taking the go-naam implementation as-is as an example. We can see that there is no server-side code at all, only client-side code. This means a few things:

  1. IPNS records are not validated server side, meaning they might be completely invalid (missing a signature, signed with the wrong key, etc.) not just expired
  2. Anyone can publish R1 including both Alice and Bob AND there is no way for IPNI to know whether Alice or Bob is the "canonical" author.
  3. 1 + 2 implies that only sending back a single value would be unsafe since there's no logic to decide which one to send back. This is fixed if there's server-side logic deciding which to send back (i.e. negating 1).

This means that, as is, https://github.com/ipni/go-naam/blob/9002ac91a0e3d606d81748df34d63e5f826e821a/indexer_client.go#L105-L110 seems to be a DoS vulnerability, since a malicious Mallory could advertise /ipns/alice-favorite-book-key with a totally bogus and invalid value (i.e. not signed). This is because a) the server doesn't validate the publish, b) the server sends back all matches for a given multihash, and c) the client code chooses the first response, which might be Mallory's, preventing resolution of the IPNS address.

The solutions here are, as mentioned above, either:

  1. Have the client process all the results and choose the best valid one (the server will already send back multiple results as-is). go-ipns has tools for this.
  2. Have the server do processing and only store the best valid entry and return only that one

@masih does that make sense?

@masih (Member, Author)

It does; thank you for helping me understand. Would this address your concern? This is to make it more explicit that the identity used to sign and publish the advertisement must match the identity in the IPNS key.

This specification is written to require zero changes to the existing ad ingest pipeline, in the hope of making it easier to integrate. Relaxing that condition slightly, it should be straightforward to also expand server-side validations.

@aschmahmann

I'm not sure. IIRC pr.Provider from here refers to the "provider" of a CID, not the index provider node, and there's no proven relationship that the owner of AliceIndexProviderKey is the owner of AliceMHProviderKey (for example, I could delegate all my IPNI record publishes to a third party). IIUC the plan is to have some reputation to track good behavior here, but it's TBD and best-effort.

However, even if this did work it would limit utility by only allowing 1 key per index provider (e.g. really bad for infra providers, or just anyone with multiple mutable websites to manage under different keys). Given that your proposed change is client side, why not just use go-ipns to select the best record?

For example in go-naam do something roughly like:

ipnsValidator := ipns.Validator{}
var records [][]byte
for _, pr := range mhr.ProviderResults {
	if !bytes.Equal(pr.ContextID, ContextID) {
		continue
	}
	// Extract the serialized IPNS record from the ad metadata.
	recBytes, err := getIPNSBytesFromMetadata(pr.Metadata)
	if err != nil {
		continue
	}
	// Validate the record itself, not the raw metadata envelope.
	if ipnsValidator.Validate(ipnsID, recBytes) != nil {
		continue
	}
	records = append(records, recBytes)
}
bestRecordIndex, err := ipnsValidator.Select(ipnsID, records)
if err != nil || bestRecordIndex < 0 {
	return nil, routing.ErrNotFound
}
return records[bestRecordIndex], nil

Basically, because the validation is only happening client-side anyway, why not do a little more CPU work (rather than relying on assumptions) in the validation logic in order to unlock a lot of functionality? If it becomes important later, servers can add the validation on their end.

@masih masih Dec 13, 2022


proven relationship the owner of AliceIndexProviderKey is the owner of AliceMHProviderKey

The advertisement signature would prove that, and it is verified at ingest time by indexer nodes. Right?

only allowing 1 key per index provider

That's a reasonable concern. Though I must point out, a Naam publish is far more lightweight than a regular index provider setup. The two are also not bound to each other: a user with multiple identities can publish different advertisement chains under different identities. Naam ad chains can be published on their own exclusive chain.

why not do a little more CPU work

go-naam already does the validation here. Is there something I have missed in validating the records?

Thank you for pointing out Select; I missed it! I still think using Select doesn't make sense, because the indexer has to return at most one record for the same Provider ID + Context ID + Multihash. If it does not, then there is a bug in the indexer implementation. When the result can only be a single record, Select doesn't quite make sense.

@aschmahmann

Advertisement signature would prove that which is verified at the time of ingest by indexer nodes. Right?

Ah ok thx. I knew you could publish for others, but I guess that depends on the policy. Not sure how that's determined, but if it's sufficiently trusted then that's probably ok.

The Naam ad chains can be published on their own exclusive chain.

For sure, and maybe this depends on what the delegated advertising policies look like, but it'd be annoying to have to run hundreds of libp2p hosts (and ports) for hundreds of naam-index-provider nodes. If it's as simple as making all the private keys available to a single libp2p index provider node, that's probably ok (although note there might be some operational complexity around creating advertisements if people want to isolate their private keys to colder storage/hardware devices).

I still think using select doesn't make sense because, indexer has to return at most one record for the same Provider ID + Context ID + Multihash.

Right, this is for enabling multiple return values. The only use case I can think of for this at the moment is to increase the resiliency of the IPNS record, which is not currently allowed. For example:

  1. Alice decides to make a consistency vs availability tradeoff and always publish her records with EOLs of effectively infinity
  2. At some point Alice takes her node (and index provider) offline
  3. A few days later index providers garbage collect her records
  4. Bob tries to keep Alice's record alive by publishing it to IPNI, but can't, which means /ipns/alice-key is effectively dead even though it doesn't have to be

This isn't necessarily the most important, although it's something that's being lost here. It could of course be added later as well if needed.

@masih (Member, Author)

If it's as simple as making all the private keys available to the a single libp2p index provider node that's probably ok

Yup, that should suffice.

it's something that's being lost here.

Agreed; already added under limitations. As is, if the provider side is offline for longer than a week, records are no longer resolvable. Server-side changes to treat the Naam context ID differently could be a way to rectify this.

Thank you so much for the great pointers.

Comment on lines +17 to +19
This document defines the **N**aming **A**s **A**dvertise**m**ent (Naam) protocol: a mechanism by
which [IPNI](IPNI.md) can be used to both publish, and
resolve [IPNS](https://github.com/ipfs/specs/blob/main/ipns/IPNS.md) records. Naam utilises the

@aschmahmann

IIUC, what you've identified here is that you can use this mechanism to store basically any type of small data. In particular, you don't actually need to change how any IPNI node operates in order for this proposal to function.

Instead, the proposal is more about letting people know what to do consistently so that they can coordinate across applications, in the same way that the Bitswap and FilecoinGraphSync-v1 codes and metadata are defined so as to increase coordination. Also, it allows IPNI nodes to use APIs like ipfs/specs#337 instead of only responding to FindProviders queries and having clients decipher them on their end.

I don't know if down the road advertising chains and networks might want to be divided up more so people can choose to participate in them more selectively, but without damaging the system as a whole. Doesn't seem necessary now or for this PR (given that it already works as is), but perhaps something to consider as the system and the data it stores grows.

@masih (Member, Author)

Spot on; that's exactly why this spec exists.

On separation of networks: maybe down the line. It seems like overkill to separate just for IPNS records, considering how few of them are out there compared to content CIDs.

Comment on lines +229 to +230
IPFS paths, specially for records that do not need the peer's public key to be embedded into IPNS
records. For example, a Ed25519 peer ID also captures the peer's public key. A marshalled IPNS

@aschmahmann

This could get a little tricky for people trying to use say RSA 4096 keys. Depending on how much of a sticking point this is, there's possibly a couple ways around it. Notably, the RSA key is not covered by the signature so you could store as separate advertisements the identifier -> public key mapping and the IPNS record and then combine them when returning results.

@masih (Member, Author)

Thank you for pointing this out; I need to think about this one a bit. Public key resolution feels like something that's beyond the scope of this spec. Unless you are saying such keys make up the majority of IPNS users today, and without supporting them the size limitations of the approach would make it much less useful to people?

@masih masih Dec 9, 2022


Looks like RSA keys are very popular.

Maybe we can extend this to support CID -> public key lookup, where the NAAM IPNS record has the CID of the public key, which is resolvable to data just like a regular content CID via IPNI.

@aschmahmann

@lidel can explain how those numbers were collected. Note: that number showing only RSA PK records is a bit misleading. PK records shouldn't exist for Ed25519 since they're derivable from the identifiers; the question is what percentage of IPNS records use RSA keys (and of what size) vs ed25519, secp256k1, ...

I'd hope that by now most of the keys are ed25519, but I'm not sure about the distribution without some checking.

@lidel lidel Feb 2, 2023


(Disclaimer: someone executed some lookups on the hydras for me back then, and we ended up marking RSA as SHOULD in the spec, so I did not dig deeper.)

I remember these RSA vs Ed25519 numbers had to be interpreted with additional steps for the /ipns and /pk prefixes (the number for RSA included keys referenced from peer records and not used for IPNS, Ed25519 is missing because it's already inlined, etc.).

Either way, RSA keys were the old default so we should support them. Ed25519 is the new default, and we are nudging people to move away from RSA (ipfs/kubo#9507, ipfs/kubo#9506), so over time, there will be less of them.

Comment on lines +225 to +226
a maximum limit of 10 KiB. The 1 KiB limit currently imposed by network indexers on the maximum size
of advertisement metadata.

@aschmahmann

Is 1KiB the current network limit, but 100B is the recommended (you will have a good time) number? https://github.com/ipni/specs/blob/69a5f6bfdacf6050958a901324322c1e61b1611e/IPNI.md#advertisements

@masih (Member, Author)

The spec might be out of date, though I think 100B is a reasonable value for a typical ad. The indexer nodes currently have a hard limit of 1 KiB.

a maximum limit of 10 KiB. The 1 KiB limit currently imposed by network indexers on the maximum size
of advertisement metadata.

Basic tests show that a 1KiB limit is sufficiently large for storing simple IPNS records pointing to

@aschmahmann

Some background info on the limits:

When trying to figure out a limit to use, here was some of the input information: ipfs/specs#319 (comment). The max record length found exceeded 1 KiB.

At the moment IPNS records have bloat to them (e.g. some data is stored in the record twice, once as CBOR and again in the protobuf), so there's room to evolve the spec to cut down on used space.

You may already have metrics on this for IPNI, but my suspicion is that the current problem is not long paths but people using really large identity CIDs. IIRC the only way you end up needing long paths is either with non-UnixFS IPLD queries (e.g. traversing a deep path inside a single block, a deep path within an ADL that captures the query, the types of complexity that can come with the IPLD URI scheme, etc.), or with nested IPNS/dnslink queries (e.g. pointing to /ipns/otherkey/foo/bar/baz/....). Neither of these are common at the moment; people abusing identity CIDs definitely does happen though.

@masih (Member, Author)

people using really large identity cids

Ah yes, IDENTITY CIDs strike again! :) My vote would be to stop the abuse of IDENTITY CIDs, and I would go as far as saying that NAAM should reject IDENTITY CIDs (and maybe IPFS too). I will add this as a limitation.

I am curious: what is the actual use case for putting IDENTITY CIDs inside IPNS records?

@aschmahmann

Ah yes IDENTITY CIDs strike again!

Yes, along with their lack of max-size specification 😭.

I am curious what is the actual usecase for putting IDENTITY CIDs inside IPNS records?

This one's actually pretty straightforward, especially compared to standard IPNI provider records for identity CIDs.

  1. Let's say Alice publishes her current favorite quote under her key with identifier /ipns/alicekey
  2. On day 1 Alice's favorite quote is "You only live once, but if you do it right, once is enough" which is 58 characters. Rather than using SHA2-512 on it (which is 64 bytes) she decides to encode it as an identity CID to save on bytes, and uses it as the value of the IPNS record (along with the raw codec)
  3. On day 2 Alice's favorite quote is "In the end, it's not the years in your life that count. It's the life in your years" which is 83 characters so Alice uses SHA2-512 (along with the raw codec) to get her CID, and updates her IPNS record to point at the CID of her new favorite quote

Basically, as long as there are any valid uses for identity CIDs, there are reasons to have them in IPNS records. Where things start getting out of hand is people pushing identity CIDs to be bigger and bigger to save themselves from doing network queries.
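To make the size tradeoff in Alice's example concrete, here is a rough sketch that hand-assembles the CIDv1 byte layout just to count bytes (real code would use a library such as go-cid; the `cidV1` helper below is purely illustrative):

```go
package main

import (
	"crypto/sha512"
	"encoding/binary"
	"fmt"
)

// cidV1 hand-assembles the byte layout of a CIDv1 for illustration:
// <version=0x01><codec><multihash>, where a multihash is
// <hash-fn-code><digest-length><digest>, with uvarint-encoded fields.
// This is only for counting bytes, not for producing real CIDs.
func cidV1(codec, hashCode uint64, digest []byte) []byte {
	out := []byte{0x01}
	out = binary.AppendUvarint(out, codec)
	out = binary.AppendUvarint(out, hashCode)
	out = binary.AppendUvarint(out, uint64(len(digest)))
	return append(out, digest...)
}

func main() {
	const raw = 0x55      // raw codec
	const identity = 0x00 // identity "hash": the digest is the data itself
	const sha2_512 = 0x13

	quote := []byte("You only live once, but if you do it right, once is enough")
	sum := sha512.Sum512(quote)

	idCid := cidV1(raw, identity, quote)    // 58-byte quote inlined into the CID
	hashCid := cidV1(raw, sha2_512, sum[:]) // 64-byte digest

	// The identity CID (62 bytes) beats the SHA2-512 CID (68 bytes) for data
	// this short; past the digest length, hashing wins.
	fmt.Println(len(idCid), len(hashCid))
}
```

This is the whole incentive: below the digest length, inlining the data saves bytes, and the trouble starts only when people push ever-larger payloads through the same mechanism.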

Contributor

Seems like forbidding identity CIDs in IPNS records would still be a reasonable restriction, or have the IPNS spec explicitly limit the length of identity CIDs to be less than the length of the largest hash.

NAAM.md Outdated
advertisements ever change for the same IPNS key. For more information,
see [Updating IPNS Records](#updating-ipns-records).

Note that the IPNS multicodec code `0xe5` is tagged as "namespace" in the CSV table, which could

@aschmahmann

Correct, the namespace tag is used for things like ENS and others to basically turn /ipns/ into 0xe5. Adding a code for IPNS records higher in the table seems reasonable though. No strongly held opinions here.

@masih (Member, Author)

I personally think that we should add a code dedicated to IPNS Record, hence the paragraph.

@lidel

+1, for ipns-record we should add a separate code, like we already have for libp2p-peer-record.
It will be useful in other places, e.g. it will allow us to put an IPNS record in a CAR with the blocks behind the CID it points at, and have an end-to-end snapshot of an IPNS website.


@masih (Member, Author)

Thank you Sir

NAAM.md Outdated
Comment on lines 167 to 168
Alternatively, a new multicodec code can be introduced to the
[multicodec CSV file](https://github.com/multiformats/multicodec/blob/master/table.csv) to avoid

@aschmahmann

Off topic: IMO needing to reserve codes in the table is kind of a pain and impediment to development (e.g. see the reluctance around adding a new entry). Would there be any interest in defining some code that just means it's a named thing, so that groups more concerned about iterating/experimenting quickly than saving bytes could just go ahead even if it means losing a few metadata bytes to describe the name of their protocol/system?

@masih masih Dec 9, 2022


Potentially; indexers make no assumptions about what's in the metadata. As in, they do not validate it. However, metadata is expected to start with a varint which offers a hint to its consumers about how to read the remaining bytes.

There are also rules about ordering: metadata entries should be sorted by their protocol ID in ascending order. However, that doesn't apply here because NAAM metadata will only contain a single entry, i.e. 0xe5....

On reducing barriers for agile and iterative development: there is no requirement that the protocol ID in metadata has to be a multicodec. It just so happens that the ones we commonly use (for bitswap and filecoin graphsync) are added. As long as the advertisement publisher and the retrieval client agree on a code (which ideally doesn't clash with existing ones), then they are free to use whatever code they desire.
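The varint-prefixed metadata layout described above can be sketched in a few lines. A minimal illustration using only the standard library (the `0x0300` ipns-record code matches the multicodec entry mentioned elsewhere in this thread; `encodeMetadata`/`decodeMetadata` are hypothetical helpers, not IPNI APIs):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Advertisement metadata starts with a uvarint protocol ID that hints to
// consumers how to read the remaining bytes. Any agreed, non-clashing code
// would work the same way for an experimental protocol.
const ipnsRecordCode = 0x0300

// encodeMetadata prepends the uvarint protocol ID to the payload.
func encodeMetadata(code uint64, payload []byte) []byte {
	return append(binary.AppendUvarint(nil, code), payload...)
}

// decodeMetadata splits the uvarint protocol ID from the remaining bytes.
func decodeMetadata(md []byte) (uint64, []byte, error) {
	code, n := binary.Uvarint(md)
	if n <= 0 {
		return 0, nil, fmt.Errorf("invalid varint prefix")
	}
	return code, md[n:], nil
}

func main() {
	md := encodeMetadata(ipnsRecordCode, []byte("serialized-ipns-record"))
	code, payload, _ := decodeMetadata(md)
	fmt.Println(code == ipnsRecordCode, string(payload))
}
```

Since NAAM metadata carries a single entry, the ascending-order rule for multiple entries never comes into play here.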

Define specification for Naming As Advertisement (Naam) protocol, which
enables publication, update and resolution of IPNS records over IPNI
advertisements.

masih commented Dec 9, 2022

Thank you for pointing out ipfs/specs#351, @willscott. I wonder if we can copy the same HTTP API and add it to the IPNI spec, which offers NAAM IPNS record validation on the server side as API sugar.



Add clarifying bullet points to spell out the encoding and IPNS key
format.
@lidel left a comment

multiformats/multicodec#312 merged, we have ipns-record codec (0x0300) 🥳

(sidenote) we also have libp2p-peer-record, so the missing one for "parity" (to have opaque records for all routing types) is ipfs-(dht-)provider-record (?)

* **`PreviousID`** - the optional link to the previous Naam advertisement. `nil` indicates no
previous advertisements.
* **`Provider`** - the publisher peer ID.
* **`Addresses`** - the array of multiaddrs over which the publisher can be reached.
Contributor

What will the publisher addresses be used for? Indexers get the advertisements from the address specified in the announcement message and map the peer ID multihash to an IPNS record contained in IPNI metadata. IPNS clients resolve a peer ID multihash to the IPNS record. They do not care about what the publisher's address is. So, in the context of IPNS, the advertisement's Addresses field seems unnecessary.



## Privacy

To be revisited when double-hashing work is finalised.
Contributor

See go-naam PR 13.

When reader-privacy is enabled (enabled by default), a double-hashed lookup is done when resolving a key to an IPNS record. The client wishing to resolve a key to an IPNS record creates a hash of the key and sends that to the indexer for lookup. The reader-privacy-enabled indexer returns encrypted metadata that is indexed by the hash of the key. The client then decrypts this metadata using the original key and extracts the IPNS record from the decrypted metadata.
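The flow just described can be illustrated with a toy sketch. This is not the actual IPNI reader-privacy scheme (key derivation, nonce handling, and hashing details are all simplified stand-ins); it only shows the shape of the protocol: the indexer sees a hash of the key, and only a client holding the original key can decrypt the returned metadata.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/sha256"
	"fmt"
)

// lookupKey is what the client sends: a hash of the original key, so the
// indexer never learns which key is being resolved.
func lookupKey(key []byte) [32]byte { return sha256.Sum256(key) }

// deriveAEAD derives an AES-GCM cipher from the original (unhashed) key, so
// only a client that knows the key can open the returned metadata. The "enc:"
// label and derivation are illustrative only.
func deriveAEAD(key []byte) cipher.AEAD {
	k := sha256.Sum256(append([]byte("enc:"), key...))
	block, _ := aes.NewCipher(k[:])
	aead, _ := cipher.NewGCM(block)
	return aead
}

func main() {
	key := []byte("/ipns/alicekey")
	record := []byte("signed-ipns-record")
	nonce := make([]byte, 12) // fixed zero nonce: acceptable only in a one-shot demo

	// Indexer side: store the sealed record indexed under the hash of the key.
	sealed := deriveAEAD(key).Seal(nil, nonce, record, nil)
	_ = lookupKey(key) // index under the hash, never the key itself

	// Client side: look up via the hash, then decrypt with the original key.
	plain, err := deriveAEAD(key).Open(nil, nonce, sealed, nil)
	fmt.Println(err == nil, string(plain))
}
```

A client without the original key can compute nothing useful from the hash and cannot open the sealed metadata, which is the privacy property the comment above describes.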


7 participants