Announce "capabilities" over the network? #523
Comments
We could also probably leverage the DHT to store certain node information, like which nodes can provide blocks since genesis, etc.
Do we care about longer-term storage? If so, should we consider erasure coding?
I don't know, maybe not in general. Probably a small subset of people will care. I mean, we could also have some web page where people announce the addresses of nodes that support syncing from genesis, but using the DHT sounds more cool 🤣
This is a good solution, but doing it is incredibly hard and needs proper research and design if we want to avoid eclipse attacks.
I think that one good way to do this is to reserve some kind of prefix or key in the DHT for all the nodes that have a specific capability, in a similar way to polkadot-fellows/RFCs#8. If we allow ourselves to modify the libp2p Kademlia protocol, then we could for example switch to 64-byte DHT keys, where the first 32 bytes are a prefix that indicates a capability, and the last 32 bytes are the actual PeerId. That's a suggestion, but there might be better ones. If we don't allow ourselves to modify the libp2p Kademlia protocol, then we can use the existing "providers" system, where the PeerIds close to a certain key representing a capability have the responsibility to store a list of PeerIds that implement this capability.

In both cases we have an issue: it can easily be eclipse attacked by malicious nodes hijacking the list. The reason why the normal DHT can't be eclipse attacked is purely because we put a few honest bootnodes in the chain spec, which guarantees that all honest nodes can be reached in one way or another. We don't have that luxury here (unless we add additional stuff to the chain spec, which is not a great solution).

In order to bypass this eclipse attack, we can rotate the "capability prefix" using the on-chain Babe/Sassafras randomness. In other words, this would randomly rotate the list of PeerIds that are responsible for storing the list of nodes that have a certain capability. This doesn't completely solve the problem, but it gives a random chance for an eclipse attack to fail every time the session changes, meaning that in finite time we should be able to find an honest node.

Since we would need to know the current on-chain randomness in order to find where the nodes with a certain capability are located, this means that all nodes must have the capability of bringing you to the head of the chain.
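A minimal sketch of the 64-byte key idea, assuming the prefix-rotation scheme described above; all names are illustrative, the XOR-based rotation is a stand-in for mixing in the Babe/Sassafras randomness with a proper hash, and none of this is actual smoldot or Substrate code:

```rust
// Hypothetical sketch: derive a session-rotated "capability prefix" and combine it
// with a PeerId hash into a 64-byte DHT key. Names and the XOR mixing are
// illustrative only; a real design would hash the randomness and capability together.

/// 32-byte identifier of a capability (e.g. "serves blocks since genesis").
type CapabilityId = [u8; 32];

/// Rotate the capability prefix with the current on-chain randomness, so that the
/// set of DHT nodes responsible for storing the capability list changes every session.
fn rotated_prefix(capability: &CapabilityId, on_chain_randomness: &[u8; 32]) -> [u8; 32] {
    let mut prefix = [0u8; 32];
    for i in 0..32 {
        // Placeholder mixing; a real implementation would use a proper hash function.
        prefix[i] = capability[i] ^ on_chain_randomness[i];
    }
    prefix
}

/// Build the 64-byte DHT key: first 32 bytes are the (rotated) capability prefix,
/// last 32 bytes are the hash of the node's PeerId.
fn capability_dht_key(prefix: &[u8; 32], peer_id_hash: &[u8; 32]) -> [u8; 64] {
    let mut key = [0u8; 64];
    key[..32].copy_from_slice(prefix);
    key[32..].copy_from_slice(peer_id_hash);
    key
}

fn main() {
    let capability: CapabilityId = *b"capability:sync-from-genesis\0\0\0\0";
    let randomness = [0x42u8; 32]; // would come from Babe/Sassafras on-chain randomness
    let peer_id_hash = [0x07u8; 32];

    let prefix = rotated_prefix(&capability, &randomness);
    let key = capability_dht_key(&prefix, &peer_id_hash);
    assert_eq!(key.len(), 64);
}
```

The point of the rotation is only that the responsible PeerIds are re-drawn each session, so an attacker cannot permanently occupy the keyspace around a fixed capability key.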
I've opened polkadot-fellows/RFCs#59
Currently we expect that we can sync all blocks from any full node, or request warp proofs from any full node. When people start using things like `--block-pruning`, these assumptions are no longer correct. As there are currently not that many nodes running with these settings, retrying the download from a different node probably succeeds. However, we cannot expect this to hold in the future, and we may in general switch to "prune everything" by default. Stuff like all the block bodies from every block isn't that useful to keep.

So, we should perhaps change the sync handshake to include information about down to which block a node still has data. Warp syncing will probably change anyway once we have nodes (like light clients) that only need to sync from a certain snapshot; all data before these snapshots is not important anymore.
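A minimal sketch of what such an extended handshake could carry, assuming a new field advertising the oldest block a peer can still serve; the type and field names below are illustrative stand-ins, not the actual Substrate definitions:

```rust
// Hypothetical sketch only: simplified stand-ins for the real Substrate types.
type BlockNumber = u64;
type Hash = [u8; 32];

/// What a block-announces handshake could look like if it also advertised how far
/// back a node can serve data (field names are illustrative, not the real ones).
struct BlockAnnouncesHandshake {
    /// Roles of the node (full, light, authority, ...).
    roles: u8,
    /// Best block known to this node.
    best_number: BlockNumber,
    best_hash: Hash,
    /// Genesis hash, used to check that peers are on the same chain.
    genesis_hash: Hash,
    /// Hypothetical new field: the oldest block for which this node still has the
    /// body (and possibly state), so peers don't request data that has been pruned.
    oldest_available_block: BlockNumber,
}

fn main() {
    // A node that pruned everything below block 9_000_000 would announce that fact:
    let handshake = BlockAnnouncesHandshake {
        roles: 0b0000_0001,
        best_number: 10_000_000,
        best_hash: [0u8; 32],
        genesis_hash: [0u8; 32],
        oldest_available_block: 9_000_000,
    };
    assert!(handshake.best_number >= handshake.oldest_available_block);
}
```

With such a field, a syncing peer could skip nodes whose `oldest_available_block` is above the range it needs instead of failing and retrying elsewhere.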