feat!: Address agnostic p2p #5176
Conversation
LGTM
```rust
#[config(env = "P2P_EXTERNAL_PORT")]
pub external_port: WithOrigin<u16>,
```
I think the notion of a port, and it being a required parameter, is not the right decision. Knowing only the port is not sufficient to connect to a peer. Peers might be hosted on different hosts, especially in environments like a Kubernetes cluster, where different peers might have different associated domains. While `network.address` is used to bind the peer to listen on a specific address in the local environment, there could be an optional `network.public_address` which other peers could use to connect to this peer. Moreover, in some environments it could make sense for it to be an array, so that the peer broadcasts all of its addresses and each of the other peers uses the first reachable one. Being optional, the peer would broadcast its local address (`network.address`) by default.

Maybe I am not right. Could you explain how `external_port` would work in a context where peers are distributed across multiple machines?
> Could you explain how `external_port` would work in a context where peers are distributed across multiple machines?
Consider a network with PeerA and some other peers, to which a new PeerX is added. Initially the network doesn't know PeerX's address and knows only its public key. Initially PeerX knows only PeerA's address.

- PeerX tries to connect to PeerA
- PeerA receives a connection from some IP, say 1.2.3.4
- PeerA accepts the connection because it knows PeerX's public key
- PeerX sends its `external_port` (5555) to PeerA
- PeerA now knows PeerX's IP (1.2.3.4) and its port (5555), and so forms its address (1.2.3.4:5555)
- PeerA broadcasts PeerX's address (1.2.3.4:5555) to the other peers

So here we assume that other peers can reach PeerX through the same IP (1.2.3.4) as PeerA does. I agree that this assumption might be incorrect. E.g. if PeerA and PeerX are both in some Kubernetes cluster, PeerA might receive the connection from PeerX via some cluster-local IP. Such an IP would not work for a PeerB outside of the cluster.
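The address-forming step above can be sketched as follows (a simplified illustration, not the actual iroha implementation; function and variable names here are hypothetical):

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

/// Sketch: once PeerA has accepted a connection from PeerX, it combines
/// the IP it observed on the socket with the `external_port` that PeerX
/// advertised, and gossips the resulting address to other peers.
fn derive_peer_address(observed: SocketAddr, advertised_external_port: u16) -> SocketAddr {
    // Keep the IP the connection actually came from (e.g. 1.2.3.4),
    // but replace the ephemeral source port (e.g. 52143) with the
    // port the peer claims to listen on (e.g. 5555).
    SocketAddr::new(observed.ip(), advertised_external_port)
}

fn main() {
    let observed = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(1, 2, 3, 4)), 52143);
    let gossiped = derive_peer_address(observed, 5555);
    assert_eq!(gossiped.to_string(), "1.2.3.4:5555");
}
```

This makes the limitation discussed above concrete: the derived address reuses the observed IP, which is only valid if other peers share PeerA's view of the network.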
So I like the idea of providing the full address in the config (that is, replacing `network.external_port` with `network.public_address`). But I don't understand why it should be optional. As I understand it, `network.address` is usually something like `0.0.0.0:port`, so it basically contains only the port to bind and can omit the host.
> As I understand it, `network.address` is usually something like `0.0.0.0:port`, so it basically contains only the port to bind and can omit the host.
That is not always the case. One could specify some other IP to bind to a specific network interface for security (to limit access) or other reasons. Being able to alter the host might be crucial for advanced deployments.

But I would agree that in most cases it is `0.0.0.0`. Shall we split `network.address` into a required `port` and an optional `host` defaulting to `0.0.0.0`?
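The proposed split could look roughly like this (a hypothetical sketch, not the actual iroha config types):

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

/// Hypothetical shape of the proposed split: a required `port` and an
/// optional `host` defaulting to 0.0.0.0.
struct NetworkBind {
    port: u16,
    host: Option<IpAddr>,
}

impl NetworkBind {
    /// Resolve the address to bind the p2p listener to.
    fn bind_address(&self) -> SocketAddr {
        let host = self.host.unwrap_or(IpAddr::V4(Ipv4Addr::UNSPECIFIED)); // 0.0.0.0
        SocketAddr::new(host, self.port)
    }
}

fn main() {
    // Host omitted: falls back to listening on all interfaces.
    let default_bind = NetworkBind { port: 1337, host: None };
    assert_eq!(default_bind.bind_address().to_string(), "0.0.0.0:1337");

    // Host given: bind to a specific interface only.
    let restricted = NetworkBind { port: 1337, host: Some(IpAddr::V4(Ipv4Addr::LOCALHOST)) };
    assert_eq!(restricted.bind_address().to_string(), "127.0.0.1:1337");
}
```

The optional `host` keeps the common case terse while still allowing the interface-restricting setups mentioned above.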
> So I like the idea of providing the full address in the config (that is, replacing `network.external_port` with `network.public_address`). But I don't understand why it should be optional.
Maybe you are right and `network.public_address` is needed in the majority of cases. My rationale was that for some setups it might not be needed, and peers could broadcast their `network.address` as if it were `network.public_address`.
@s8sato ideas here?
> Shall we split `network.address` into a required `port` and an optional `host` defaulting to `0.0.0.0`?
Personally I think the current parameter `network.address` is OK.
> for some setups it might not be needed and peers could broadcast their `network.address` as if it were `network.public_address`
I agree that it may be convenient for some setups, but in complex setups it will probably be hard for users to understand that peers are not connecting to each other because some optional parameter is not specified. So I think it should be a required parameter.
No objection from my side.

I think a connecting peer X has to announce its public address only, i.e. it shouldn't include its private address in the notification, because the private address can conflict with another peer's in another cluster.

Let's note your decision in doc comments to clarify the `address` / `public_address` responsibilities.
Agreed. Also, maybe `gossip_address` would be a clearer name.
> I think a connecting peer X has to announce its public address only, i.e. it shouldn't include its private address in the notification, because the private address can conflict with another peer's in another cluster.

Could you elaborate on this, please? It is already the case that, once connected, a peer sends only its public address, isn't it?
> Let's note your decision in doc comments to clarify the `address` / `public_address` responsibilities.

Added a doc comment to `iroha_config::parameters::user::Network`.
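A sketch of what such doc comments could look like (illustrative only; the actual struct and wording in `iroha_config::parameters::user::Network` may differ):

```rust
use std::net::SocketAddr;

/// Illustrative sketch of the documented distinction, not the actual iroha struct.
pub struct Network {
    /// Address to bind the local p2p listener to, e.g. `0.0.0.0:1337`.
    /// Controls only which local interface/port accepts incoming connections.
    pub address: SocketAddr,
    /// Address other peers should use to reach this peer, as seen from
    /// outside (NAT, ingress, etc.). This is the address that gets
    /// gossiped to the rest of the network.
    pub public_address: SocketAddr,
}
```

Keeping both fields documented side by side makes the bind-vs-reach distinction hard to miss for operators.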
Signed-off-by: Dmitry Murzin <diralik@yandex.ru>
LGTM
```rust
/// Peers received via gossiping from other peers
/// First-level key corresponds to `SocketAddr`
/// Second-level key - peer from which such `SocketAddr` was received
gossip_peers: BTreeMap<PeerId, BTreeMap<PeerId, SocketAddr>>,
```
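To illustrate how the two-level map is filled (a simplified sketch using `String` in place of the real `PeerId`, which is a public key in iroha; `record_gossip` is a hypothetical helper):

```rust
use std::collections::BTreeMap;
use std::net::SocketAddr;

// Stand-in for the real `PeerId`.
type PeerId = String;

/// Record that peer `reporter` gossiped `addr` as the address of `subject`.
/// First-level key: the peer the address belongs to; second-level key:
/// the peer the address was received from.
fn record_gossip(
    gossip_peers: &mut BTreeMap<PeerId, BTreeMap<PeerId, SocketAddr>>,
    subject: PeerId,
    reporter: PeerId,
    addr: SocketAddr,
) {
    gossip_peers.entry(subject).or_default().insert(reporter, addr);
}

fn main() {
    let mut gossip_peers = BTreeMap::new();
    let addr: SocketAddr = "1.2.3.4:5555".parse().unwrap();
    record_gossip(&mut gossip_peers, "peer_x".into(), "peer_a".into(), addr);
    record_gossip(&mut gossip_peers, "peer_x".into(), "peer_b".into(), addr);
    // Two peers independently reported the same address for peer_x.
    assert_eq!(gossip_peers["peer_x"].len(), 2);
}
```

Keying the inner map by reporter means each peer holds at most one current report per subject, which bounds how much a single reporter can contribute.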
I think we need some eviction policy. This can be done separately
Do you mean to periodically drop entries that were received a long time ago? Currently we have eviction based on topology.
> Currently we have eviction based on topology

I see, I think then it's OK. Can a malicious actor overwhelm another peer's gossip handle?
> Can a malicious actor overwhelm another peer's gossip handle?

A single malicious actor can't, since we choose the address using a majority rule.
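The majority rule can be sketched as follows (a simplified illustration, not the actual iroha implementation; `choose_address` is a hypothetical name):

```rust
use std::collections::BTreeMap;
use std::net::SocketAddr;

/// Majority-rule sketch: pick the address reported by the most peers.
/// A single malicious reporter holds only one entry per subject, so it
/// cannot outvote honest peers that agree on the real address.
fn choose_address(reports: &BTreeMap<String, SocketAddr>) -> Option<SocketAddr> {
    let mut votes: BTreeMap<SocketAddr, usize> = BTreeMap::new();
    for addr in reports.values() {
        *votes.entry(*addr).or_insert(0) += 1;
    }
    votes.into_iter().max_by_key(|&(_, count)| count).map(|(addr, _)| addr)
}

fn main() {
    let mut reports = BTreeMap::new();
    reports.insert("peer_a".to_string(), "1.2.3.4:5555".parse::<SocketAddr>().unwrap());
    reports.insert("peer_b".to_string(), "1.2.3.4:5555".parse().unwrap());
    reports.insert("mallory".to_string(), "6.6.6.6:666".parse().unwrap());
    // The honest majority wins despite the false report.
    assert_eq!(choose_address(&reports), Some("1.2.3.4:5555".parse().unwrap()));
}
```

Since each reporter gets exactly one vote per subject, flooding with many different false addresses only splits the attacker's own vote, which addresses the overwhelm question above.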
I get that, but a single malicious actor could send lots of different false addresses, couldn't it?
really well documented
Context

Fixes #5117

How things currently work:
- `sumeragi.trusted_peers` config parameter with addresses of initial peers

Solution

Planned changes:
- `network.address`
Use case: Peer changes address

Use case: New peer added
- The network has peers A (`iroha1:8000`) and B (`iroha2:8000`); a new peer C (`iroha3:8000`) joins
- C's config has `sumeragi.trusted_peers` - only peer A, and `network.public_address` - `iroha3:8000`
- C connects to A; A sees the connection coming from `9.9.9.9:52143` (`52143` is some random port chosen by C for the outgoing connection, and `9.9.9.9` is some IP address, e.g. a local IP address in a Kubernetes cluster)
- C sends its public address (`iroha3:8000`) to A
) to AMigration Guide
network.public_address
in the config, orP2P_PUBLIC_ADDRESS
env var). This parameter should contain external address of the peer used for p2p (as seen by other peers)genesis.json
— changedtopology
:Before:
After:
Review notes

Most meaningful changes are:
- `crates/iroha_core/src/peers_gossiper.rs` - logic related to peers gossiping
- `crates/iroha_p2p/src/network.rs` - the peer now connects to addresses received from the `PeersGossiper` service (previously the peer connected to the addresses of peers from the topology)
- `crates/iroha_p2p/src/peer.rs` - after establishing a connection, the peer now sends its `public_address`
- `crates/iroha_data_model/src/peer.rs` - moved the address from `PeerId` to `Peer`

Other changes are mostly related to `Peer`/`PeerId`.
TODO:
- `multiple_networks`

Checklist
- I've read `CONTRIBUTING.md`.