
pubsub between docker containers does not work #58

Closed
achingbrain opened this issue Oct 9, 2018 · 7 comments
Labels
help wanted (Seeking public contribution on this issue), P2 (Medium: Good to have, but can wait until someone steps up), status/ready (Ready to be worked)

Comments

@achingbrain
Member

I've created a repo that demos the problem: https://github.com/achingbrain/docker-ipfs-pubsub

When run with docker-compose as per the instructions in the README, two containers are created that connect to each other as swarm peers, subscribe to the same topic and start sending messages, but do not see each other in the topic peer list, nor do they receive the messages sent by the other.

Any ideas?

@achingbrain
Member Author

achingbrain commented Oct 10, 2018

I think this is because the containers respond to an mDNS query from each other with addresses that include 127.0.0.1 and the same ports as the current container (since that's the default config), so each container thinks it's connected to the other one when it's in fact connected to itself.

That is, if I manually filter out all loopback addresses from this list of multiaddrs when key === 'TCP', before it's passed to this.dialer.dialMany, then pubsub between docker containers works as expected.
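
Roughly, the workaround looks like this (a sketch only, not the actual js-libp2p code; the addrs array and the string-based loopback check are assumptions for illustration):

// Drop loopback multiaddrs from the list discovered via mDNS before it is
// handed to the dialer. The addresses here are example values.
const addrs = [
  '/ip4/127.0.0.1/tcp/4002',
  '/ip4/172.27.0.3/tcp/4002'
]

const isLoopback = (addr) => addr.startsWith('/ip4/127.') || addr.startsWith('/ip6/::1/')

const dialableAddrs = addrs.filter((addr) => !isLoopback(addr))

console.log(dialableAddrs) // => [ '/ip4/172.27.0.3/tcp/4002' ]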

cc @jacobheun @diasdavid

@achingbrain
Member Author

I guess it's not verifying the remote peer ID after connection, which probably explains why ipfs.ping fails with an error about IDs not matching.

@daviddias daviddias added the status/ready Ready to be worked label Oct 31, 2018
@vasco-santos vasco-santos added help wanted Seeking public contribution on this issue P2 Medium: Good to have, but can wait until someone steps up labels Feb 11, 2019
@achingbrain
Member Author

A workaround for this is to configure unique swarm ports for every IPFS instance under the Docker daemon.

E.g. Container A uses:

Swarm: [
  `/ip4/0.0.0.0/tcp/9000`,
  `/ip4/127.0.0.1/tcp/9001/ws`
],

Container B uses:

Swarm: [
  `/ip4/0.0.0.0/tcp/9002`,
  `/ip4/127.0.0.1/tcp/9003/ws`
],

...and so on, one port pair per container.

Doesn't scale very well, obviously.
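
With js-ipfs the same thing can be expressed in each container's node options, something along these lines (a sketch; the exact constructor shape depends on the js-ipfs version, and the ports are just the ones from the example above):

const IPFS = require('ipfs')

async function main () {
  // Container A gets its own swarm ports; container B would use 9002/9003, etc.
  const node = await IPFS.create({
    config: {
      Addresses: {
        Swarm: [
          '/ip4/0.0.0.0/tcp/9000',
          '/ip4/127.0.0.1/tcp/9001/ws'
        ]
      }
    }
  })

  const { addresses } = await node.id()
  console.log('Listening on', addresses.map((a) => a.toString()))
}

main()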

@daviddias
Member

@achingbrain I believe I know the issue. Can you run a test: execute ipfs swarm addrs in each instance and check the list of addresses the peer is listening on.

If only localhost/local interfaces appear, then it is normal that a connection can't be established, since the IP packets will be routed back to the instance itself. Each container should have the equivalent of a different IP pair.

@jacobheun
Contributor

I think libp2p/js-libp2p#202 (comment) would help resolve/prevent this issue. It would enable us to more easily avoid announcing local addresses, so we don't accidentally dial ourselves, and remove the need to manually set the address.
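
For context, this is the kind of configuration that would make possible: announce only the routable address and never the loopback one. The Announce/NoAnnounce fields below mirror the go-ipfs config; whether js-ipfs exposes them the same way depends on the version, so treat the exact option names as an assumption:

// Sketch of an address-announce filter: the node listens on all interfaces
// but only advertises the container's routable address to its peers.
const config = {
  Addresses: {
    Swarm: ['/ip4/0.0.0.0/tcp/4002'],
    Announce: ['/ip4/172.27.0.3/tcp/4002'],   // reachable from other containers
    NoAnnounce: ['/ip4/127.0.0.1/tcp/4002']   // never advertise loopback
  }
}

module.exports = config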

@achingbrain
Member Author

If you run the example you'll have two containers, echo_1 and echo_2. They have each other in their swarm peers list, but not as topic peers.

Here echo_1 is Qm..dT and echo_2 is Qm..5h

echo_1  | My peer id QmQXnULD5usLVTsgnCYGCRqtbVcsJBEWcjY7voPpt8nBdT
echo_2  | My peer id QmNTneAkkct2PMzJR73gARRPtKjVsBZEHa3wBkX6xCCo5h
echo_1  | Topic peers:
echo_1  | None
echo_1  | Swarm peers:
echo_1  | ...
echo_1  | QmNTneAkkct2PMzJR73gARRPtKjVsBZEHa3wBkX6xCCo5h
echo_1  | Swarm addrs:
echo_1  | ...
echo_1  | QmNTneAkkct2PMzJR73gARRPtKjVsBZEHa3wBkX6xCCo5h
echo_1  | /ip4/127.0.0.1/tcp/4002
echo_1  | /ip4/172.27.0.3/tcp/4002

So echo_2 is advertising connections on loopback but also on 172.x, which is routable via the Docker network. If you remove the loopback address, everything works and the echo containers appear in each other's topic peers.
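
For reference, the checks behind that output map onto the regular js-ipfs API, roughly like this (a sketch; node is assumed to be a started js-ipfs instance already subscribed to the topic, and the return shapes vary slightly between versions):

async function report (node, topic) {
  // Peers we have an open swarm connection to
  const swarmPeers = await node.swarm.peers()
  console.log('Swarm peers:', swarmPeers.length)

  // Peers seen on the same pubsub topic; this is the list that stays empty
  // when each container has really dialled itself via loopback
  const topicPeers = await node.pubsub.peers(topic)
  console.log('Topic peers:', topicPeers.length ? topicPeers : 'None')
}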

@achingbrain
Member Author

Closing due to staleness
