
test: disable mdns in exchange files #70

Merged: 2 commits merged into master on May 31, 2019

test: disable mdns in exchange files #70

merged 2 commits into from
May 31, 2019

Conversation

jacobheun (Contributor)

The tests are failing against the latest go-ipfs release candidate (0.4.21-rc.3), ipfs/kubo#6389. This is due to an issue with go announcing its address over MDNS with port 4001 even though the daemon is actually listening on a different port. Since the MDNS discovery happens, and fails, before the dial in the test, the subsequent connect attempt also fails because JS applies a blacklisting backoff to failed connection attempts.

While this doesn't solve the underlying issue, we shouldn't have MDNS turned on for these tests.

My recommendation: disable MDNS in the tests for now and file an issue for fixing MDNS, so this doesn't block the release; it's not really a critical issue. In reality we should just move both implementations to the new MDNS spec rather than spend time on a fix for this. The default ipfs daemon runs on port 4001, so this shouldn't be an issue in most cases, and the interop fix was a recent update for js.
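
For context, disabling MDNS here comes down to a small config override on the daemons the test spawns. A minimal sketch, assuming the test harness accepts a config object when spawning daemons (the variable name and how it is passed are illustrative, not the actual test code):

// Config override for the daemons spawned by the exchange-files test.
// Discovery.MDNS.Enabled is the config key used by both go-ipfs and js-ipfs.
const daemonConfig = {
  Discovery: {
    MDNS: {
      Enabled: false // keep test peers from discovering each other over MDNS
    }
  }
}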

jacobheun requested a review from Stebalien on May 30, 2019, 20:27
Stebalien (Member)

> While this doesn't solve the underlying issue, we shouldn't have MDNS turned on for these tests.

We should also look into fixing our backoff logic. Ideally:

  1. We'd back off specific addresses in the swarm.
  2. We'd back off peer routing in the DHT.

That way, if we learn new addresses, we'll dial those.
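
A hypothetical sketch of what backing off individual multiaddrs (point 1) could look like; this is not the existing libp2p switch code, and the class and method names are illustrative:

// Hypothetical per-multiaddr backoff: failures are tracked per address,
// so a newly learned address for the same peer is still dialable.
class AddressBackoff {
  constructor (baseMs = 5000) {
    this.baseMs = baseMs
    this.failures = new Map() // multiaddr string -> { count, nextAttempt }
  }

  recordFailure (addr) {
    const entry = this.failures.get(addr) || { count: 0, nextAttempt: 0 }
    entry.count += 1
    // Exponential backoff: 5s, 10s, 20s, ...
    entry.nextAttempt = Date.now() + this.baseMs * 2 ** (entry.count - 1)
    this.failures.set(addr, entry)
  }

  canDial (addr) {
    const entry = this.failures.get(addr)
    return !entry || Date.now() >= entry.nextAttempt
  }

  // Learning a fresh address (or a manual connect) clears its history.
  reset (addr) {
    this.failures.delete(addr)
  }
}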

MDNS: {
  Enabled: false
},
webRTCStar: {
Member

go-ipfs will just ignore this, IIRC.

jacobheun (Contributor, Author)

Correct, and we don't really need it for js here either. Lazy copy-paste on my part; I've removed it from both.

Stebalien (Member) left a comment

All fixed! Thanks!

jacobheun (Contributor, Author)

> We'd back off specific addresses in the swarm.

That's a good point. We're doing it at the peer level now and not doing any resetting when addresses are updated. This shouldn't take much to transition to for js.

> We'd back off peer routing in the DHT.

What were your thoughts on this? I'm not sure I follow.

Stebalien (Member)

My thinking is that we should completely drop the per-peer logic. The point of the backoff is to avoid repeatedly retrying something that keeps failing.

So, we can:

  1. Rate limit finding peers in the DHT. If we've searched for a peer within the last N minutes (and we didn't cancel the search half-way through), return the cached results.
  2. Add each individual multiaddr to a blacklist (per peer) instead of blocking the entire peer. That way, if we get new addresses, we can skip the ones that don't work.

But you're right, the simple solution is to just remove the backoff when we learn a new address. In go-ipfs, we reset the backoff when the user calls the ipfs connect command manually.
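
A rough sketch of the rate limit in (1), caching completed peer-routing results for N minutes; the cache shape, TTL, and function names are assumptions for illustration, not a real libp2p API:

// Cache completed peer-routing lookups and reuse them, so repeated
// find-peer calls for the same peer don't hammer the DHT.
const CACHE_TTL = 10 * 60 * 1000 // N = 10 minutes, arbitrary for the sketch
const peerRoutingCache = new Map() // peer id string -> { addrs, expires }

async function findPeerCached (peerId, findPeer) {
  const cached = peerRoutingCache.get(peerId)
  if (cached && Date.now() < cached.expires) {
    return cached.addrs
  }
  // findPeer is the underlying (uncached) DHT lookup supplied by the caller.
  // Only a search that runs to completion gets cached, matching the
  // "didn't cancel the search half-way through" caveat above.
  const addrs = await findPeer(peerId)
  peerRoutingCache.set(peerId, { addrs, expires: Date.now() + CACHE_TTL })
  return addrs
}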

jacobheun (Contributor, Author)

Ah, yeah, caching the DHT query would be helpful. I created an issue to transition to a multiaddr-based backoff for js: https://github.com/libp2p/js-libp2p-switch/issues/339. If it becomes a more prevalent issue before I can get it fixed, we could replicate go-ipfs's backoff reset behavior pretty easily for js-ipfs.

jacobheun merged commit 2bd6249 into master on May 31, 2019
jacobheun (Contributor, Author)

The test failures are unrelated to this and exist in master. I created an issue to track that failure, #71.
