Mplex aborts connections that I want to keep #1925
Comments
Mplex doesn't have any reliable methods to protect itself from bad actors on the network. The error you are seeing occurs when a remote peer sends lots of messages for streams that don't exist. You can raise the threshold that stops the connection from being closed by increasing the disconnectThreshold setting (default: 5, i.e. more than 5 messages per second for missing streams will cause the connection to be closed). Yamux is a lot more reliable and robust; is there any reason you're not using it?
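For reference, a minimal sketch of where that option lives in the muxer config (assuming the @libp2p/mplex factory API used by libp2p 0.46.x; the value 20 is arbitrary):

```js
import { createLibp2p } from 'libp2p'
import { mplex } from '@libp2p/mplex'

const node = await createLibp2p({
  streamMuxers: [
    mplex({
      // default is 5 messages/second for missing streams; raise it if
      // legitimate peers are tripping the limit and closing the connection
      disconnectThreshold: 20
    })
  ]
  // ...transports, connection encrypters, etc. omitted
})
```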
Thanks for the response. OK, that makes sense. Increasing the disconnectThreshold should solve it. Yamux in js still suffers a big performance penalty (ChainSafe/js-libp2p-yamux#42), so I have been reluctant to migrate.
Possibly related: #1935. Can you try with the latest version and see if the problem still exists?
I updated to the latest version. It seems like the majority of the issues were fixed with that patch. However, I am still not able to stop a protocol handler while a node is running and restart it without dropping the connection for some reason. I think I need to investigate this more. But are there tests in js-libp2p where individual protocol handlers are started/stopped while nodes are running? Or am I testing something outside the intended functionality?
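For context, the pattern I am testing is roughly this (a sketch using libp2p's handle()/unhandle() API; the protocol id is made up):

```js
// hypothetical protocol id used only for illustration
const protocol = '/my-app/1.0.0'

// register a handler on a running node
await node.handle(protocol, ({ stream }) => {
  // process incoming messages on the stream
})

// later, stop handling the protocol without stopping the node;
// the remote peer may still have streams open for it at this point
await node.unhandle(protocol)

// ...and register it again
await node.handle(protocol, ({ stream }) => {
  // new handler
})
```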
Version:
0.46.1
Platform:
Mplex
Severity:
Medium
Description:
I am trying to migrate to 0.46.1, but a large portion of the tests I run fail because the Mplex rate limiter consumption with key 'missing-stream' eventually aborts with
new Error('Too many messages for missing streams')
which terminates the connection on the assumption that it is a "bad" connection. Previously the behaviour was to simply drop the messages (?)
I have not fully understood why this behaviour occurs for me, but one case I have observed is a test where I stop a protocol handler (while receiving messages); then the connection also seems to be dropped altogether because of this (after stopping but before restarting).
It seems that Mplex is trying to find a stream in this._streams.initiators. Does this mean I am receiving messages for a stream that I should have initiated? I did not find any notes regarding this in the migration docs, nor in the PR where I assume this behaviour was introduced.
Are there any pointers on how I could mitigate this kind of issue, for example by making sure that I am not pushing messages to an outbound stream before some condition is fulfilled? Or are there other approaches to work around this?
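One mitigation I am considering is to only start writing once the outbound stream has actually been negotiated, along these lines (a rough sketch; the protocol id and surrounding variables are made up):

```js
import { pipe } from 'it-pipe'
import { fromString } from 'uint8arrays/from-string'

// `node` is a running libp2p node and `remoteMultiaddr` points at the peer
const connection = await node.dial(remoteMultiaddr)

// newStream resolves only after the protocol has been negotiated, so
// anything written after this point targets a stream the remote side
// already knows about
const stream = await connection.newStream('/my-app/1.0.0')

await pipe(
  [fromString('hello')],
  stream
)
```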
Steps to reproduce the error:
I am not sure yet how to trigger this error consistently.