Intermittent "Connection reset by peer" for in-process channel in a recv() with a queued message but no sender #29
That's the unwrap on line 385 that's panicking.
jdm changed the title (Jan 18, 2016)
from: Intermittent "Connection reset by peer" for in-process channel with both ends intact
to: Intermittent "Connection reset by peer" for in-process channel with a queued message but no sender
jdm changed the title (Jan 18, 2016)
from: Intermittent "Connection reset by peer" for in-process channel with a queued message but no sender
to: Intermittent "Connection reset by peer" for in-process channel in a recv() with a queued message but no sender
This was referenced Jan 18, 2016
pcwalton added a commit to pcwalton/ipc-channel that referenced this issue on Jan 21, 2016:
This reduces the probability that the receiver receives errors when we close our end of the socket with data remaining. Might fix servo#29.
pcwalton added a commit to pcwalton/ipc-channel that referenced this issue on Jan 22, 2016:
This reduces the probability that the receiver receives errors when we close our end of the socket with data remaining. There is deadlock potential with this patch, because turning on `SO_LINGER` causes `close()` to block until the receiver has received all the data. If deadlocks happen, a workaround will be to close sockets in a separate thread. This is ugly and slow, so I don't want to do that unless we need to. Might fix servo#29.
bors-servo pushed a commit that referenced this issue on Jan 22, 2016:
Turn on `SO_LINGER` for client communication sockets. This reduces the probability that the receiver receives errors when we close our end of the socket with data remaining. There is deadlock potential with this patch, because turning on `SO_LINGER` causes `close()` to block until the receiver has received all the data. If deadlocks happen, a workaround will be to close sockets in a separate thread. This is ugly and slow, so I don't want to do that unless we need to. Might fix #29. r? @jdm
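The fix above leans on standard POSIX socket behavior rather than anything specific to ipc-channel. A minimal Python sketch of the scenario (assuming Linux Unix-domain stream sockets; this is an illustration, not the crate's actual Rust code):

```python
import socket
import struct

# A connected pair of Unix-domain stream sockets, standing in for the
# channel's sender and receiver ends.
tx, rx = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# SO_LINGER takes a `struct linger { int l_onoff; int l_linger; }`.
# With l_onoff = 1, close() blocks (up to l_linger seconds) until the
# queued data has been handed off, instead of returning immediately.
tx.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 5))

tx.send(b"queued message")
tx.close()  # the sender goes away with data still queued

# The receiver can still drain the queued message after the sender closed.
print(rx.recv(1024))  # -> b'queued message'
```

Per the commit message, without `SO_LINGER` the `close()` returns immediately and, depending on timing, the receiver may observe an error rather than the queued bytes; with lingering enabled, `close()` blocks until the data is delivered or the timeout expires, which is also the source of the deadlock concern noted above.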
bors-servo pushed a commit that referenced this issue on Jan 23, 2016:
Turn on `SO_LINGER` for client communication sockets. This reduces the probability that the receiver receives errors when we close our end of the socket with data remaining. There is deadlock potential with this patch, because turning on `SO_LINGER` causes `close()` to block until the receiver has received all the data. If deadlocks happen, a workaround will be to close sockets in a separate thread. This is ugly and slow, so I don't want to do that unless we need to. Might fix #29. r? @jdm
bors-servo pushed a commit that referenced this issue on Jan 25, 2016:
Turn on `SO_LINGER` for client communication sockets. This reduces the probability that the receiver receives errors when we close our end of the socket with data remaining. There is deadlock potential with this patch, because turning on `SO_LINGER` causes `close()` to block until the receiver has received all the data. If deadlocks happen, a workaround will be to close sockets in a separate thread. This is ugly and slow, so I don't want to do that unless we need to. Might fix #29. r? @jdm
pcwalton added a commit to pcwalton/ipc-channel that referenced this issue on Jan 25, 2016:
This reduces the probability that the receiver receives errors when we close our end of the socket with data remaining. There is deadlock potential with this patch, because turning on `SO_LINGER` causes `close()` to block until the receiver has received all the data. If deadlocks happen, a workaround will be to close sockets in a separate thread. This is ugly and slow, so I don't want to do that unless we need to. Might fix servo#29.
We got a panic in this code in Servo:
The panic:
This is on Linux.