
Shutdown socket writes before close #874

Closed · wants to merge 1 commit

Conversation

tilgovi (Collaborator) commented Sep 6, 2014

Calling shutdown(2) with SHUT_WR before close(2) ensures that a FIN
packet is sent before any RST packet, so clients and proxies get a
clean shutdown signal when the server terminates the connection while
an upload is pending.

Fix #872
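For context, a minimal sketch of what gunicorn's util.close looks like with this change applied; the surrounding try/except socket.error handler is assumed from the existing helper and is not part of the diff shown below:

```python
import socket

def close(sock):
    # Sketch of util.close with the patch applied (surrounding code assumed).
    try:
        # Half-close the write side first: the kernel emits a FIN, so the
        # peer sees a clean end-of-stream instead of only a RST.
        sock.shutdown(socket.SHUT_WR)
        sock.close()
    except socket.error:
        # Any error here (e.g. ENOTCONN from shutdown) is ignored.
        pass
```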

tilgovi (Collaborator, PR author) commented Sep 6, 2014

Only lightly tested using tcpdump. I observe a FIN packet being sent before an RST (if any is sent at all). Without this patch, only the RST is sent.

Test instructions for the patched version are here: https://gist.github.com/tilgovi/9021a996bdaf21595f12.
Before the patch, no FIN is sent.

tilgovi (Collaborator, PR author) commented Sep 6, 2014

As far as I know this patch should be fine, but I can't tell whether util.close can be called on a socket-like object that doesn't have a shutdown method. I think it can't, and that every worker that uses this call provides something supporting this API. But just in case, I'll wait for someone else to have a look.

AFAIK there is no problem with sending a shutdown before close like this. To be extra safe, though, maybe I should use two separate try/except blocks so that close is always called.
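For illustration, the "extra safe" variant mentioned above would look roughly like this (a sketch, not the committed code):

```python
import socket

def close(sock):
    # Shutdown and close are guarded independently, so even if shutdown()
    # raises (e.g. ENOTCONN when the peer already disconnected), close()
    # still runs and the file descriptor is released.
    try:
        sock.shutdown(socket.SHUT_WR)
    except socket.error:
        pass
    try:
        sock.close()
    except socket.error:
        pass
```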

@@ -271,6 +271,7 @@ def set_non_blocking(fd):

 def close(sock):
     try:
+        sock.shutdown(socket.SHUT_WR)
benoitc (Owner):

Shouldn't we wait for a read right after?

tilgovi (Collaborator, PR author):

You mean add a lingering close to gunicorn, as in the thread @c-nichols raised at the nginx forum and linked over here in #872?

benoitc (Owner):

Yeah. The only problem I see is how to handle the normal case: everything has been received and then we close. In that case we don't need to wait for the final EOF, since waiting would block.

tilgovi (Collaborator, PR author):

I think if we do this we will have to have a linger pool somewhere and select on it. If we get back readable but zero bytes, we close. I don't think it is a problem if we don't close immediately, especially if we allow options for configuring this behavior.
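A rough sketch of such a linger pool, using the stdlib selectors module; the class and method names are made up for illustration and are not gunicorn API:

```python
import selectors
import socket

class LingerPool:
    """Holds half-closed sockets until the peer finishes or disconnects."""

    def __init__(self):
        self._sel = selectors.DefaultSelector()

    def add(self, sock):
        """Send our FIN and park the socket until the peer's EOF arrives."""
        try:
            sock.setblocking(False)
            sock.shutdown(socket.SHUT_WR)       # FIN goes out now
        except OSError:
            sock.close()                        # peer already gone
            return
        self._sel.register(sock, selectors.EVENT_READ)

    def check(self, timeout=0):
        """Close sockets that became readable and returned zero bytes (EOF)."""
        for key, _events in self._sel.select(timeout):
            sock = key.fileobj
            try:
                data = sock.recv(8192)
            except OSError:
                data = b""                      # error: treat as EOF
            if not data:                        # b"" means the peer sent FIN
                self._sel.unregister(sock)
                sock.close()
            # Otherwise the peer is still draining its upload; discard the
            # bytes and keep waiting for EOF on a later check().
```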

benoitc (Owner):

That's a good idea. This pool can be checked from time to time using a dedicated thread or something like it.

Not sure if it's totally relevant, but here is how I check if a socket is still connected in socketpool:
https://github.com/benoitc/socketpool/blob/master/socketpool/util.py#L84-L144
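For reference, a portable sketch of that kind of liveness check; this is my own illustration of the general approach, not the socketpool code at the link above:

```python
import select
import socket

def is_connected(sock):
    """Best-effort check that the peer has not closed the connection.

    If select() reports the socket readable and a MSG_PEEK read returns
    b"", the peer's FIN has already arrived; readable-with-data or
    not-readable-at-all means the connection is still alive.
    """
    try:
        readable, _, _ = select.select([sock], [], [], 0)
        if not readable:
            return True
        return sock.recv(1, socket.MSG_PEEK) != b""
    except OSError:
        return False        # reset or invalid descriptor: treat as dead
```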

benoitc (Owner) commented Sep 12, 2014

So yes, the idea would be to shut down the socket on write like you did, put it in a linger pool where it is checked for readability until the read returns EOF, and then definitively close the socket. Maybe that work could be started along with the work on #792.

benoitc (Owner) commented Sep 22, 2014

@tilgovi should we do that for 19.2, or do you want to postpone it to 20.0 in October?

tilgovi (Collaborator, PR author) commented Sep 22, 2014

Let's postpone. At least, I have no time to work on it now.

benoitc (Owner) commented Sep 22, 2014

ok :) postponing it for 19.3

benoitc added this to the R19.3 milestone Sep 22, 2014
benoitc (Owner) commented Dec 21, 2014

While working on #962 I was wondering whether we couldn't do the following with the current design to make sure that the sockets are killed (a rough sketch follows below):

  1. Put closed sockets in the main worker poller, or a dedicated poller/worker, with a close callback associated.
  2. On read, or after a timeout, we definitively close them.

It could be done in the main thread. Or, if we are worried about the time it could take, we could have a dedicated linger thread/worker.
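A rough sketch of that variant, with a deadline per socket and a close callback; all names here are illustrative, not gunicorn internals:

```python
import time

class LingerEntry:
    """A half-closed socket waiting in the poller, plus its close deadline."""

    def __init__(self, sock, on_close, linger=2.0):
        self.sock = sock
        self.on_close = on_close                 # callback run after close
        self.deadline = time.monotonic() + linger

def reap(entries, readable):
    """Close entries whose socket became readable or whose deadline passed.

    `entries` is the linger list; `readable` is the set of sockets the
    poller reported as ready for read. Returns the entries still pending.
    """
    now = time.monotonic()
    pending = []
    for entry in entries:
        if entry.sock in readable or now >= entry.deadline:
            entry.sock.close()
            entry.on_close(entry.sock)
        else:
            pending.append(entry)
    return pending
```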

Thoughts?

benoitc (Owner) commented May 10, 2015

I was thinking about this issue and wonder if we couldn't solve it with a design like this:

  • In a linger pool, all sockets are put in an event loop using the selector API.
  • Each time we come back to the accept loop, we check whether any socket in the event loop is readable; if so, we definitively close it using the shutdown method followed by the close method (see the sketch below).

I think it could work, but it will add extra time when accepting a connection. Maybe we should maintain a thread? Or just put the linger pool inside the arbiter?

Also, how do we handle the gevent and eventlet workers? We have little control over them since we use their server objects (I wonder if we shouldn't provide our own server implementation at some point...). Maybe we could ignore them for the next release?

Let me know what you think about that design anyway.
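To make the per-iteration cost concrete, this is roughly how the check could be wired into a simplified accept loop, reusing the hypothetical LingerPool sketched earlier; none of this is gunicorn's actual worker loop:

```python
import selectors

def serve(listener, handle, linger_pool):
    """Simplified accept loop with a linger-pool sweep on every iteration.

    `handle(client)` is assumed to return True when the response is done
    and the connection should be half-closed and parked in the pool.
    """
    sel = selectors.DefaultSelector()
    sel.register(listener, selectors.EVENT_READ)
    while True:
        for key, _events in sel.select(timeout=1.0):
            client, _addr = key.fileobj.accept()
            if handle(client):
                linger_pool.add(client)   # sends FIN, waits for peer EOF
        linger_pool.check(timeout=0)      # the extra work added per loop
```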

benoitc (Owner) commented May 10, 2015

Superseded by #1025; all discussion now happens in that ticket.

Successfully merging this pull request may close these issues.

Connection not closed properly after responding with 401