Add transport.closing property #248
Comments
+1
The fact that '.closed' will be set while the transport is in the process of being closed (not already closed) bothers me a bit, but I think I'm OK with that. Would be great to hear what @gvanrossum and @Haypo think about this first.
Sorry, I have no bandwidth left until PEP 484 is squared away. --Guido van Rossum (python.org/~guido)
Absolutely NP. We can consider this even after beta-1, right?
For such a small thing, yes. --Guido van Rossum (python.org/~guido)
@1st1 Sorry, I don't think we need [...]. And, as I've mentioned above, the asyncio stream writer may use [...].
Wait, I'm not sure that it is a good design to not raise an exception here. Would it be an issue to modify transport.write() to raise an exception if transport.close() was just called? What is the use case of calling write() just after close()?
The pause_reading() and resume_reading() methods of transports already raise a RuntimeError if the transport is being closed. IMHO the pause_writing() and resume_writing() methods of StreamReaderProtocol and SubprocessStreamProtocol should also raise an exception if the transport is being closed. So I agree that a closing property (or method?) is needed. Example of a patch (PoC) that checks the closing attribute of the transport: https://gist.github.com/haypo/a8a38ae21d50cd42adce (proactor_events.py should also be modified). If we block write() after close(), should we also block write_eof()? Currently, write_eof() can be called on a closed socket. It probably fails, since the self._sock attribute is set to None in _call_connection_lost().
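For illustration only (this is not the gist linked above), a minimal sketch of the kind of check being discussed, assuming the transport tracks an internal `_closing` flag:

```python
class SketchTransport:
    """Illustrative only: refuse writes on a transport that has started
    closing, assuming an internal _closing flag like the one asyncio's
    selector transports keep."""

    def __init__(self):
        self._closing = False
        self._buffer = bytearray()

    def close(self):
        # Real transports also schedule connection_lost() on the next
        # loop iteration; here we only flip the flag.
        self._closing = True

    def write(self, data):
        if self._closing:
            raise RuntimeError('write() called on a closing transport')
        self._buffer.extend(data)
```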
This is exactly my problem as well. But I would go one step further: StreamWriter.write() should raise an exception if the underlying transport is closed or closing.
I created the pull request #257 to fix this issue. I have a question: should transport.closing return True or False when the transport is already closed (which is different from being in the process of closing)?
My pull request disallows write operations on transports when the transport is closing. It may break applications. I would feel more confident if you could try it on your favorite asyncio library or application.
+1
Someone needs to make a PR that adds the property discussed above.
Please see PR #291.
Closing the issue -- #291 has been merged. Thanks!
In the aiohttp project I have an issue (aio-libs/aiohttp#370).
Long story short: for handling static files aiohttp uses code along the lines of the sketch below.
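A rough sketch of that pattern (illustrative names only; `resp` is assumed to offer aiohttp-style `write()` and `drain()`, so this is not the actual aiohttp handler):

```python
import asyncio

CHUNK_SIZE = 8192  # illustrative value


@asyncio.coroutine
def send_file(resp, filepath):
    # Sketch of the chunked write/drain loop described above.
    with open(filepath, 'rb') as f:
        chunk = f.read(CHUNK_SIZE)
        while chunk:
            resp.write(chunk)        # ends up in transport.write()
            yield from resp.drain()  # pauses only if the write buffer is full
            chunk = f.read(CHUNK_SIZE)
```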
When the client closes the HTTP connection (by a socket shutdown, for example), `transport._force_close(...)` schedules `_call_connection_lost` for the next loop iteration and sets the `transport._closing` flag (as well as `transport._conn_lost`). The actual transport close happens on that next iteration, but the aiohttp static file handler has no chance to be informed that the transport is closing: `stream.write()` and the underlying `transport.write()` don't check the `._closing` flag and always succeed. `transport.write()` does check `._conn_lost`, but it only logs a message instead of raising an exception -- and that behavior is pretty correct.

The aiohttp static file handler sends no data on each `resp.write(chunk)` call, the stream buffer is never overloaded, and `yield from resp.drain()` always returns without pausing. Thus the whole multi-megabyte file may be iterated over and pushed into the stream. The actual data is of course never sent over the wire, but the process a) takes longer than required and b) sends a lot of `'socket.send() raised exception.'` warning messages to the asyncio logger.

I propose to add a public read-only bool property `.closing` (returning the internal `._closing` value). It would give me a way to check whether the transport is in the middle of its closing procedure.
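A minimal sketch of the property as proposed here (simply exposing the internal `._closing` value; the class name is illustrative):

```python
class TransportSketch:
    """Illustrative only: real transports set self._closing in close()
    and _force_close()."""

    def __init__(self):
        self._closing = False

    @property
    def closing(self):
        """True once the transport has started closing."""
        return self._closing
```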
`aiohttp.StreamWriter` should also be modified: I guess a `.closing` property should be added to `StreamWriter` to mimic the transport. The `StreamWriter.drain()` coroutine should check `stream.closing` first and call `yield from asyncio.sleep(0)` so that the previously scheduled `transport._call_connection_lost` runs first (see the sketch at the end of this message).

Sorry for the long message, I hope I've described my problem well enough. Feel free to ask if my (rather complex) scenario is still unclear.
If no objections I will prepare a patch.
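For concreteness, a minimal sketch of the `drain()` change described above (names are illustrative, and it assumes the transport exposes the proposed `closing` property):

```python
import asyncio


class StreamWriterSketch:
    """Illustrative only: mirrors the drain() change proposed above."""

    def __init__(self, transport):
        self._transport = transport

    @asyncio.coroutine
    def drain(self):
        if getattr(self._transport, 'closing', False):
            # Yield once so the already scheduled
            # transport._call_connection_lost callback can run first.
            yield from asyncio.sleep(0)
        # ... the usual flow-control wait would follow here.
```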