ASGI support #8233
Comments
We don't support either... (a quick search suggests it wouldn't even be possible to support WSGI, as it's sync only). In the discussion you linked, Andrew even says that performance may be lower with ASGI: #2902 (comment). As per those previous discussions, I don't see any major advantages to implementing this, so I doubt anyone here will want to invest time into it. However, if someone wants to give it a go, we'll consider merging it. If the integration is as simple as the attempt in 2017, then it looks reasonable: https://github.com/aio-libs/aiohttp/pull/2035/files
I might have misunderstood something then. I noticed that aiohttp supports Gunicorn (https://docs.aiohttp.org/en/stable/deployment.html#nginx-gunicorn) and assumed that, since Gunicorn is a WSGI server, aiohttp somehow uses (or abuses) WSGI to communicate with Gunicorn.
OK, Gunicorn integration is its own thing: https://github.com/aio-libs/aiohttp/blob/master/aiohttp/worker.py
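For reference, this is how the aiohttp deployment docs wire the two together: Gunicorn only manages worker processes, while aiohttp's own worker class speaks HTTP, so no WSGI is involved. (`hello_app:app` is a hypothetical module path for illustration.)

```shell
# Gunicorn supervises the processes; aiohttp's custom worker class
# handles HTTP itself, bypassing WSGI entirely.
gunicorn hello_app:app --bind localhost:8080 --worker-class aiohttp.GunicornWebWorker
```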
Yes, the docs compare it with supervisord. I think it just lets one have parallelism through multiple processes. My understanding is that ASGI is an attempt to reinvent it for async interfaces.

However, aiohttp's HTTP parser is a C extension, so perhaps that's the reason Andrew thought it would have a performance hit for aiohttp-based apps: an extra layer of indirection, with our parser replaced by a potentially slower one. Note that this was when we were relying on a different underlying parser implementation, and things may be slightly different post-migration to llhttp, but I wouldn't expect it to be noticeable. Whoever ends up working on this should also make and document a performance comparison.

I'm not necessarily against the implementation, provided it isn't contributed in a way that brings a lot of maintenance burden. It'd also have to be fully covered with tests.
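To make the "extra layer of indirection" concrete, here is a hypothetical sketch (not aiohttp's actual API, and deliberately dependency-free) of the kind of adapter an ASGI integration would need. The toy handler interface — an async callable taking a path and returning `(status, body)` — is invented for illustration; the point is that every request and response must be translated through ASGI's message dicts, which is where the feared overhead would come from.

```python
def asgi_adapter(handler):
    """Wrap a toy async handler (hypothetical interface: takes a path
    string, returns a (status, body-bytes) pair) as an ASGI application.
    Each translation step below is one layer of the indirection
    discussed above."""
    async def app(scope, receive, send):
        if scope["type"] != "http":
            raise RuntimeError("only HTTP is supported in this sketch")
        # Drain the request body from ASGI's event stream
        # (this toy handler ignores it).
        more_body = True
        while more_body:
            message = await receive()
            more_body = message.get("more_body", False)
        # Call the framework-side handler, then translate its result
        # back into ASGI response events.
        status, body = await handler(scope["path"])
        await send({
            "type": "http.response.start",
            "status": status,
            "headers": [(b"content-length", str(len(body)).encode())],
        })
        await send({"type": "http.response.body", "body": body})
    return app
```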
Is your feature request related to a problem?
As far as I know most async Python web frameworks nowadays use ASGI with servers like Uvicorn. However, as far as I can tell aiohttp server does not support ASGI.
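For context, the ASGI interface itself is tiny: an application is just an async callable taking `scope`, `receive`, and `send`, which is what servers like Uvicorn expect. A minimal sketch:

```python
# A minimal ASGI application: no framework required, since ASGI is
# just a calling convention between server and application.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    # Start the response with status and headers...
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    # ...then send the body as a separate event.
    await send({"type": "http.response.body", "body": b"Hello, ASGI"})
```

Any ASGI server can then serve it, e.g. `uvicorn module:app`.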
Describe the solution you'd like
It would be nice if aiohttp could support ASGI, so that it can benefit from features provided by ASGI web servers (see for example #5999 (comment)).
There may also be some performance benefits to ASGI over WSGI, but I don't know enough about the internals of how aiohttp interfaces with WSGI to make specific claims about this.
Describe alternatives you've considered
aiohttp isn't fundamentally broken without this feature, so doing nothing is always an alternative.
Related component
Server
Additional context
There was previous discussion of this in 2018 (#2902). The final decision there seemed to be to wait and see. Since then ASGI has become much more stable and much more widely used, so I think it is time to revisit this decision.