PoolTimeout when num tasks in asyncio.gather() exceeds client max_connections #1171
Yup, there's definitely an issue here to be dealt with. To get a bit more info, I tried this...

```python
import asyncio

import httpx


async def get_url(client, url):
    print("GET", url)
    print(await client._transport.get_connection_info())
    print(await client.get(url))
    print(await client._transport.get_connection_info())


async def main() -> None:
    url = "https://www.example.com"
    max_connections = 2
    timeout = httpx.Timeout(5.0, pool=5.0)
    limits = httpx.Limits(max_connections=2, max_keepalive_connections=0)
    client = httpx.AsyncClient(timeout=timeout, limits=limits)
    async with client:
        tasks = []
        for _ in range(max_connections + 1):
            tasks.append(get_url(client, url))
        await asyncio.gather(*tasks)


if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(main())
    finally:
        loop.close()
```

Which results in...

```
GET https://www.example.com
{}
GET https://www.example.com
{}
GET https://www.example.com
{}
<Response [200 OK]>
{'https://www.example.com': ['HTTP/1.1, IDLE', 'HTTP/1.1, ACTIVE']}
<Response [200 OK]>
{'https://www.example.com': ['HTTP/1.1, IDLE', 'HTTP/1.1, IDLE']}
```

We can see the connections returning from ACTIVE to IDLE, but the keep-alive connections are not being used by the pending request. The issue here is that the pending request is blocking on the connection semaphore, waiting to start a new connection, and that semaphore is not released by the fact that we've now got an available keep-alive connection. Will need a bit of careful thinking about, but it clearly needs resolving, yup. Thanks for raising this. |
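One general way to resolve this kind of stall is to guard the pool with a condition variable instead of a bare semaphore, so that returning a keep-alive connection wakes a pending waiter just like a freed slot does. This is a toy sketch of the idea, not httpx's actual internals; `ConnectionSlotPool` and its string "connections" are illustrative stand-ins:

```python
import asyncio


class ConnectionSlotPool:
    """Toy pool: waiters are woken both when a slot frees up
    and when an idle keep-alive connection is returned."""

    def __init__(self, max_connections: int) -> None:
        self._max = max_connections
        self._in_use = 0
        self._idle: list[str] = []  # stand-in for idle keep-alive connections
        self._cond = asyncio.Condition()

    async def acquire(self) -> str:
        async with self._cond:
            # Wait until either an idle connection exists or a slot is free.
            while not self._idle and self._in_use >= self._max:
                await self._cond.wait()
            conn = self._idle.pop() if self._idle else f"conn-{self._in_use}"
            self._in_use += 1
            return conn

    async def release(self, conn: str, keepalive: bool = True) -> None:
        async with self._cond:
            self._in_use -= 1
            if keepalive:
                self._idle.append(conn)
            # Crucially, notify a waiter in both cases.
            self._cond.notify()
```

With this structure, a third task waiting on a two-connection pool is woken as soon as either earlier task releases, whether the connection is closed or kept alive.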
I was going to mention the same. I also tested reading the whole response body (which should release the connection), and also closing the response manually, but the issue persists either way. |
Ah, good call checking the connection state! Is there existing logic that intends to have pending requests make use of existing idle connections, and it's just not working as expected? Or does the code as written only intend for pending requests to create new connections? Curious where that logic is, if you can link me @tomchristie |
Hello everyone, just want to make sure that this is what I'm looking for. The server I want to send requests to has a limited number of allowed connections. Currently I limit the number of async tasks by using … Thanks a lot! fin swimmer |
I'm planning on getting stuck into this one pretty soon, yup. |
Until this is resolved, is there any reasonable way to work around this? Maybe we use our own `asyncio.Semaphore`?

```python
async def main() -> None:
    url = "https://www.example.com"
    max_connections = 2
    timeout = httpx.Timeout(5.0, pool=2.0)
    limits = httpx.Limits(max_connections=max_connections)
    client = httpx.AsyncClient(timeout=timeout, pool_limits=limits)
    semaphore = asyncio.Semaphore(max_connections)

    async def aw_task(aw):
        async with semaphore:
            return await aw

    async with client:
        tasks = []
        for _ in range(max_connections + 1):
            tasks.append(aw_task(client.get(url)))
        await asyncio.gather(*tasks)
```
|
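The same pattern works for any awaitable, not just httpx calls: cap the number of coroutines running at once so the client's pool limit is never exceeded. A self-contained sketch using only the standard library, where `fetch` is a stand-in for a real HTTP request:

```python
import asyncio


async def bounded_gather(coros, limit: int):
    """Run coroutines concurrently, at most `limit` at a time."""
    sem = asyncio.Semaphore(limit)

    async def runner(coro):
        async with sem:
            return await coro

    # gather() preserves input order in its results.
    return await asyncio.gather(*(runner(c) for c in coros))


async def fetch(i: int) -> int:
    # Stand-in for an HTTP request.
    await asyncio.sleep(0.01)
    return i


results = asyncio.run(bounded_gather((fetch(i) for i in range(5)), limit=2))
print(results)  # [0, 1, 2, 3, 4]
```

Note that the coroutine objects are created eagerly, but each request only starts executing once `runner` awaits it inside the semaphore, so no more than `limit` requests are in flight at a time.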
I can't be certain (still debugging things), but I believe this to be the cause of issues I'm seeing as well. In my case, it's not using … It appears that the connection attempt is happening, but never actually connecting. Also, the symptoms are sporadic - in my case 1 in 10 requests fails, and there is no pattern to what type of request fails. After a few weeks of testing, downgrading from … |
See #1741 |
Is this still a problem in 0.19 or 1.0.0? I tried running @tomchristie's code sample but couldn't replicate the behavior on 0.18.x or 0.19. We've held off updating beyond 0.17.1 due to this, but would really like to get back onto the latest. |
I can also confirm that I had this problem in 0.19 and downgrading to 0.17.1 solved the issue. |
Have confirmed that the given example now works in … |
Fixed in httpx 0.21, see encode/httpx#1171 (comment)
is there a default … |
Faced this error in 0.25.1. |
can confirm the regression ^ |
Perhaps related to encode/httpcore#823 in … Can you share your httpcore versions, and perhaps try different versions of httpcore? |
this was the fix:

```diff
- httpcore==1.0.1
- httpx==0.25.1
+ httpcore==0.18.0
+ httpx==0.25.0
```
|
Tested: version 0.25.1 does not have the OP's issue (version 0.23 does), but it has no … |
Same issue here. I'm able to reproduce it also with …

edit: It could be that it happens after some previous requests were cancelled while they were ongoing, but I'm not entirely sure.

edit2: Some more information: … |
I've a reproducer. Run this HTTP server script (a simple HTTP server that takes long to respond):

```python
import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route


async def homepage(request):
    await asyncio.sleep(10)
    return JSONResponse({})


app = Starlette(
    routes=[
        Route("/", homepage),
    ],
)

config = Config.from_mapping({})
config.bind = ["127.0.0.1:8001"]

asyncio.run(serve(app, config))
```

Then run this client code:

```python
import asyncio

import httpx
from anyio import create_task_group


async def main() -> None:
    async with httpx.AsyncClient(
        limits=httpx.Limits(max_connections=2),
        verify=False,
    ) as client:

        async def do_one_request() -> None:
            await client.get("http://localhost:8001/")

        # First, create many requests, then cancel while they are in progress.
        async with create_task_group() as tg:
            for i in range(5):
                tg.start_soon(do_one_request)
            await asyncio.sleep(0.5)
            tg.cancel_scope.cancel()

        # Starting another request will now fail with a `PoolTimeout`.
        await do_one_request()


asyncio.run(main())
```

Looks like the slots in the connection pool are not released during cancellation. This happens for me on both httpx 0.25.0 + httpcore 0.18.0 as well as on httpx 0.25.2 + httpcore 1.0.2. |
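The cancellation leak described above is avoided when the pool slot is handed back in a `finally` block, which runs even while the task is being cancelled. A minimal stdlib-only illustration of that pattern, not httpcore's actual code; `SlotPool` and `request` are hypothetical names:

```python
import asyncio


class SlotPool:
    """Toy connection-slot pool with cancellation-safe release."""

    def __init__(self, max_connections: int) -> None:
        self._sem = asyncio.Semaphore(max_connections)

    async def request(self, work):
        await self._sem.acquire()
        try:
            return await work()
        finally:
            # Runs even if `work` is cancelled mid-flight,
            # so the slot is always returned to the pool.
            self._sem.release()
```

If the release step is skipped on cancellation (for example, because cleanup itself is interruptible and gets cancelled too), the slots stay consumed and the next request waits until the pool timeout fires, which matches the `PoolTimeout` seen in the reproducer.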
Probably it's in the … |
I have verified that encode/httpcore#880 resolves this issue. Using server example at: #1171 (comment) |
Checklist

Describe the bug

If the number of tasks executed via `asyncio.gather(...)` is greater than `max_connections`, I get a `PoolTimeout`. It seems like maybe this is happening because the tasks that have completed aren't releasing their connections upon completion. I'm new to `asyncio`, so it's possible I'm doing something wrong, but I haven't been able to find any documentation or issues that cover this case definitively.

To reproduce

Expected behavior

I would expect all tasks to complete, rather than getting a `PoolTimeout` on the nth task, where `n = max_connections + 1`.

Actual behavior

Getting a `PoolTimeout` on the nth task, where `n = max_connections + 1`.

Debugging material

Environment

Additional context

I commented on this issue, but it's closed so figured it would be better to create a new one.