Should `Transport.request()` return a context manager? (#145)
So, this is a really valuable path to explore, but we do need to be aware of the knock-on implications. There are some big trade-offs to weigh up, and it might well be that it's a bridge too far in purity vs practicality. I'm fairly sure that having `transport.request()` return a context manager would impact the high-level streaming API.

Here's how we handle response streams at the moment:

```python
with httpx.stream(...) as response:
    for chunk in response.iter_text():
        print(chunk)
```

Now, if the transport returned a context manager, the redirect-handling loop would need to look something like this:

```python
with transport.request(...) as response:
    if response.is_redirect:
        # Read the response body, then break out of this context block,
        # and loop making another request.
        ...
    else:
        # We've got the final response that we want to return to the user.
        # *However*, returning a response from this point, or exiting the
        # block, will close the response, so we can return an ingested
        # non-streaming response just fine, but we *can't* return a
        # streaming response.
        ...
```

The implication is that the streaming API would need to change, so that any response streaming runs inside the context block. For example...

```python
def handle_stream(response):
    for chunk in response.iter_text():
        print(chunk)

response = httpx.stream(..., on_stream=...)
```

With an implementation like...

```python
with transport.request(...) as response:
    if response.is_redirect:
        # Read the response body, then break out of this context block,
        # and loop making another request.
        ...
    else:
        if on_stream:
            on_stream(response)
        # Return the response.
```

Anyways...

Pros:
Cons:
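For concreteness, here's a minimal runnable sketch of the `on_stream` callback design described above. The `stream()` function and `Response` class here are hypothetical stand-ins for the real `httpx` API, stripped down to show the control flow:

```python
def stream(chunks, on_stream=None):
    # Hypothetical client entry point, standing in for httpx.stream():
    # the response stream is only usable inside the request context, so
    # any streaming has to happen via the callback.
    class Response:
        def __init__(self, chunks):
            self._chunks = chunks
            self.text = None

        def iter_text(self):
            yield from self._chunks

    response = Response(chunks)
    if on_stream is not None:
        on_stream(response)  # stream inside the (notional) request context
    else:
        response.text = "".join(response.iter_text())  # ingest eagerly
    return response

collected = []

def handle_stream(response):
    for chunk in response.iter_text():
        collected.append(chunk)

stream(["hello, ", "world"], on_stream=handle_stream)
print(collected)  # prints ['hello, ', 'world']
```

The key property is that by the time `stream()` returns, the response body has either been fully ingested or fully consumed by the callback, so the request context can always be closed safely.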
@tomchristie I think `contextlib.ExitStack` could help here. Here's what your redirect example would look like:

```python
from contextlib import ExitStack

exit_stack = ExitStack()
response = exit_stack.enter_context(transport.request(...))

if response.is_redirect:
    # Read the response body, then break out of this context block,
    # and loop making another request.
    for _ in response[4]:  # stream
        pass
    exit_stack.close()
    return

# We've got the final response that we want to return to the user.
# Return a response which will close the request context on response close.
return Response(..., on_close=exit_stack.close)
```

Assuming `Response` accepts an `on_close` callback, this keeps the request context open exactly as long as the response is. Thoughts?
Also…
Sure, but I don't really see this as an alternative. Transport-bound and request-bound contexts are two different things the request API could support. Currently we support transport-bound contexts, and this proposal is about adding request-bound support.

(If using trio, for example, a nursery with the same lifespan as the transport won't behave the same way as a nursery tied to a request lifespan, and vice versa. You can cancel a request-level background task on closing the response if you have a request-level nursery, but you can't if all you have is a transport-level nursery. I don't think there's a way around this, at least as trio defines the nursery/task group semantics.)
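To illustrate the scoping point with something runnable, here's a sketch using plain asyncio tasks as a stand-in for a trio nursery (the trio-specific semantics are approximated; the point is that only a request-scoped owner can cancel the task when the response closes):

```python
import asyncio

async def main():
    cancelled = False

    async def background():
        # Stands in for a request-level background task, e.g. running
        # an ASGI app while the response is being streamed.
        nonlocal cancelled
        try:
            await asyncio.sleep(3600)
        except asyncio.CancelledError:
            cancelled = True
            raise

    # With a request-scoped "nursery" we hold a handle to the task and can
    # cancel it when the response closes. With only a transport-scoped
    # nursery, the task would outlive the response.
    task = asyncio.create_task(background())
    await asyncio.sleep(0)  # let the task start
    task.cancel()  # "response closed": tear down the request-level task
    try:
        await task
    except asyncio.CancelledError:
        pass
    return cancelled

print(asyncio.run(main()))  # prints True
```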
Def worth digging into, yup. Let's just be aware that it would also impact the high-level streaming API, and not get too attached to the issue in case we figure that the practicality/purity trade-off isn't worth it. (Eg. if it prevents other libs adopting …)
@tomchristie Have we already discussed …? My thoughts here are: using … I think the argument I gave was sticking closer to trio APIs, such as …
Well, let's use this opportunity to talk through this API change...

For these sorts of discussions I find it's generally more valuable to start by talking through the standard sync API, and then look at the corresponding async API.

The existing API that we have is this:

```python
response = http.request(method, url, headers, stream, ext)
status_code, headers, stream, ext = response
try:
    ...
finally:
    stream.close()
```

The motivation for moving to a context manager is to have a more tightly structured API that always enforces cleaning up when the response stream is no longer required:

```python
with http.request(method, url, headers, stream, ext) as response:
    status_code, headers, stream, ext = response
    ...
```

Note that … The …

Part of the question there is 1. "Do we want to also support the unconstrained style?" and the other part is 2. "What would the API look like if we did?" Those answers might well not be the same as …

The argument against supporting both cases is that generally we don't want to make the unconstrained case available, in much the same way that trio's nurseries are deliberately designed to always enforce a context-managed style. Of course we do have an escape hatch for the odd awkward cases...

```python
stack = contextlib.ExitStack()
response = stack.enter_context(http.request(...))
status_code, headers, stream, ext = response
try:
    ...
finally:
    stack.close()
```

I think the answers to these questions might also be somewhat dependent on whether the transport API continues to be a "strictly primitive datatypes only" API, as it currently is. If we had reasons to switch to an API with "response as an object instance", then there might be a different slant, because, for example, this sort of API is pretty typical...

```python
# Either this...
response = http.request(...)
try:
    ...
finally:
    response.close()

# Or this...
with http.request(...) as response:
    ...
```

...and feels more obvious with …

Anyways, it might be that it's worth us expanding this conversation out a bit, since it really is the core of our last bit of "1.0 API" finalisation. Eg. Perhaps worth chatting out with the …
@tomchristie Good point about …
I'm also quite attached to keeping the transport API as plain as possible, including avoiding any custom models. We've been able to work with plain data types so far. I'm not sure if this approach has limitations that could become pain points in the future (I'm thinking of special extensions like upgrades, eg WebSocket or H2C), but so far I don't think so.
Closing this now that httpx 0.18 has settled on a plainer style of Transport API.
Prompted by https://github.com/encode/httpx/pull/998#issuecomment-671070588…
cc @tomchristie
Prior art
Right now our transport API looks like this... I'm using a slightly edited version of the README, where we only inspect the response headers:
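The README snippet referenced here didn't survive extraction. As a self-contained sketch of the API shape under discussion (a hypothetical in-memory transport and a sync API rather than the real `httpcore` classes, using the `(status_code, headers, stream, ext)` tuple form discussed later in this thread):

```python
class MockStream:
    # Stands in for the byte-stream wrapper over an open socket.
    def __init__(self, chunks):
        self._chunks = chunks
        self.closed = False

    def __iter__(self):
        return iter(self._chunks)

    def close(self):
        self.closed = True

class MockTransport:
    # Hypothetical transport mirroring the plain-tuple request() API.
    def request(self, method, url, headers=None):
        stream = MockStream([b"hello, ", b"world"])
        return 200, [(b"content-type", b"text/plain")], stream, {}

http = MockTransport()
status_code, headers, stream, ext = http.request(b"GET", b"https://example.org")
print(status_code, headers)
# Nothing here forces us to close `stream`; if we stop now, the
# underlying "socket" stays open. That is the hazard discussed below.
```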
Although it's definitely not obvious to the untrained eye, this code could be problematic in some situations: the `stream` object is a wrapper over an open socket, and we're not enforcing that users close it properly before exiting the context of the associated request.

This example runs fine, but that's mostly by chance. `AsyncConnectionPool` may open and close connections on a per-request basis, but this doesn't pose a problem, because the lifespan of a connection typically outranges that of a request (eg because of `Keep-Alive`). So the overall machinery occurs at the transport level. In other words, when the `AsyncConnectionPool` block exits, all underlying connections are closed, and so the underlying socket is closed, and nothing leaks.

But…
Case study: per-request resourceful transport case

Consider the case of `ASGITransport` in encode/httpx#998, or any other example that requires a per-request I/O or concurrency resource (in the `ASGITransport` case, a task group for running the ASGI app in the background).

Then currently we must do some unclassy `__aenter__()`/`__aexit__()` dance with that resource, so that `request()` returns a `stream` containing some kind of reference to that resource, with an `aclose_func=...` that properly closes said resource. (This is tricky. I think we need to always pass `sys.exc_info()` to `__aexit__()` so that the resource is aware that it might be being closed in response to an exception within the block, but I'm not sure.)

For example...
And now my variant of the README example will be broken for this particular transport:
Leverage transport-level context manager syntax (not satisfactory)

One way to fix it would be to track instances of `resource` on the transport class, and then on `MyTransport.__aexit__()` make sure all `resource` instances are closed. It might not generally be very practical to do this, eg `resource` might fail if you're trying to close it again. And overall it feels like late patching, instead of fixing the problem as close to its source as possible (the request "context").

Proposal: request-level context manager syntax
On the other hand, what if `.request()` returned a context manager, instead of a plain awaitable…?

Then we could do this:

The usage syntax will change slightly, to become:
Now the syntax guarantees us that `resource` will be closed whenever we exit the request context.

Benefits:
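The code blocks for this proposal were also lost in extraction. A minimal runnable sketch of the idea, using `contextlib.asynccontextmanager` and hypothetical `TaskGroupLike`/`MyTransport` names (not the real `httpcore` implementation):

```python
import asyncio
from contextlib import asynccontextmanager

class TaskGroupLike:
    # Stands in for a per-request resource such as a task group.
    async def __aenter__(self):
        self.closed = False
        return self

    async def __aexit__(self, *exc_info):
        self.closed = True

class MyTransport:
    # Hypothetical transport where arequest() returns an async context
    # manager instead of a plain awaitable.
    @asynccontextmanager
    async def arequest(self, method, url, headers=None):
        async with TaskGroupLike() as resource:
            # `resource` lives exactly as long as the request context.
            yield 200, [], [b"hello"], {"resource": resource}

async def main():
    transport = MyTransport()
    async with transport.arequest(b"GET", b"/") as response:
        status_code, headers, stream, ext = response
        body = b"".join(stream)
        resource = ext["resource"]
    # Exiting the block is what closes the per-request resource.
    return status_code, body, resource.closed

print(asyncio.run(main()))  # prints (200, b'hello', True)
```

Here the `async with` block, rather than the caller's discipline, is what guarantees cleanup.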