This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


On switching the Transport API to a context-managed style. #1358

Closed
tomchristie opened this issue Oct 9, 2020 · 1 comment

Comments

@tomchristie
Member

Refs. #1274

So, we're working on a bit of design refinement in HTTPX that needs some careful consideration before we adopt it.

There's an interface split between our client (httpx), which handles all the higher-level concerns (models, cookie handling, redirects, authentication, user-friendly configuration, etc.), and the lower-level "just send an HTTP request" Transport API (httpcore).

Our Transport API currently looks essentially(*) like this...

status_code, headers, stream = transport.request(method, url, headers, stream)

(*) Actually not exactly like that, but for the purposes of this discussion, that's what it looks like. And yes, there's both sync and async counterparts to that API.

In that interface, stream is a bytes iterable that also exposes a .close() method, which must be called when you want to release the response, either closing the HTTP connection or returning it to the connection pool.

Something that got teased out along the way is that we probably actually want the Transport API to instead be this...

with transport.request(method, url, headers, stream) as response:
    status_code, headers, stream = response
    ...
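As a rough sketch of what that context-managed style could look like, here's a minimal stand-in transport built with contextlib.contextmanager. Note that MockTransport and its exact return shape are hypothetical, purely for illustration; this is not the real httpcore API.

```python
import contextlib


class MockTransport:
    """Hypothetical stand-in for an httpcore-style transport."""

    @contextlib.contextmanager
    def request(self, method, url, headers=None, stream=None):
        # In a real transport, the connection would be checked out of
        # the pool and the request sent here.
        response = (200, [(b"content-type", b"text/plain")], iter([b"hello"]))
        try:
            yield response
        finally:
            # The connection is released (closed, or returned to the
            # pool) automatically when the caller leaves the `with`
            # block, even if an exception was raised inside it.
            pass


transport = MockTransport()
with transport.request(b"GET", b"https://example.org") as response:
    status_code, headers, stream = response
    body = b"".join(stream)

print(status_code, body)  # 200 b'hello'
```

The point is that resource cleanup is tied to scope: the caller can't forget to release the connection, because leaving the `with` block does it for them.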

Although this is more about resources than about concurrency, I'm sure folks genned up on structured concurrency will immediately see why we'd think that preferable. Both from a resource-safety point of view, and from a usability perspective for anyone actually working at that lower level, because the alternative is arguably somewhat graceless...

status_code, headers, stream = transport.request(method, url, headers, stream)
try:
    ...
finally:
    stream.close()

All of this is absolutely fine. It's not a problem for us to switch that around, and it'll be a version bump that likely affects pretty much nobody, because while it is within our public API, it's not at a level that 99% of our users are working at.

Eventually the high-level API exposes both the common non-streaming case...

response = httpx.request("GET", <url>, ...)
response = httpx.get(<url>, ...)
response = httpx.post(<url>, ...)

And the streaming case...

with httpx.stream("GET", <url>, ...) as response:
    ...

What is interesting in making the switch, is the implication that we end up with @contextmanager functions running all the way through the stack.
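To make that "all the way through the stack" point concrete, here's a toy layering where each level wraps the one below with @contextmanager. The function names (transport_request, client_send, stream) are hypothetical, just to mirror the transport → client → high-level-API layering described above.

```python
import contextlib


@contextlib.contextmanager
def transport_request():
    # Lowest layer: owns the actual resource.
    try:
        yield "raw-response"
    finally:
        pass  # release the connection here


@contextlib.contextmanager
def client_send():
    # Middle layer: wraps the transport's context manager.
    with transport_request() as raw:
        yield f"response({raw})"


@contextlib.contextmanager
def stream():
    # Top layer: what the end user actually enters.
    with client_send() as response:
        yield response


with stream() as response:
    print(response)  # response(raw-response)
```

An exception raised inside the user's `with` block propagates through each generator's `yield` point, which is exactly why tracebacks pick up the extra __enter__/generator frames mentioned below.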

The only bit in any of this that feels a bit of a niggle, is that once you've got context-managed methods all the way through the stack, then suddenly any stack traces you end up with are all a bit more obscure for end-users, because every method call is interleaved with a context manager __enter__ call.

Does any of this chime with design work any of y'all might have done? Or are there obvious reasons why having a few layers of calls through context-managed methods might not be a good idea, and more traditional-style "close" callbacks might actually be a better alternative, so long as they're safely wrapped up by the high-level API?
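For comparison, the "close callback, safely wrapped by the high-level API" alternative might look something like this sketch. Again, RawResponse, transport_request, and stream are hypothetical names, not the real httpx/httpcore API: the transport stays plain-call-and-close, and only the top layer is a context manager.

```python
import contextlib


class RawResponse:
    """Hypothetical transport-level response with an explicit close()."""

    def __init__(self):
        self.status_code = 200
        self.closed = False

    def close(self):
        # Return the connection to the pool (or close it).
        self.closed = True


def transport_request():
    # Transport layer: a plain function call, no context manager.
    return RawResponse()


@contextlib.contextmanager
def stream():
    # Only the high-level API is context-managed; it guarantees the
    # close() callback runs, so end users never call close() directly.
    response = transport_request()
    try:
        yield response
    finally:
        response.close()


with stream() as response:
    print(response.status_code)  # 200
print(response.closed)  # True
```

The trade-off versus context managers all the way down: fewer __enter__ frames in tracebacks, but the intermediate layers have to get the try/finally discipline right by hand.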

@florimondmanca
Member

About the stack trace pollution problem due to layers of __enter__ methods, I'd be curious if / how the trio project deals with this. Certainly the structured APIs behind it make heavy use of context managers as well, and this might be something they needed to deal with (or decided not to, if no clear workaround to clear things up was available).

@encode encode locked and limited conversation to collaborators Mar 23, 2021

