Add stream_response option to hackney adapter #498
base: master
Conversation
Force-pushed 1283159 to a4b3802
Hackney only allows the controller process to access the ref
test/tesla/adapter/hackney_test.exs (Outdated)

```elixir
    url: "#{@http}/ip"
  }

assert {:ok, %Env{} = response} = call(request, stream_to_pid: self())
```
When passing the pid explicitly, I'd expect to handle the incoming messages manually. Is there anything stopping us from using `stream_response: true` instead?
Does the implementation handle multiple concurrent requests originating from the same process?
Hackney requires you to transfer ownership of the response to the desired PID, otherwise it will get GCed.
A workaround is to capture `self()` during the request phase and assume that will be the PID that consumes the stream.
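The ownership transfer being discussed could be sketched roughly as follows (not the PR's code; `url` and the module/function names are placeholders, and the hackney calls used are `:hackney.get/4`, `:hackney.controlling_process/2`, and `:hackney.stream_body/1`):

```elixir
defmodule StreamOwnershipSketch do
  # Sketch: hackney only lets the controlling process read from the
  # response ref, so ownership must be handed to whichever process
  # actually consumes the stream.
  def fetch_in_other_process(url) do
    parent = self()

    {:ok, _status, _headers, ref} = :hackney.get(url, [], "", [])

    consumer =
      spawn(fn ->
        receive do
          {:go, r} -> send(parent, {:body, read_all(r)})
        end
      end)

    # Transfer ownership before the consumer touches the ref;
    # without this, hackney refuses the reads (or GCs the response).
    :ok = :hackney.controlling_process(ref, consumer)
    send(consumer, {:go, ref})

    receive do
      {:body, body} -> body
    end
  end

  # Pull body chunks until hackney reports :done.
  defp read_all(ref, acc \\ []) do
    case :hackney.stream_body(ref) do
      {:ok, chunk} -> read_all(ref, [acc | chunk])
      :done -> IO.iodata_to_binary(acc)
    end
  end
end
```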
And what happens when the stream is consumed from a different process?
From what I recall, Hackney shuts down the request after a short amount of time, so for long-running responses you get a partial reply.
Amazing! Let's make response streaming a thing 👍
👍 for this!
@tim-smart @teamon anything I can do to get this over the finish line? This would allow me to remove a dependency on Downstream from my app.
Do you need any help with this? I would also like to help get this over the finish line.
I'll add my voice to the "I'd love to see this feature merged, can I help?" crowd!
* Rename `stream_to_pid` to `stream`
* Add `__pid__` to `Env`, for capturing the process that initiated the request
* hackney: add `stream_owner` option, which defaults to `env.__pid__`

* Defaults to `:default`
* Has a `:stream` option, which attempts to stream the response if the adapter supports it
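The `__pid__` / `stream_owner` plumbing described above might look roughly like this (a sketch only, with assumed helper names and a trimmed-down `Env`; the PR's actual code may differ):

```elixir
defmodule Tesla.Env do
  # Trimmed-down sketch of Env with the `__pid__` field described above.
  defstruct url: nil, body: nil, opts: [], __pid__: nil
end

defmodule StreamOwnerSketch do
  alias Tesla.Env

  # Capture the caller at request time, so adapters know which process
  # initiated the request even if the env travels elsewhere.
  def put_caller(%Env{} = env), do: %Env{env | __pid__: self()}

  # The hackney adapter's `stream_owner` option would default to
  # `env.__pid__` when not set explicitly.
  def stream_owner(%Env{} = env) do
    Keyword.get(env.opts, :stream_owner, env.__pid__)
  end
end
```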
Added some changes to improve the API surface.
New API example:

```elixir
{:ok, %Env{body: stream}} = Tesla.get("https://google.com", response: :stream)

# Use the response stream
data = Enum.join(stream)
```
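A lazy body like the one in the example above could, for instance, be built on top of hackney's pull-based chunk reads with `Stream.resource/3` (a sketch under that assumption, not the PR's implementation; `ref` is a hackney client ref):

```elixir
defmodule HackneyStreamSketch do
  # Sketch: wrap :hackney.stream_body/1 in a lazy Stream so the
  # response body can be consumed with Enum/Stream functions.
  def body_stream(ref) do
    Stream.resource(
      fn -> ref end,
      fn ref ->
        case :hackney.stream_body(ref) do
          {:ok, chunk} -> {[chunk], ref}
          :done -> {:halt, ref}
          {:error, reason} -> raise "hackney stream error: #{inspect(reason)}"
        end
      end,
      fn _ref -> :ok end
    )
  end
end
```

Note that the stream must be consumed by the process that owns the hackney ref, which is exactly why the `stream_owner` / `__pid__` handling above matters.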
Maybe leaving this as an adapter option is better, as it will not be implemented for every adapter.
Hey @teamon I used this as a base for a version that is adapter-option driven: https://github.com/connorjacobsen/tesla/tree/hackney-response-streaming I would love to be able to use hackney to stream responses like you and Tim have detailed here. Is there anything I can do to help make that happen? Happy to take a stab at some more code as needed, would just want some direction from you so it fits with where you want to guide the library.
@tim-smart @yordis is there anything I can do to help with this PR? Or the fork I linked above? tesla is a fantastic library and I would love to be able to contribute a little something back to it if it's helpful!
@connorjacobsen, by all means, please take the lead. In the worst case, @tim-smart solved it differently and we commit to one of the solutions, and you get to learn about Tesla internals; from my perspective, still a huge win. I am unfamiliar with
@yordis I am equally happy with either solution. I basically just took @tim-smart's comment and riffed on it in a way that kept mostly everything isolated to the Hackney adapter itself rather than adding the

So I guess my question for you + others is: do you want to isolate this to the Hackney adapter, or should we go down a more generalized path, adding streaming for all adapters (all seem to support streaming responses)? If the latter, do we want to pursue that one-by-one or all at once? Do you have a preference?

As a meta question: is this the right place to discuss this? Is there somewhere better?
Ideally, we add streaming to all adapters. As for one-by-one vs. all at once: if you are giving it a try, I'd rather you do a big PR where you experiment and find a path forward that works for the official adapters, and if needed, we split the PR into smaller ones if they are too big. I worry about merging something that prevents me from releasing a version, or something broken or error-prone.
There is also this PR #540 that implements streaming for Finch. I'd say we go with the
Either way is fine by me
Any progress on this?
Is there a branch that best embodies an approach that is likely to be merged that can be collaborated on, and/or is there a way we can put a bounty on this?
@feld I haven't used Hackney for years by now, so I wouldn't consider myself qualified to have strong professional judgment. I would appreciate it if folks test it and share their code reviews.
Force-pushed 2bca420 to fe7207c
Playing around with streaming responses (#271).
Only implemented for hackney here.