Streaming Response #271
According to the README, Tesla supports streaming if the underlying adapter supports it.

From the code and the examples I found, however, it seems that streaming in this instance means streaming of the request.

I am interested in how I could stream the response of a request (very useful for something like large downloads). Is this supported, and if so, how?
Unfortunately, streaming of the response body is not implemented, but it could be done. I'd go with:

```elixir
case Tesla.get(client, "/stream", stream_response: true) do
  # on a successful read of status & headers, env.body is a Stream
  {:ok, env} -> env.body
  # handle request/connection errors
  {:error, reason} -> {:error, reason}
end
```
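To make the proposal concrete, here is a minimal sketch of how such a streamed body could be consumed for a large download. It assumes the proposed `stream_response: true` option existed and that `env.body` would be an enumerable of binary chunks; neither is a shipped Tesla API, and the `StreamedDownload` module and `download_to_file/3` helper are hypothetical.

```elixir
defmodule StreamedDownload do
  # Hypothetical helper built on the proposal above: `stream_response: true`
  # is not a shipped Tesla option, and env.body being a Stream is assumed.
  def download_to_file(client, url, path) do
    case Tesla.get(client, url, stream_response: true) do
      {:ok, env} ->
        # Lazily write each chunk to disk instead of buffering the whole body.
        env.body
        |> Stream.into(File.stream!(path))
        |> Stream.run()

        {:ok, path}

      {:error, reason} ->
        {:error, reason}
    end
  end
end
```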
Response streaming was added for the Finch adapter in https://github.com/elixir-tesla/tesla/releases/tag/v1.9.0
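For completeness, a sketch of what opting in might look like after v1.9.0. The exact Finch adapter option is not shown in this thread; `body_as: :stream` below is an assumption (modeled on the option of some other Tesla adapters such as Gun and Mint), and `MyFinch` is just a pool name chosen for this example, so check the linked release notes and the `Tesla.Adapter.Finch` docs for the actual option.

```elixir
# Assumptions: `body_as: :stream` may not be the actual Finch adapter option;
# `MyFinch` is a pool name invented for this example.
{:ok, _} = Finch.start_link(name: MyFinch)

client =
  Tesla.client(
    [{Tesla.Middleware.BaseUrl, "https://example.com"}],
    {Tesla.Adapter.Finch, name: MyFinch, body_as: :stream}
  )

# If the body comes back as a stream, it can be consumed lazily,
# e.g. written to disk chunk by chunk as in the helper sketched above.
{:ok, env} = Tesla.get(client, "/large-file")
env.body |> Stream.into(File.stream!("large-file.bin")) |> Stream.run()
```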