
Support either multipart or Body::sized #49

Closed
emk opened this issue Jan 19, 2017 · 3 comments

emk (Contributor) commented Jan 19, 2017

Hello! I've converted a bunch of our internal production code over to reqwest in preparation for the arrival of async hyper. It's working really well, and thank you for providing this transition path.

Right now, I'm dealing with a problem with the BigML API. I'm trying to upload multi-gigabyte CSV data files as a MIME multipart form attachment. Unfortunately, BigML requires MIME multipart, and it doesn't support chunked transfer encoding. I have a workaround for the missing multipart support mentioned in #4: I create a custom Reader type that generates a simple multipart body.

Unfortunately, this doesn't quite work: because reqwest's Body::sized method is unimplemented, I still need to read the entire multi-gigabyte file into memory, and that's when our server code crashes.

I'd definitely be interested in preparing a PR that implements something like the following:

pub fn sized<R: Read>(reader: R, len: u64) -> Body

...or an equivalent impl<R: Read> Into<Body> for (R, u64) version.

I know that Read is a bit of a weird case in this brave new world of futures and tokio, but it's still a core Rust API for anybody not writing async code, and it might be good to support Read and sized data long term.

If that's really not interesting, I could take a look at multipart, but that's more complicated to do right (our little workaround only handles a single file attachment).

We have two short-term workarounds that will help minimize this problem in production, but we'll need to find a solution soon, even if that means switching to hyper (either the current released version or async).

Thank you for any advice and guidance you can provide!

seanmonstar (Owner) commented:

I definitely think having multipart implemented would be the best, but I'm also open to Body::sized(reader, len).

emk added a commit to faradayio/reqwest that referenced this issue Jan 27, 2017
This is necessary for APIs such as BigML's, where we may need to send
extremely large request bodies, but chunked transfer encoding is not
supported.

This is a partial fix for seanmonstar#49.
emk added a commit to faradayio/reqwest that referenced this issue Feb 17, 2017
echochamber (Contributor) commented:

Just noticed the PR for this got merged but the issue still seems to be open. Is this good to close?

emk (Contributor, Author) commented Feb 28, 2017

Yup, I think so.
