
Large files get truncated after 2^31 -1 bytes #630

Closed
jamesatha opened this issue Jun 9, 2016 · 9 comments

Comments

@jamesatha

We have an endpoint serving static files. It seems to work fine for files smaller than 2GB, but larger ones get truncated at Integer.MAX_VALUE bytes. If you curl or wget these endpoints, the truncated file downloads with no errors.

We have a different endpoint that sends data it's holding in a BufferedInputStream. When the stream is greater than Integer.MAX_VALUE bytes, the clients get the first Integer.MAX_VALUE bytes and then hang waiting for more data.

@sbordet
Contributor

sbordet commented Jun 9, 2016

Need more information, as it's not clear what you refer to.
The word "endpoint" is used in WebSocket. Or is it HTTP? HTTP/2?

A "stream greater than Integer.MAX_VALUE" does not make much sense, as streams are meant to be able to stream lots of bytes using a smaller byte[].

Can you attach a reproducible use case?

@joakime
Contributor

joakime commented Jun 9, 2016

@jamesatha
Author

jamesatha commented Jun 9, 2016

This is HTTP/2 (so, what's the non-WebSocket version of an endpoint?). Here is the code that can reproduce it:

class FileServlet extends ScalatraServlet {
  get("/largeFile") {
    Ok(new File("/tmp/3GB"))
  }
}

I made the file using dd if=/dev/zero of=/tmp/3GB bs=1GB count=3

We are using jetty 9.3.9.v20160517.

Sorry, I posted this same issue on stack overflow first.

@jamesatha
Author

After some digging, it looks like scalatra tries to do this:
https://github.com/scalatra/scalatra/blob/2.4.x/core/src/main/scala/org/scalatra/util/io/package.scala#L36
which ends up calling into here:
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/nio/ch/FileChannelImpl.java#550
And after that the size is reduced to Integer.MAX_VALUE. If you have any idea how to get around this, I'm all ears, but I think this is probably a bug I should report to the scalatra team.

@sbordet
Contributor

sbordet commented Jun 9, 2016

@jamesatha yes, it's a Scalatra bug: the transferTo() call does not guarantee that all the requested bytes are written, and its return value is the number of bytes actually written.
The call to transferTo() should be wrapped in a loop with logic that guarantees the whole file is written.
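The loop sbordet describes can be sketched in plain Java NIO. This is a minimal illustration, not Scalatra's or Jetty's actual code; the class and method names are hypothetical:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

// Keep calling transferTo() until every byte of the file has been
// written: a single call may transfer fewer bytes than requested
// (e.g. an implementation may cap one call near Integer.MAX_VALUE).
public final class TransferLoop {
    public static long transferFully(FileChannel source, WritableByteChannel sink)
            throws IOException {
        long position = 0;
        long size = source.size();
        while (position < size) {
            long transferred = source.transferTo(position, size - position, sink);
            if (transferred <= 0) {
                break; // defensive: avoid spinning if nothing was written
            }
            position += transferred;
        }
        return position; // total bytes actually transferred
    }
}
```

The key point is that the loop trusts only the return value of transferTo(), never the requested count.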

@sbordet sbordet closed this as completed Jun 9, 2016
@joakime
Contributor

joakime commented Jun 9, 2016

Some hints to the problem: it looks like a scalatra bug, assuming that transferTo() will transfer all of the bytes. The javadoc doesn't say that, and you'd need to call transferTo() in a loop to be sure.

@joakime
Contributor

joakime commented Jun 10, 2016

@jamesatha would suggest you file a bug at https://github.com/scalatra/scalatra/issues and accept the answer on the stackoverflow question so others that search for the same issue see a valid answered question.

@jamesatha
Author

Thanks for closing this out. We have a bug filed against scalatra: scalatra/scalatra#575
Sorry for thinking it was a jetty issue :)

@gregw
Contributor

gregw commented Jun 11, 2016

@jamesatha Also you might want to look at how Jetty serves files from DefaultServlet. The most efficient way to serve a large file is to use a file-mapped buffer and an asynchronous write. This will write the file with minimum copying (perhaps none) and no thread waiting for it to complete.
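A rough illustration of the memory-mapped approach gregw mentions. This is a simplified, blocking sketch with hypothetical names, not Jetty's actual DefaultServlet implementation; note that a single MappedByteBuffer is itself capped at Integer.MAX_VALUE bytes, so a large file must be mapped in chunks:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;

// Serve a file by mapping it into memory chunk by chunk and writing
// each mapped region to the output channel. The mapped pages are read
// by the kernel on demand, avoiding an intermediate user-space copy.
public final class MappedFileWriter {
    private static final long CHUNK = 1L << 30; // 1 GiB per mapping

    public static void writeFile(Path file, WritableByteChannel sink) throws IOException {
        try (FileChannel channel = FileChannel.open(file)) {
            long size = channel.size();
            for (long pos = 0; pos < size; pos += CHUNK) {
                long len = Math.min(CHUNK, size - pos);
                MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, pos, len);
                while (buffer.hasRemaining()) {
                    sink.write(buffer);
                }
            }
        }
    }
}
```

Jetty's real DefaultServlet additionally uses the Servlet async I/O API so no thread blocks in the write loop; the chunked mapping above only shows the zero-copy part of the idea.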
