
Progress incorrect when content is gzip encoded #13

Closed
ColinEberhardt opened this issue Mar 30, 2020 · 7 comments · Fixed by #14

Comments

@ColinEberhardt

The content length reported by the Content-Length HTTP header is the encoded (compressed) length, whereas the length accumulated in the code is the decompressed length:

https://github.com/AnthumChris/fetch-progress-indicators/blob/master/fetch-basic/supported-browser.js#L36

As a result, progress extends beyond 100%.
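
A minimal sketch of the failure mode (the function name and logging here are illustrative, not from the repo): `total` comes from content-length, which is the gzip-encoded transfer size, but `reader.read()` yields decompressed chunks, so the ratio climbs past 1 for compressed responses.

// Hypothetical illustration of the bug described above.
async function trackProgress(url) {
  const response = await fetch(url);
  // content-length reflects the *encoded* (gzipped) transfer size
  const total = Number(response.headers.get('content-length'));
  const reader = response.body.getReader();
  let received = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    received += value.length; // chunks arrive *decompressed*
    console.log(`${((received / total) * 100).toFixed(1)}%`); // exceeds 100% for gzip
  }
}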

@anthumchris
Owner

Good find! Thanks @ColinEberhardt

anthumchris added a commit that referenced this issue Apr 3, 2020
@anthumchris
Owner

I plan on adding a "file size" response header or investigating whether the Streams API can support compressed encodings.

@anthumchris
Owner

@ColinEberhardt I contacted the Streams spec authors and was told that decompression happens at a lower level in the browser before the Streams API can access the raw network transfer. Progress indicators for gzip (or other encodings) can only occur if the server sends a custom header.

I updated my Nginx server to calculate and send an x-file-size header. The Streams JavaScript examples are also updated. I added some files below you can use to test gzip.

// uncompressed, no content-encoding
fetch('https://fetch-progress.anthum.com/10kbps/test/data/100kb.dat');

// gzip-compressed
fetch('https://fetch-progress.anthum.com/10kbps/test/data/100kb.txt');
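
To sketch how a client can consume the new header (the helper name is mine; the repo's updated examples send the decompressed size in x-file-size): prefer the custom header when the response is content-encoded, and fall back to content-length otherwise.

// Hypothetical helper, assuming the server sends x-file-size as described above.
function totalBytes(response) {
  const fileSize = response.headers.get('x-file-size');
  const encoding = response.headers.get('content-encoding');
  // For encoded responses, x-file-size carries the decompressed size;
  // for unencoded responses, content-length is already accurate.
  return encoding && fileSize
    ? Number(fileSize)
    : Number(response.headers.get('content-length'));
}

Note that for cross-origin requests the server must also expose the custom header via Access-Control-Expose-Headers, or `headers.get('x-file-size')` will return null.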

@mitar

mitar commented May 2, 2020

@anthumchris Can you provide more information on how to set this x-file-size header in Nginx? How do you configure Nginx to calculate it? Is this possible because Nginx can access the file before encoding?

@anthumchris
Owner

@mitar Yes, have a look at b4ef364

@mitar

mitar commented May 2, 2020

Awesome, thanks. Not sure how performant that is, though, opening every file again and again. :-( But definitely cool. It would be even better if Nginx's gzip module exposed this through some variable.

@anthumchris
Owner

anthumchris commented May 2, 2020

I'm confident it won't impact UX on a page, and the performance impact is negligible. In-memory caching strategies could also be used. For 10,000 HEAD requests of a 1 GB file, the A/B times were ~1.102s vs. 1.327s (+0.0000225s per request).

$ ab -ik -c 1 -n 10000 https://fetch-progress.anthum.com/test/data/1gb-bypass.txt
Time taken for tests:   1.102 seconds
Complete requests:      10000
Time per request:       0.110 [ms] (mean)

$ ab -ik -c 1 -n 10000 https://fetch-progress.anthum.com/test/data/1gb.txt
Time taken for tests:   1.327 seconds
Complete requests:      10000
Time per request:       0.133 [ms] (mean)

(These were run locally on the server to exclude network latency and isolate response-processing time.)
