Can't finish download of any big file #5390
Comments
I guess you have to disable proxy_buffering: by default, downloads will be limited to 1 GB due to proxy_buffering and proxy_max_temp_file_size on the frontend.
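For reference, a minimal sketch of what that might look like on an nginx frontend that reverse-proxies to the Nextcloud backend (server name, port and backend address are placeholders, not taken from this report):

```nginx
server {
    listen 443 ssl;
    server_name cloud.example.com;          # placeholder

    location / {
        proxy_pass http://127.0.0.1:8080;   # placeholder backend

        # Stream the response instead of spooling it to a temp file;
        # avoids the ~1 GB cap imposed by proxy_max_temp_file_size.
        proxy_buffering off;

        # Alternative: keep buffering but lift the temp-file cap.
        # proxy_max_temp_file_size 0;
    }
}
```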
No, it still doesn't work. I set `proxy_buffering off;` in the server section of nginx, and now the download sometimes fails much later, at around 300 MB. I still see only the "file accessed ..." messages in the log, and nothing more. Interestingly, the php-fpm7.0 process eating 100% CPU remains active even after the download fails, and stays there for quite some time (several minutes) before disappearing. Two consecutive failed downloads result in two php processes each eating 100% CPU.
@radistmorse is your system 32-bit or 64-bit?
64-bit. I updated the system to Debian 9, and while doing so, I accidentally deleted the MySQL database, the whole thing :) I decided to reinitialize the DB from scratch, since I mostly used NC for external mounts and it was easier than trying to restore the database backup. After I did that, the download started working again. So I guess it was some weird DB-related issue caused by my OC8 -> NC9 migration. Or maybe it was related to the dotdeb nginx/php7 packaging (I use the official Debian 9 packages now). In any case, I don't have this issue anymore.
The topic is solved and the error can't be reproduced at the moment, so I'm closing here; we can reopen if someone can reproduce this problem.
Hello, I have the same problem. I want to download a 187 MB zip, but the download stops after approx. 2.5 minutes with only around 150 MB transferred. Thanks
@chocholik please check your configured timeouts and file limits. How "fast" is the download until it stalls? Also have a look here (relevant for file uploads): Please open another issue in case the configuration does not help you out.
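As a hedged sketch, these are the kinds of nginx directives worth reviewing for timeouts and size limits (the values below are illustrative only, not recommendations from this thread):

```nginx
# Inside the relevant http/server/location block:
send_timeout          300s;   # time allowed between two successive writes to the client
proxy_read_timeout    300s;   # if nginx reverse-proxies to a backend
fastcgi_read_timeout  300s;   # if nginx talks to php-fpm directly
client_max_body_size  16G;    # size limit relevant for uploads rather than downloads
```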
I've got the same problem with NC12, and it got worse when I disabled proxy buffering. Now the download fails a few MB after it starts. My nginx config:
The Nextcloud log only says that the file was accessed.
If I keep proxy buffering enabled (same setting as shown, just commenting out the line disabling buffering), the download starts and runs for a while, then randomly stops at some point. It's not a timeout from nginx though; that only comes afterwards, because the backend actually stalls. This could be related to the use of HTTP/2: removing the http2 option in nginx's listen directive mitigates the issue... which doesn't make much sense, does it? Edit: this also happens when downloading the same file via the backend web server without Nextcloud in the middle, so this is not a Nextcloud bug.
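For anyone wanting to try the workaround described above, dropping http2 from the listen directives looks roughly like this (server name and the rest of the block are placeholders):

```nginx
server {
    # listen 443 ssl http2;   # combination reported as problematic here
    listen 443 ssl;           # plain HTTPS without HTTP/2
    listen [::]:443 ssl;
    server_name cloud.example.com;   # placeholder
}
```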
I can confirm that downloads do not work using nginx in conjunction with http2. This persists with NC12 as well as on master. I opened another issue for this, as the two are unrelated, so please comment here: BTW: Apache2 is not affected, no problems there.
I don't think it's an NC bug, to be honest. I have the same problem downloading the same file via nginx directly (by placing it inside the document root and downloading it via URL). This is most likely an nginx bug with http2. FYI, I tried with nginx 1.10.
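A simple way to reproduce this outside Nextcloud, as described above: place a large file in the nginx document root and fetch it directly (host and file name are placeholders):

```sh
# Download via whatever protocol the server negotiates (HTTP/2 if enabled):
curl -o /dev/null https://cloud.example.com/bigfile.bin

# Force HTTP/1.1 to isolate the http2 code path:
curl --http1.1 -o /dev/null https://cloud.example.com/bigfile.bin
```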
Well, good to have this feedback. I will try with nginx 1.12 and 1.13 then and review the changelogs. Until then, we should update the administrators manual and add a warning that only Apache2 is known to work.
@derkostka is this confirmed to be an nginx bug rather than a configuration problem? Is there a newer version of nginx available that fixes this? I might have the same problem here, but it needs more investigation...
This is happening with Edge on Windows and with Git Desktop on Windows 10 as well. Edge reports the zip file as finished, but it isn't; Git Desktop will not clone a large repository (such as the unrealengine release branch).
If you are using an RK3399-based ARM board, you need to disable TCP/UDP offloading.
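A sketch of how that can be done with ethtool; the interface name and the exact set of offloads to disable are assumptions, since the comment above does not specify them:

```sh
# Disable checksum offloading on the board's NIC (assumed here to be eth0):
sudo ethtool -K eth0 rx off tx off

# Optionally also disable segmentation/receive offloads:
sudo ethtool -K eth0 tso off gso off gro off
```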
Just finished fighting this issue for two days. In my case the larger CSF DNSBL blocklists caused broken connections, probably because they could not keep up. After removing these larger DNSBLs, the Nextcloud system (19.02) could download large files again.
A patch has been merged into mainline to deal with this (kernel >= 5.7) 🙂. The same problem occurs with RK3288-based ARM boards (Asus Tinker Board and Tinker Board S). Applying the same kind of patch to
NAME="Ubuntu"
In the meantime I experienced a second cause of this issue. In the beginning we ran Nextcloud behind the provider's (Holland, Ziggo / UPC) router/modem. Because of the heavy load on this simple, underpowered router, and because part of the traffic through the modem was routed via Utrecht, 100 kilometers away from here, frequent connection timeouts occurred via this router/modem. We put an IPFire installation between the provider and our local systems, so DHCP and routing became 100% local on the powerful IPFire node. This immediately solved the issues, and we got better protection as well.
I use Nextcloud 12, which I just updated from OC 8 (-> NC9 -> NC10 -> NC11 -> NC12).
I can't finish the download of any folder (as a zip archive) or any large file through the normal web interface. The files I use are on external local mounts. Previously, right after the download failed, I saw an error in the logs identical to the one described in issue #1166, but after I applied the patch owncloud/core#26939 the error disappeared. Now the download just fails silently, without any errors.
My test folder is 3.4 GB, containing ~200 files, and my test file is 4.8 GB. The download fails 15-30 seconds in, which corresponds to 30-60 MB.
Operating system:
Debian Jessie
Web server:
nginx 1.10
Database:
MySQL 5.6
PHP version:
fpm 7
Nextcloud version: (see Nextcloud admin page)
12.0
List of activated apps:
Enabled:
Disabled:
Nextcloud configuration:
{
"system": {
"instanceid": "oc06764c673f",
"passwordsalt": "REMOVED SENSITIVE VALUE",
"trusted_domains": [
"example.com",
"192.168.10.2"
],
"datadirectory": "/usr/share/nginx/html/nextcloud/data",
"dbtype": "mysql",
"version": "12.0.0.29",
"dbname": "owncloudbase",
"dbhost": "localhost",
"dbtableprefix": "oc_",
"dbuser": "REMOVED SENSITIVE VALUE",
"dbpassword": "REMOVED SENSITIVE VALUE",
"installed": true,
"forcessl": true,
"loglevel": 0,
"theme": "",
"maintenance": false,
"secret": "REMOVED SENSITIVE VALUE",
"updater.release.channel": "stable",
"memcache.local": "\OC\Memcache\APCu"
}
}
Are you using external storage, if yes which one: local and nfs4-mounted local
Are you using encryption: no
Are you using an external user-backend, if yes which one: no
Client configuration
Browser: firefox 53
Operating system: fedora 25
Logs
Web server error log
nothing unusual
Nextcloud log (data/nextcloud.log)
No errors whatsoever; that's the whole point.
Browser log
Just an instant "download failed". A moment ago it was downloading at several MB/s, and then it failed.