Can't finish download of any big file #5390

Closed
radistmorse opened this issue Jun 13, 2017 · 18 comments

@radistmorse

I use Nextcloud 12, which I just updated from OC 8 (-> NC9 -> NC10 -> NC11 -> NC12).

I can't finish the download of any folder (as a zip archive) or any large file through the normal web interface. The files I use are on external local mounts. Previously, right after the download failed, I saw an error in the logs identical to the one described in issue #1166, but after I applied the patch owncloud/core#26939 the error disappeared. Now the download just fails silently, without any errors.

My test folder is 3.4 GB and contains ~200 files, and my test file is 4.8 GB. The download fails 15-30 seconds in, which corresponds to 30-60 MB.

Operating system:
Debian Jessie
Web server:
nginx 1.10
Database:
MySQL 5.6
PHP version:
fpm 7
Nextcloud version: (see Nextcloud admin page)
12.0

List of activated apps:
Enabled:

  • admin_audit: 1.2.0
  • bruteforcesettings: 1.0.2
  • comments: 1.2.0
  • dav: 1.3.0
  • federatedfilesharing: 1.2.0
  • federation: 1.2.0
  • files: 1.7.2
  • files_external: 1.3.0
  • files_videoplayer: 1.1.0
  • logreader: 2.0.0
  • lookup_server_connector: 1.0.0
  • nextcloud_announcements: 1.1
  • notifications: 2.0.0
  • oauth2: 1.0.5
  • password_policy: 1.2.2
  • provisioning_api: 1.2.0
  • serverinfo: 1.2.0
  • sharebymail: 1.2.0
  • survey_client: 1.0.0
  • systemtags: 1.2.0
  • theming: 1.3.0
  • twofactor_backupcodes: 1.1.1
  • updatenotification: 1.2.0
  • workflowengine: 1.2.0
Disabled:
  • activity
  • encryption
  • files_pdfviewer
  • files_sharing
  • files_texteditor
  • files_trashbin
  • files_versions
  • firstrunwizard
  • gallery
  • user_external
  • user_ldap

Nextcloud configuration:
{
    "system": {
        "instanceid": "oc06764c673f",
        "passwordsalt": "REMOVED SENSITIVE VALUE",
        "trusted_domains": [
            "example.com",
            "192.168.10.2"
        ],
        "datadirectory": "/usr/share/nginx/html/nextcloud/data",
        "dbtype": "mysql",
        "version": "12.0.0.29",
        "dbname": "owncloudbase",
        "dbhost": "localhost",
        "dbtableprefix": "oc_",
        "dbuser": "REMOVED SENSITIVE VALUE",
        "dbpassword": "REMOVED SENSITIVE VALUE",
        "installed": true,
        "forcessl": true,
        "loglevel": 0,
        "theme": "",
        "maintenance": false,
        "secret": "REMOVED SENSITIVE VALUE",
        "updater.release.channel": "stable",
        "memcache.local": "\\OC\\Memcache\\APCu"
    }
}
Are you using external storage, if yes which one: local and nfs4-mounted local
Are you using encryption: no
Are you using an external user-backend, if yes which one: no

Client configuration

Browser: firefox 53
Operating system: fedora 25

Logs

Web server error log

nothing unusual

Nextcloud log (data/nextcloud.log)

No errors whatsoever; that's the whole point.

Browser log

Just an instant "download failed". A moment ago it was downloading at several MB/s, and then it failed.

@benbrummer

https://docs.nextcloud.com/server/9/admin_manual/configuration_files/big_file_upload_configuration.html#nginx

I guess you have to disable proxy_buffering: by default, downloads will be limited to 1 GB due to proxy_buffering and proxy_max_temp_file_size on the frontend.
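
For reference, a minimal sketch of what that looks like in an nginx reverse-proxy block; the location and the backend address are placeholders, not taken from the reporter's setup:

    location / {
        proxy_pass http://127.0.0.1:8080;   # placeholder backend serving Nextcloud
        proxy_buffering off;                # stream responses instead of buffering them
        proxy_max_temp_file_size 0;         # never spool large responses to a temp file on disk
    }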

@radistmorse
Author

No, it still doesn't work. I set proxy_buffering off; in the server section of nginx, and now the download sometimes fails much later, at around 300 MB. I still see only the "file accessed ..." messages in the log, and nothing more.

Interestingly, the 100% CPU-eating php-fpm7.0 process remains active even after the download fails, and stays there for quite some time (several minutes) before disappearing. Two consecutive failed downloads result in two 100% CPU-eating PHP processes.

@derkostka
Contributor

@radistmorse is your system 32 Bit or 64 Bit ?

@radistmorse
Author

64-bit. I updated the system to Debian 9, and while doing so, I accidentally deleted the MySQL database, the whole thing :)

I decided to reinitialize the DB from scratch, since I mostly used NC for external mounts and it was easier than trying to restore the database backup. And after I did that, the download started to work again. So I guess it was some weird DB-related issue caused by my OC8 -> NC9 migration. Or maybe it was related to the dotdeb nginx/php7 packaging (I use the official deb9 packages now). In any case, I don't have this issue anymore.

@tflidd
Contributor

tflidd commented Jun 27, 2017

The topic is solved and the error can't be reproduced at the moment, so I'm closing here; we can reopen if someone can reproduce this problem.

@tflidd tflidd closed this as completed Jun 27, 2017
@chocholik

Hello, I have the same problem. I want to download a 187 MB zip, but the download stops after approx. 2.5 minutes with only around 150 MB downloaded.
I use a new installation, NC 12, 64-bit.

Thanks

@derkostka
Contributor

@chocholik please check your configured timeouts and file limits. How "fast" is the download until it stalls?

Also have a look here (relevant for file uploads):
https://docs.nextcloud.com/server/12/admin_manual/configuration_files/big_file_upload_configuration.html?highlight=big%20file%20upload

Please open another issue in case the configuration changes do not help you out.
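
As a rough illustration, these are the kinds of timeout and size settings usually checked for an nginx + PHP-FPM setup; the values below are placeholders, not recommendations from this thread:

    # nginx (http/server/location context)
    client_max_body_size 16G;        # maximum upload size
    fastcgi_read_timeout 3600;       # how long nginx waits on PHP-FPM for a response
    send_timeout 3600;               # timeout between two successive writes to the client

    # php.ini (or the php-fpm pool config)
    # max_execution_time = 3600
    # memory_limit = 512M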

@enricotagliavini

enricotagliavini commented Sep 16, 2017

I've got the same problem with NC12, and it got worse when I disabled proxy buffering. Now the download fails a few MB after it starts. My nginx config:

    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_buffers 64 128k;
    proxy_max_temp_file_size 0;
    large_client_header_buffers 16 32k;

The Nextcloud log only says that the file was accessed:

{"reqId":"Wb2i@HHq51iqoykyjl9VtgAAAAc","level":1,"time":"2017-09-16T22:17:29+00:00","remoteAddr":"<client IPv6>","user":"bcd58981-989b11e2-864f984f-feeb3534","app":"admin_audit","method":"GET","url":"\/nextcloud\/remote.php\/webdav\/Shared\<edited>","message":"File accessed: \"\/Shared\/<edited>\"","userAgent":"Mozilla\/5.0 (X11; Fedora; Linux x86_64; rv:55.0) Gecko\/20100101 Firefox\/55.0","version":"12.0.2.0"}

If I keep proxy buffering enabled (same settings as shown, just with the line disabling buffering commented out), the download starts and runs for a while, then randomly stops at some point. It's not a timeout from nginx though; that only comes afterwards, because the backend actually stalls.

This could be related to the use of HTTP/2. Removing the http2 option in the listen setting of nginx mitigates the issue... which doesn't make much sense, does it?

Edit: this also happens when downloading the same file via the backend web server without Nextcloud in the middle, so this is not a Nextcloud bug.
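
For anyone comparing configs, the workaround described above amounts to dropping the http2 token from the listen directive; a minimal sketch, assuming a standard TLS server block with placeholder hostname and certificate paths:

    server {
        # listen 443 ssl http2;   # with http2, large downloads stalled
        listen 443 ssl;           # without http2, downloads complete
        server_name cloud.example.com;                         # placeholder
        ssl_certificate     /etc/ssl/certs/example.crt;        # placeholder
        ssl_certificate_key /etc/ssl/private/example.key;      # placeholder
        # ... rest of the Nextcloud server block unchanged
    }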

@derkostka
Contributor

I can confirm that downloads do not work using nginx in conjunction with http2.

This persists with NC12 as well as on master. Since the two problems are unrelated, I opened another issue for this, so please comment there:
#6538

BTW: Apache2 is unaffected, no problems there.

@enricotagliavini

I don't think it's an NC bug, to be honest. I have the same problem downloading the same file via nginx directly (by placing it inside the document root and downloading it via URL). This is most likely an nginx bug with http2. FYI, I tried with nginx 1.10.

@derkostka
Contributor

Well, good to have this feedback. I will try with nginx 1.12 and 1.13 then and review the changelogs.

Until then, we should update the administrators' manual and add a warning that only Apache2 is working:
https://docs.nextcloud.com/server/12/admin_manual/configuration_server/server_tuning.html?highlight=http2

@te-online
Contributor

@derkostka is this confirmed to be an nginx bug rather than a configuration problem? Is there a newer version of nginx available that fixes this? I might have the same problem here, but it needs more investigation...

@uno1982

uno1982 commented Dec 17, 2018

This is happening with Windows Edge and Windows 10 Git Desktop as well. With Edge, the zip file says it finishes but it does not. Git Desktop will not clone a large repository (such as the UnrealEngine release branch).

@buhtux

buhtux commented Jun 18, 2019

If you're using an RK3399-based ARM board, you need to disable TCP/UDP offloading.

https://unix.stackexchange.com/a/495378
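
The linked answer boils down to turning off the NIC's hardware offloads with ethtool; a minimal sketch, assuming the interface is called eth0 (the interface name and the exact set of offload flags may differ on your board):

    # disable TX/RX checksum offloading on the suspect interface
    sudo ethtool -K eth0 tx off rx off
    # verify the current offload settings
    ethtool -k eth0 | grep checksumming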

@RvLDSDM

RvLDSDM commented Sep 1, 2020

Just finished fighting this issue for 2 days. In my case, the larger CSF DNSBL blocklists caused a broken connection because they probably could not keep up. After removing these larger DNSBLs, the Nextcloud system (19.02) could download larger files again.

@dvergeylen

If you're using an RK3399-based ARM board, you need to disable TCP/UDP offloading.
https://unix.stackexchange.com/a/495378

A patch has been merged into mainline to deal with this (kernel >=5.7) 🙂.

The same problem occurs with RK3288 based ARM boards (Asus TinkerBoard and TinkerBoard S). Applying the same kind of patch to arch/arm/boot/dts/rk3288.dtsi solves the problem.

@ValheruPL

ValheruPL commented Apr 26, 2022

NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"

# SMB client libraries and PHP bindings (for SMB external storage)
sudo apt install libsmbclient-dev
sudo apt install php7.4-smbclient
# switch Apache to PHP-FPM via mod_proxy_fcgi
sudo a2enmod proxy_fcgi setenvif
sudo a2enconf php7.4-fpm
sudo service apache2 restart

@RvLDSDM

RvLDSDM commented Apr 26, 2022

In the meantime I experienced a second cause of this issue. In the beginning we had Nextcloud behind the provider's router modem (Holland, Ziggo / UPC). Because of the heavy load on this simple, inferior router, and because part of its traffic was routed via Utrecht, which is 100 kilometers away from here, frequent connection timeouts occurred through this router modem.

We put an IPFire installation between the provider and our local systems, so DHCP and routing became 100% local on the more powerful IPFire node. This immediately solved the issues, and we had better protection as well.
