Please make sure of the following things
I have read the documentation.
I'm sure there are no duplicate issues or discussions.
I'm sure it's due to AList and not something else (such as network, dependencies, or operation).
I'm sure this issue is not fixed in the latest version.
AList Version
V3.30.0
Driver used
PikPak
Describe the bug
When uploading a large file over 50 GB, the following error is reported after the upload finishes: MultipartUpload: upload multipart failed upload id: 53C32F3A332A4F4E9A96077E237B2945 caused by: TotalPartsExceeded: exceeded total allowed configured MaxUploadParts (10000). Adjust PartSize to fit in this limit. The upload cannot be completed.
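The error text appears to come from the AWS SDK for Go's s3manager uploader, which defaults to 5 MiB parts and caps an upload at 10,000 parts, i.e. roughly 48.8 GiB per file; that matches the ~50 GB threshold reported here. Below is a minimal sketch of the adjustment the message suggests ("Adjust PartSize to fit in this limit"): derive the part size from the total file size so the part count stays under the cap. partSizeFor is a hypothetical helper for illustration, not the actual AList/PikPak driver code.

```go
package main

import "fmt"

// These mirror the aws-sdk-go s3manager defaults that appear in the
// error above: at most 10,000 parts, 5 MiB minimum/default part size.
const (
	maxUploadParts  = 10000
	defaultPartSize = 5 * 1024 * 1024 // 5 MiB
)

// partSizeFor returns a part size large enough that a file of the given
// size fits within maxUploadParts parts. Hypothetical helper, not the
// actual AList/PikPak driver code.
func partSizeFor(fileSize int64) int64 {
	partSize := int64(defaultPartSize)
	if parts := (fileSize + partSize - 1) / partSize; parts > maxUploadParts {
		// Round up so the resulting part count stays at or below the cap.
		partSize = (fileSize + maxUploadParts - 1) / maxUploadParts
	}
	return partSize
}

func main() {
	fiftyGiB := int64(50) << 30 // 53,687,091,200 bytes
	// With the 5 MiB default this file needs 10,240 parts -> TotalPartsExceeded.
	fmt.Println("parts at default size:", (fiftyGiB+defaultPartSize-1)/defaultPartSize)
	// A recomputed part size keeps the same file within the 10,000-part cap.
	ps := partSizeFor(fiftyGiB)
	fmt.Printf("adjusted part size: %d bytes -> %d parts\n", ps, (fiftyGiB+ps-1)/ps)
}
```

In aws-sdk-go the computed value would be applied through the Uploader's PartSize field. The config dump below exposes no part-size option, so the recomputation would presumably have to happen inside the PikPak driver.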
Reproduction
Uploading a file larger than 50 GB to PikPak triggers this error, whether the upload is done through the web UI or via WebDAV.
Config
{ "force": false, "site_url": "", "cdn": "", "jwt_secret": "0XZlZZr9PPCnRGXE", "token_expires_in": 48, "database": { "type": "sqlite3", "host": "", "port": 0, "user": "", "password": "", "name": "", "db_file": "data\\data.db", "table_prefix": "x_", "ssl_mode": "" }, "scheme": { "address": "0.0.0.0", "http_port": 5244, "https_port": -1, "force_https": false, "cert_file": "", "key_file": "", "unix_file": "", "unix_file_perm": "" }, "temp_dir": "data\\temp", "bleve_dir": "data\\bleve", "dist_dir": "", "log": { "enable": true, "name": "data\\log\\log.log", "max_size": 50, "max_backups": 30, "max_age": 28, "compress": false }, "delayed_start": 0, "max_connections": 0, "tls_insecure_skip_verify": true, "tasks": { "download": { "workers": 5, "max_retry": 1 }, "transfer": { "workers": 5, "max_retry": 2 }, "upload": { "workers": 5, "max_retry": 0 }, "copy": { "workers": 5, "max_retry": 2 } }, "cors": { "allow_origins": [ "*" ], "allow_methods": [ "*" ], "allow_headers": [ "*" ] } }
Logs
[GIN] 2024/01/16 - 23:13:13 | 200 | 38m22s | 127.0.0.1 | PUT "/api/fs/put"
ERRO[2024-01-16 23:40:39] failed put /pikpak/: MultipartUpload: upload multipart failed
upload id: 53C32F3A332A4F4E9A96077E237B2945
caused by: TotalPartsExceeded: exceeded total allowed configured MaxUploadParts (10000). Adjust PartSize to fit in this limit
github.com/alist-org/alist/v3/internal/op.Put
/source/internal/op/fs.go:580
github.com/alist-org/alist/v3/internal/fs.putDirectly
/source/internal/fs/put.go:70
github.com/alist-org/alist/v3/internal/fs.PutDirectly
/source/internal/fs/fs.go:97
github.com/alist-org/alist/v3/server/handles.FsStream
/source/server/handles/fsup.go:65
github.com/gin-gonic/gin.(*Context).Next
/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/alist-org/alist/v3/server/middlewares.FsUp
/source/server/middlewares/fsup.go:43
github.com/gin-gonic/gin.(*Context).Next
/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/alist-org/alist/v3/server/middlewares.Auth
/source/server/middlewares/auth.go:73
github.com/gin-gonic/gin.(*Context).Next
/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/alist-org/alist/v3/server/middlewares.StoragesLoaded
/source/server/middlewares/check.go:14
github.com/gin-gonic/gin.(*Context).Next
/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1
/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102
github.com/gin-gonic/gin.(*Context).Next
/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.LoggerWithConfig.func1
/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240
github.com/gin-gonic/gin.(*Context).Next
/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest
/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620
github.com/gin-gonic/gin.(*Engine).ServeHTTP
/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576
net/http.serverHandler.ServeHTTP
/usr/local/go/src/net/http/server.go:2938
net/http.(*conn).serve
/usr/local/go/src/net/http/server.go:2009
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1650