SDK Panics When Concurrently Uploading 10 Large Blobs #260
Comments
I've encountered the exact same error today. I'm also using v0.13.0 and the code is running in a container using the official image. In my case I was uploading a single 15 MB file when the error occurred:
When I tried to run the same code with a smaller file (< 1 MB), the code no longer panicked and an error was returned instead:
I created the missing container and ran the code again. This time the 15 MB file was uploaded successfully. So apparently uploading a small file to a non-existent container yields an error, as expected, but uploading a 15 MB file to a non-existent container results in a panic.
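Not the commenter's actual code, which wasn't carried over here, but a minimal sketch of that scenario using azure-storage-blob-go directly. The account, container, and blob names are placeholders, and the exact option values are assumptions:

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"log"
	"net/url"
	"os"

	"github.com/Azure/azure-storage-blob-go/azblob"
)

func main() {
	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT")
	accountKey := os.Getenv("AZURE_STORAGE_KEY")

	cred, err := azblob.NewSharedKeyCredential(accountName, accountKey)
	if err != nil {
		log.Fatal(err)
	}
	p := azblob.NewPipeline(cred, azblob.PipelineOptions{})

	// "missing-container" is hypothetical; the point is that it was never created.
	u, _ := url.Parse(fmt.Sprintf("https://%s.blob.core.windows.net/missing-container", accountName))
	containerURL := azblob.NewContainerURL(*u, p)
	blobURL := containerURL.NewBlockBlobURL("big.bin")

	payload := bytes.Repeat([]byte("x"), 15*1024*1024) // roughly the 15 MB payload from the report

	_, err = azblob.UploadStreamToBlockBlob(context.Background(), bytes.NewReader(payload), blobURL,
		azblob.UploadStreamToBlockBlobOptions{BufferSize: 4 * 1024 * 1024, MaxBuffers: 3})
	if err != nil {
		// Expected outcome: a ContainerNotFound-style error, as seen with small payloads.
		// Reported outcome for large payloads: a panic inside the SDK instead.
		log.Fatal(err)
	}
}
```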
Could this be related to the new TransferManager introduced in v0.13.0 (v0.12.0...v0.13.0)? @zezha-msft, would you be able to confirm? Thanks!
@siminsavani-msft, could you please take a look?
Any update on this?
I think that before quitting in https://github.com/Azure/azure-storage-blob-go/blob/master/azblob/chunkwriting.go#L59, cp.close() could also be called (or at least c.wg.Wait(), to avoid committing the blocks).
Any updates here? We are stuck on v0.11.0, which has a memory leak, but we cannot upgrade because we get panics.
Hi @marwan-at-work! So sorry for the delay in getting this fix out. We have updated the SDK with a fix that should work. Please let us know if you encounter any issues!
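Illustrative only, not the SDK's real internals: a self-contained stand-in for the pattern this comment proposes, namely waiting on the WaitGroup for already-dispatched chunk goroutines before returning an error, so none of them keeps staging or committing blocks after the caller has quit. All names here (uploader, sendChunk, copyChunks) are hypothetical:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
)

// uploader is a stand-in for the SDK's internal chunk copier: it fans chunks
// out to goroutines and records the first error any of them reports.
type uploader struct {
	wg       sync.WaitGroup
	mu       sync.Mutex
	firstErr error
}

func (u *uploader) setErr(err error) {
	u.mu.Lock()
	defer u.mu.Unlock()
	if u.firstErr == nil {
		u.firstErr = err
	}
}

func (u *uploader) getErr() error {
	u.mu.Lock()
	defer u.mu.Unlock()
	return u.firstErr
}

// sendChunk dispatches one chunk to a worker goroutine.
func (u *uploader) sendChunk(ctx context.Context, n int) {
	u.wg.Add(1)
	go func() {
		defer u.wg.Done()
		if n == 3 { // simulate one block upload failing
			u.setErr(errors.New("staging block failed"))
		}
	}()
}

func copyChunks(ctx context.Context, u *uploader) error {
	for n := 0; n < 10; n++ {
		if err := u.getErr(); err != nil {
			u.wg.Wait() // the suggested addition: drain in-flight goroutines before quitting
			return err
		}
		u.sendChunk(ctx, n)
	}
	u.wg.Wait()
	return u.getErr()
}

func main() {
	fmt.Println(copyChunks(context.Background(), &uploader{}))
}
```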
Which version of the SDK was used?
v0.13.0
Which platform are you using? (ex: Windows, Linux, Debian)
Darwin and Linux
What problem was encountered?
Trying to concurrently upload 10 large files to the same container results in the following panic:
How can we reproduce the problem in the simplest way?
Grab 10 large files (300-400 megabytes) and run the following code concurrently:
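The original snippet was not preserved in this copy of the issue; the following is a minimal sketch of the kind of concurrent upload described, assuming gocloud.dev's azureblob driver (credentials taken from the environment) and placeholder container, blob, and file names:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"os"
	"sync"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/azureblob" // registers the azblob:// scheme
)

func main() {
	ctx := context.Background()

	// "my-container" is a placeholder for the target container.
	bucket, err := blob.OpenBucket(ctx, "azblob://my-container")
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()

			// Placeholder paths; each file is 300-400 MB in the reported scenario.
			f, err := os.Open(fmt.Sprintf("large-%d.bin", i))
			if err != nil {
				log.Println(err)
				return
			}
			defer f.Close()

			// nil options: gocloud's default buffer size (5 MB per the report below).
			w, err := bucket.NewWriter(ctx, fmt.Sprintf("blob-%d", i), nil)
			if err != nil {
				log.Println(err)
				return
			}
			if _, err := io.Copy(w, f); err != nil {
				log.Println(err)
			}
			if err := w.Close(); err != nil {
				log.Println(err)
			}
		}(i)
	}
	wg.Wait()
}
```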
Even though I am using gocloud.dev, you can see that the panic actually happens inside this library, which is why I've opened an issue here.
Thanks!
Have you found a mitigation/solution?
Passing a smaller buffer size such as 1 MB (as opposed to gocloud's default of 5 MB) makes uploads much slower, but they tend to succeed more often. However, a larger buffer size should not cause a panic in the first place; in general, the library should never panic, even if it is somehow being misused.
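In gocloud terms, and building on the sketch above, this mitigation amounts to replacing the NewWriter call with an explicit buffer size via blob.WriterOptions; the exact value is the one mentioned in the report, not a recommendation:

```go
// Replaces the NewWriter call in the earlier sketch: an explicit ~1 MB buffer
// instead of the default, reportedly slower but less likely to hit the panic.
w, err := bucket.NewWriter(ctx, fmt.Sprintf("blob-%d", i), &blob.WriterOptions{
	BufferSize: 1 << 20, // ~1 MB per staged part
})
```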