SDK Panics When Concurrently Uploading 10 Large Blobs #260

Closed
marwan-at-work opened this issue Mar 2, 2021 · 7 comments

@marwan-at-work

marwan-at-work commented Mar 2, 2021

Which version of the SDK was used?

v0.13.0

Which platform are you using? (ex: Windows, Linux, Debian)

Darwin and Linux

What problem was encountered?

Trying to concurrently upload 10 large files to the same container results in the following panic:

uploading file of size 393472124
uploading file of size 347726145
uploading file of size 349536845
uploading file of size 355056046
uploading file of size 347411174
uploading file of size 355576358
uploading file of size 393468713
uploading file of size 393473126
uploading file of size 306350950
uploading file of size 349612217

panic: send on closed channel

goroutine 279 [running]:
github.com/Azure/azure-storage-blob-go/azblob.staticBuffer.Put(0xc000502a20, 0x800000, 0xc0005029c0, 0xc00bb00000, 0x800000, 0x800000)
        /Users/home/go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.13.0/azblob/highlevel.go:427 +0x53
github.com/Azure/azure-storage-blob-go/azblob.(*copier).write(0xc0000c01a0, 0xc00bb00000, 0x800000, 0x800000, 0xc01ac53200, 0x58)
        /Users/home/go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.13.0/azblob/chunkwriting.go:166 +0x413
github.com/Azure/azure-storage-blob-go/azblob.(*copier).sendChunk.func1()
        /Users/home/go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.13.0/azblob/chunkwriting.go:136 +0xc5
github.com/Azure/azure-storage-blob-go/azblob.NewStaticBuffer.func1(0xc0005029c0)
        /Users/home/go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.13.0/azblob/highlevel.go:406 +0x3b
created by github.com/Azure/azure-storage-blob-go/azblob.NewStaticBuffer
        /Users/home/go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.13.0/azblob/highlevel.go:404 +0xe5
exit status 2

How can we reproduce the problem in the simplest way?

Grab 10 large files (300-400 megabytes) and run the following code concurrently:

import (
	"context"
	"fmt"
	"io"

	"gocloud.dev/blob"
)

// copy streams the contents of rdr into the bucket under key.
// (rdr is passed in explicitly here; it is an io.Reader over one of the large files.)
func copy(ctx context.Context, key string, rdr io.Reader, azbucket *blob.Bucket) error {
	w, err := azbucket.NewWriter(ctx, key, &blob.WriterOptions{})
	if err != nil {
		return fmt.Errorf("azbucket.NewWriter: %w", err)
	}
	defer w.Close() // ensures the writer is closed on early error returns
	_, err = io.Copy(w, rdr)
	if err != nil {
		return fmt.Errorf("io.Copy: %w", err)
	}
	err = w.Close()
	if err != nil {
		return fmt.Errorf("azWriter.Close: %w", err)
	}
	return nil
}
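Roughly speaking, the driver looks like the sketch below; the uploadAll helper, the file paths, and the errgroup usage are placeholders for illustration rather than my exact code:

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	"gocloud.dev/blob"
	"golang.org/x/sync/errgroup"
)

// uploadAll is a hypothetical driver: it uploads every file in paths
// concurrently using the copy function from the snippet above.
func uploadAll(ctx context.Context, azbucket *blob.Bucket, paths []string) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, p := range paths {
		p := p // capture the loop variable for the goroutine
		g.Go(func() error {
			f, err := os.Open(p)
			if err != nil {
				return err
			}
			defer f.Close()
			fi, err := f.Stat()
			if err != nil {
				return err
			}
			fmt.Println("uploading file of size", fi.Size())
			return copy(ctx, filepath.Base(p), f, azbucket)
		})
	}
	return g.Wait()
}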

Even though I am using gocloud.dev, you can see that the panic actually happens inside this library, which is why I've opened the issue here.

Thanks!

Have you found a mitigation/solution?

Passing a smaller buffer size such as 1 MB (as opposed to gocloud's default of 5 MB) makes uploads much slower, but they tend to succeed more often. A larger buffer size should not cause a panic in any case; in general, the library should never panic, even if it is somehow being misused.
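For completeness, this is roughly what the workaround looks like on the gocloud side; the helper name is made up, and the only relevant part is the BufferSize field of blob.WriterOptions (in bytes):

import (
	"context"

	"gocloud.dev/blob"
)

// newSmallBufferWriter is a hypothetical helper showing the workaround:
// shrink the write buffer down to 1 MB.
func newSmallBufferWriter(ctx context.Context, b *blob.Bucket, key string) (*blob.Writer, error) {
	return b.NewWriter(ctx, key, &blob.WriterOptions{
		BufferSize: 1 << 20, // 1 MB; smaller chunks fail less often but upload more slowly
	})
}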

@QuintenDV

I've encountered the exact same error today.

I'm also using v0.13.0 and the code is running in a container using the official golang:1.15 image.

In my case, I was uploading a single 15 MB file when the error occurred:

uploading file of size: 15023723

panic: send on closed channel
goroutine 334 [running]:
github.com/Azure/azure-storage-blob-go/azblob.staticBuffer.Put(0xc0007eab40, 0x100000, 0xc0007eaae0, 0xc00116e000, 0x1000
    /go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.13.0/azblob/highlevel.go:427 +0x53
github.com/Azure/azure-storage-blob-go/azblob.(*copier).write(0xc0003da340, 0xc00116e000, 0x100000, 0x100000, 0xc0004a6de
    /go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.13.0/azblob/chunkwriting.go:166 +0x402
github.com/Azure/azure-storage-blob-go/azblob.(*copier).sendChunk.func1()
    /go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.13.0/azblob/chunkwriting.go:136 +0xc5
github.com/Azure/azure-storage-blob-go/azblob.NewStaticBuffer.func1(0xc0007eaae0
    /go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.13.0/azblob/highlevel.go:406 +0x3b
created by github.com/Azure/azure-storage-blob-go/azblob.NewStaticBuffer
    /go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.13.0/azblob/highlevel.go:404 +0xe5

When I tried to run the same code with a smaller file (<1 MB), the code no longer panicked and an error was returned:

The specified container does not exist.

I created the missing container and ran the code again. This time the 15 MB file was uploaded successfully.

So apparently uploading a small file to a non-existent container yields an error, as expected, but uploading a 15 MB file to a non-existent container results in a panic.
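For what it's worth, that behaviour is consistent with a plain send-on-closed-channel race: if the error path tears down a shared buffer channel while a chunk goroutine is still returning its buffer, the send panics. A stripped-down illustration of that shape (this is not the SDK's code, just the general pattern suggested by the stack trace):

package main

import "sync"

// Minimal illustration of the race, not the SDK's actual code: a worker
// returns a buffer to a shared channel while another path closes it.
func main() {
	buffers := make(chan []byte, 1)
	var wg sync.WaitGroup

	wg.Add(1)
	go func() {
		defer wg.Done()
		buffers <- make([]byte, 1<<20) // worker hands its buffer back
	}()

	close(buffers) // error path closes the pool without waiting
	wg.Wait()      // racy: often "panic: send on closed channel"
}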

@marwan-at-work
Author

This might be related to the new TransferManager introduced in v0.13.0? (diff: v0.12.0...v0.13.0)

@zezha-msft would you be able to confirm? Thanks!

@zezha-msft
Contributor

@siminsavani-msft could you please take a look?

@rokukapsch

Any update on this?
I also get this panic when calling cancel() during UploadStreamToBlockBlob.

@rokukapsch

rokukapsch commented Apr 12, 2021

I think that before quitting in https://github.com/Azure/azure-storage-blob-go/blob/master/azblob/chunkwriting.go#L59, cp.close() should also be called (or at least c.wg.Wait(), to avoid committing the blocks).
This may also be related to #257.
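In generic terms, the suggestion amounts to waiting for the in-flight chunk goroutines before the shared buffer channel is closed, along these lines (again, not the SDK's actual code):

import "sync"

// Generic shape of the suggested ordering, not the SDK's actual code:
// wait for every in-flight sender before closing the shared channel.
func drainAndClose(buffers chan []byte, wg *sync.WaitGroup) {
	wg.Wait()      // let all chunk goroutines return their buffers first
	close(buffers) // only now is closing the channel safe
}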

@marwan-at-work
Author

Any updates here? We are stuck on v0.11.0, which has a memory leak, but we cannot upgrade because of these panics.

@siminsavani-msft
Contributor

Hi @marwan-at-work! So sorry for the delay in getting this fixed. We have updated the SDK with a fix that should work. Please let us know if you encounter any issues!
