signer: [feature] Implement streaming signature support in minio-go #607
For 2., I was thinking that when the size is bigger than 5MiB the data is split into parts.
I believe it's 64MB, but Minio needs to know the full size of the data before it starts the upload.
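For context, here is a sketch of how a multipart client might pick a part size. The constants are illustrative (S3 caps multipart uploads at 10,000 parts; the 64MB floor is taken from the comment above rather than S3's own 5MiB minimum), and the point is that the computation needs the total size up front:

package main

import "fmt"

// Illustrative constants; minio-go's actual values may differ.
const (
	maxParts    = 10000            // S3 caps multipart uploads at 10,000 parts
	minPartSize = 64 * 1024 * 1024 // assumed 64MB floor, per the comment above
)

// optimalPartSize picks the smallest part size that fits objectSize
// into maxParts parts, rounded up to a multiple of minPartSize.
// Note that it needs the total object size before anything is sent.
func optimalPartSize(objectSize int64) int64 {
	partSize := (objectSize + maxParts - 1) / maxParts
	return ((partSize + minPartSize - 1) / minPartSize) * minPartSize
}

func main() {
	fmt.Println(optimalPartSize(5000000000)) // 67108864, i.e. one 64MB multiple
}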
Hi, is there any news here? Is it still a work in progress? Kind regards.
@krisis thank you for your fast response. I am really looking forward to using minio for handling my backup streams. :)
@krisis I do not want to annoy you, but do you have any idea when you will be able to fix this? Update: I had not noticed that you are working on #609 again. I checked out your feat/streaming-sig branch from Feb. 02. By the way, your changes there help me a lot, thanks!
@xxorde Apologies for not sharing updates on where this PR is. I have not had time to commit fully to this feature; I am making changes when I find time. I shall update this PR in a day or two with when we expect to complete this feature. Thanks for being patient.
@xxorde I have made some more changes to the PR. The memory consumption we discussed on Slack might be due to md5 computations performed for multipart upload. Could you share a snippet of your code which uses this streaming signature implementation? It would help me understand the memory usage pattern.
@krisis yes of course, here is an example.

package main

import (
	"flag"
	"log"
	"os"

	minio "github.com/minio/minio-go"
)

func main() {
	var object, bucket, accessKey, secretKey string
	location := "us-east-1"
	flag.StringVar(&bucket, "b", "stream-test", "Bucket name")
	flag.StringVar(&object, "o", "stream-test", "Object key name")
	flag.StringVar(&accessKey, "a", "", "accessKey")
	flag.StringVar(&secretKey, "s", "", "secretKey")
	flag.Parse()

	client, err := minio.New("127.0.0.1:9000", accessKey, secretKey, false)
	if err != nil {
		log.Fatal(err)
	}

	// Test if the bucket is there.
	exists, _ := client.BucketExists(bucket)
	if !exists {
		// Try to create the bucket.
		if err = client.MakeBucket(bucket, location); err != nil {
			log.Fatal(err)
		}
	}

	// Write everything from stdin to the object.
	n, err := client.PutObject(bucket, object, os.Stdin, "stream")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("Written %d bytes to %s in bucket %s.", n, object, bucket)
}

The program takes a stream on os.Stdin and writes it to an object in a bucket. The following command generates a 5GB stream of data and writes it with the tool:

base64 /dev/urandom | head -c 5000000000 | ./minio-minexample -a accessKey -s secretKey
2017/04/13 15:18:03 Written 5000000000 bytes to stream-test in bucket stream-test.

This uses ~1.7GB of memory.
Memory consumption and speed are great!

n, err := client.PutObjectStreaming(bucket, object, os.Stdin, 5000000000)

I do not understand why the size is hard-coded in your example. How do I use it when I do not know the size of the stream in advance?
There is a change in the API in #657 - let us know how this works for you. You don't need to specify the size.
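If the revised API from #657 drops the size argument, usage might look like this sketch (PutObjectStreaming taking just a reader is assumed from the reply above; endpoint and credentials are placeholders):

package main

import (
	"log"
	"os"

	minio "github.com/minio/minio-go"
)

func main() {
	client, err := minio.New("127.0.0.1:9000", "accessKey", "secretKey", false)
	if err != nil {
		log.Fatal(err)
	}

	// Stream stdin without knowing its length up front; the client
	// signs and uploads the data as it arrives.
	n, err := client.PutObjectStreaming("stream-test", "stream-test", os.Stdin)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("Written %d bytes.", n)
}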
@harshavardhana I checked out your branch. When I use it with a stream of unknown size, the upload still fails. If I follow the function calls and look into the source, I find:

// If size cannot be found on a stream, it is not possible
// to upload using streaming signature.
if size < 0 {
	return 0, ErrEntityTooSmall(size, bucketName, objectName)
}

Am I doing it wrong or is there something missing?
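As an aside, until that guard was lifted, one workaround was to spool the stream to a temporary file first so the total size is known. A sketch using only the standard library plus FPutObject (the v2-era four-argument signature is assumed; endpoint, credentials, and names are placeholders):

package main

import (
	"io"
	"log"
	"os"

	minio "github.com/minio/minio-go"
)

func main() {
	client, err := minio.New("127.0.0.1:9000", "accessKey", "secretKey", false)
	if err != nil {
		log.Fatal(err)
	}

	// Spool stdin to a temporary file so the total size is known.
	tmp, err := os.CreateTemp("", "spool-")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(tmp.Name())

	if _, err := io.Copy(tmp, os.Stdin); err != nil {
		log.Fatal(err)
	}
	tmp.Close()

	// FPutObject can stat the file, so the size < 0 guard is never hit.
	n, err := client.FPutObject("stream-test", "stream-test", tmp.Name(), "application/octet-stream")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("Written %d bytes.", n)
}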
Streaming signature has been implemented and published. Will send you an example separately, @xxorde. Thanks for your patience on this.
If you want to write a data stream to an S3 backend with PutObject(), the data is stored in a memory buffer until the stream is finished. Only after the stream is finished does Minio start to write it to the backend.

This has two main disadvantages:

1. The whole stream has to fit in memory, so memory consumption grows with the size of the stream.
2. The upload to the backend starts only after the stream has finished, instead of the data being uploaded in parts as it arrives.

@harshavardhana told me that a fix is already a work in progress, but I would like to open this issue to track the progress and to know when I can test it :)
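For reference, the "streaming signature" this feature refers to is AWS Signature V4's chunked upload scheme, in which each chunk is signed as it is sent, chaining off the previous chunk's signature, so the client never has to buffer the whole stream. A minimal sketch of the per-chunk signing (the signing key, seed signature, scope, and timestamp here are placeholders, not minio-go's internals):

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// hmacSHA256 is a small helper for the V4 signing chain.
func hmacSHA256(key []byte, data string) []byte {
	h := hmac.New(sha256.New, key)
	h.Write([]byte(data))
	return h.Sum(nil)
}

// chunkStringToSign builds the per-chunk string-to-sign defined by
// AWS Signature V4 streaming uploads. Each chunk's signature chains
// off the previous one, so data can be signed as it arrives.
func chunkStringToSign(timestamp, scope, prevSignature string, chunk []byte) string {
	emptyHash := sha256.Sum256(nil)
	chunkHash := sha256.Sum256(chunk)
	return strings.Join([]string{
		"AWS4-HMAC-SHA256-PAYLOAD",
		timestamp, // e.g. "20170413T000000Z"
		scope,     // e.g. "20170413/us-east-1/s3/aws4_request"
		prevSignature,
		hex.EncodeToString(emptyHash[:]),
		hex.EncodeToString(chunkHash[:]),
	}, "\n")
}

func main() {
	// Placeholders: in real V4 signing the key is derived from the
	// secret key, and the seed signature comes from the request headers.
	signingKey := []byte("derived-signing-key")
	seedSignature := "seed-signature-from-request-headers"
	chunk := []byte("hello world")

	sig := hex.EncodeToString(hmacSHA256(signingKey,
		chunkStringToSign("20170413T000000Z", "20170413/us-east-1/s3/aws4_request", seedSignature, chunk)))

	// On the wire each chunk is framed as:
	//   hex(chunk-size);chunk-signature=<sig>\r\n<data>\r\n
	fmt.Printf("%x;chunk-signature=%s\r\n", len(chunk), sig)
}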