
Add API to compose objects through server-side copying #715

Merged (1 commit into minio:master, Jul 2, 2017)

Conversation

donatello (Member):

The new ComposeObject API provides a way to create objects by
concatenating existing objects. It takes a list of source objects
along with optional start-end range specifications, and concatenates
them into a new object.

The API supports:

  • Create an object from up to 10,000 existing objects.
  • Create an object of any size up to 5 TiB.
  • Create objects even from existing source objects larger than
    5 GiB (the part-size limit in multipart PUTs).
  • Support copy-conditions on each source object separately.
  • Support SSE-C (i.e. Server-Side-Encryption with Customer provided
    key) for both encryption of destination object, and decryption of
    source objects.
  • Support for setting/replacing custom metadata in the destination
    object.

This API has been used to refactor the CopyObject API - that API now
supports source objects of any size, SSE-C for source and destination,
and setting custom metadata.

Finally, the type for a source object (SourceInfo) allows the user to set custom headers (by hand) for further extended usage (such as using AWS KMS based SSE).
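As a rough illustration of the first two limits above, here is a hypothetical validation sketch (validateComposeSources and maxObjectSize are invented names for this example; maxPartsCount matches the constant used in the reviewed code; this is not the library's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
)

const (
	maxPartsCount = 10000                                  // max number of source objects / parts
	maxObjectSize = int64(5) * 1024 * 1024 * 1024 * 1024 // 5 TiB, the S3 maximum object size
)

// validateComposeSources sketches the input checks implied by the
// limits above, given the sizes of the source objects (or ranges).
func validateComposeSources(srcSizes []int64) error {
	if len(srcSizes) == 0 {
		return errors.New("need at least one source object")
	}
	if len(srcSizes) > maxPartsCount {
		return fmt.Errorf("cannot copy more than %d source objects", maxPartsCount)
	}
	var total int64
	for _, s := range srcSizes {
		total += s
	}
	if total > maxObjectSize {
		return fmt.Errorf("destination size %d exceeds the 5 TiB limit", total)
	}
	return nil
}

func main() {
	// Individual sources may each exceed the 5 GiB part-size limit.
	fmt.Println(validateComposeSources([]int64{6 << 30, 6 << 30})) // → <nil>
}
```

Note that no per-source size limit appears here: sources larger than 5 GiB are handled by splitting them across multiple upload-part-copy requests, as discussed later in this thread.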

customMetadata map[string]string
}

// Function to create a DestinationInfo instance - encryptionInfo may


comment on exported function NewDestinationInfo should be of the form "NewDestinationInfo ..."

"github.com/minio/minio-go/pkg/s3utils"
)

// A type representing Server-Side-Encryption parameters specified by


comment on exported type SSEInfo should be of the form "SSEInfo ..." (with optional leading article)

@donatello donatello force-pushed the compose-object branch 4 times, most recently from 9d32a57 to 1a4a0dc Compare June 20, 2017 02:23
@donatello donatello changed the title WIP: Add API to compose objects through server-side copying Add API to compose objects through server-side copying Jun 20, 2017
donatello (Member Author):

Note to reviewers: This PR breaks compatibility by changing the CopyObject API signature. It was deemed necessary in order to bring in an improved version.

krisis (Member) left a comment:

Overall the PR looks good to me. There are a few comments that need to be addressed before we merge.

return httpRespToErrorResponse(resp, dst.bucket, dst.object)
}
// Decode copy response on success.
cpObjRes := copyObjectResult{}
Member:

The following snippet of code only confirms that on copy object success the XML response parses fine into a valid copyObjectResult{}. If that is not important, we can skip reading the response body and decoding it.

Member Author:

You're right, it is not required here.

docs/API.md Outdated
@@ -61,6 +61,7 @@ func main() {
|[`ListIncompleteUploads`](#ListIncompleteUploads) | [`RemoveIncompleteUpload`](#RemoveIncompleteUpload) | | | [`ListenBucketNotification`](#ListenBucketNotification) |
| | [`FPutObject`](#FPutObject) | | | |
| | [`FGetObject`](#FGetObject) | | | |
| | [`ComposeObject`](#FGetObject) | | | |
Member:

s/(#FGetObject)/(#ComposeObject)/

@@ -559,6 +560,58 @@ if err != nil {
}
```

<a name="ComposeObject"></a>
Member:

Documentation on CopyObject needs to be updated too.

Member Author:

Done!

@@ -559,6 +560,58 @@ if err != nil {
}
```

<a name="ComposeObject"></a>
### ComposeObject(dst DestinationInfo, srcs []SourceInfo) error
Member:

We need to expand on DestinationInfo and SourceInfo types as well.

Member Author:

NewSSEInfo, NewSourceInfo and NewDestinationInfo are added in docs. The types are only documented via godoc for now, in keeping with the existing style (e.g. CopyCondition was documented only via godoc). We may add documentation for types in the API.md later but that is a matter for discussion.


// Prepare source decryption key (here we assume same key to
// decrypt all source objects.)
decKey := NewSSEInfo([]byte{1, 2, 3}, "")
Member:

shouldn't this be minio.NewSSEInfo?

docs/API.md Outdated
srcs := []minio.SourceInfo{src1, src2, src3}

// Prepare destination encryption key
encKey := NewSSEInfo([]byte{8, 9, 0}, "")
Member:

Shouldn't this be minio.NewSSEInfo?

docs/API.md Outdated
```go
// Prepare source decryption key (here we assume same key to
// decrypt all source objects.)
decKey := NewSSEInfo([]byte{1, 2, 3}, "")
Member:

Shouldn't this be minio.NewSSEInfo?

srcs := []minio.SourceInfo{src1, src2, src3}

// Prepare destination encryption key
encKey := NewSSEInfo([]byte{8, 9, 0}, "")
Member:

Shouldn't this be minio.NewSSEInfo?

@@ -42,24 +42,28 @@ func main() {
// Enable trace.
// s3Client.TraceOn(os.Stderr)

// Source object
src := NewSourceInfo("my-sourcebucketname", "my-sourceobjectname", nil)
Member:

Shouldn't this be minio.NewSourceInfo?

// src.SetMatchETagExceptCond("31624deb84149d2f8ef9c385918b653a")

// Destination object
dst := NewDestinationInfo("my-bucketname", "my-objectname", nil)
Member:

Shouldn't this be minio.NewDestinationInfo?

@donatello donatello force-pushed the compose-object branch 3 times, most recently from 2c19576 to 249d6a2 Compare June 21, 2017 23:59
donatello (Member Author) commented Jun 22, 2017:

This is ready for review.

}

// NewSSEInfo - specifies (binary or un-encoded) encryption key and
// algorithm name. If algo is empty, it defaults to "AES256".
Member:

Perhaps mention the AES256 AWS S3 link.

Member Author:

Done.


// internal method to output the generated SSE headers.
func (e *SSEInfo) getSSEHeaders(isCopySource bool) map[string]string {
if e == nil {
Member:

Is this if e == nil necessary? - perhaps okay to let the function fail?

donatello (Member Author), Jun 22, 2017:

No, it is needed here. When getSSEHeaders is called and no key was provided during initialization, it returns safely. The other option is to check whether the key is nil before calling this function, but I found that more repetitive.

Member:

I see. Should we ignore it outside rather than letting this call succeed?

donatello (Member Author), Jun 22, 2017:

If we checked whether the key is nil outside, then each use of this function would look like, for example:

if src.key != nil {
    for k, v := range src.key.getSSEHeaders(false) {
        // do stuff.
    }
}

The outer if block would be repeated at each usage of this function. In the current implementation it is neatly tucked away inside getSSEHeaders, which returns nil; that return value can be safely used in a for-range loop.
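The pattern donatello describes relies on Go allowing method calls on nil pointer receivers, and on for-range over a nil map being a no-op. A standalone illustration with simplified types (this is not the library's actual code):

```go
package main

import "fmt"

// sseInfo is a simplified stand-in for the library's SSEInfo type.
type sseInfo struct {
	algo string
}

// getHeaders returns nil for a nil receiver, so callers can range
// over the result without any nil check at the call site.
func (e *sseInfo) getHeaders() map[string]string {
	if e == nil {
		return nil
	}
	return map[string]string{
		"x-amz-server-side-encryption-customer-algorithm": e.algo,
	}
}

func main() {
	var key *sseInfo // no encryption key was provided
	// Safe: calling a method on a nil receiver is legal, and
	// ranging over the resulting nil map executes zero iterations.
	for k, v := range key.getHeaders() {
		fmt.Println(k, v)
	}
	fmt.Println("no headers emitted")
}
```

The reviewer's counter-concern also holds: a nil receiver that silently succeeds can mask a genuine bug, which is the trade-off being discussed.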

Member:

Correct, but it's quite an unnatural usage; usually an unallocated struct means a bug of some sort, and we are hiding it. That is my only concern.

// `REPLACE`, so that metadata headers from the source are not copied
// over.
func (d *DestinationInfo) getCustomHeadersMap(withCopyDirectiveHeader bool) map[string]string {
if d == nil || d.customMetadata == nil {
Member:

Same as above.

Member Author:

Again same reasoning as before - this is less repetitive.

if isCopySource {
cs = "copy-source-"
}
hsum := md5.Sum(e.key)
Member:

You could use sumMD5() ?

Member Author:

👍


start, end int64

// collection of headers to send with every upload-part-copy
Member:

// collect of request headers sent with every upload-part-copy

Member Author:

Updated to:

	// Headers to send with the upload-part-copy request involving
	// this source object.

The intention is to allow users to set custom headers on their own, in case they need some feature not encapsulated by this library.

}

// Set the source header
r.Headers.Set("x-amz-copy-source", s3utils.EncodePath(fmt.Sprintf("/%s/%s", bucket, object)))
Member:

Perhaps use path.Join()? It is okay to not have the preceding /.

Member Author:

Ok, just used + here.

constants.go Outdated
// putObject behaves internally as multipart.
const minPartSize = 1024 * 1024 * 64

// copyPartSize - default (and maximum) part size to copy in a
// copy-object request.
const copyPartSize = 1024 * 1024 * 1024 * 5
Member:

why not 64MB? no reason for this to be 5MB?

Member Author:

This number is actually 5 GiB. I will update the comment to make this more apparent.

Since this is only server-side copying, the most efficient thing to do is to reduce the number of network requests - this is done by copying the largest possible object-part in each request.

Member:

Actually, let's not do that. 5GiB causes the erasure-coding block to become super big on a 4-disk setup. Say, for example, 5GiB × 2, so each disk shard would be 2.5GiB. We should avoid this worst case, since it could easily cause 2-3 sec worth of delay for ReadFileWithVerify().

We should pick 525MB, which would allow up to 10,000 parts for a maximum 5TB object. In that case each shard will be at most 256MB.
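The arithmetic behind the 525 MB suggestion can be checked quickly: 5 TiB split across 10,000 parts gives parts of roughly 524 MiB each. A minimal check:

```go
package main

import "fmt"

func main() {
	const maxObjectSize = int64(5) * 1024 * 1024 * 1024 * 1024 // 5 TiB
	const maxParts = 10000

	// Smallest part size that still covers 5 TiB within 10,000 parts.
	minPart := maxObjectSize / maxParts
	fmt.Printf("%d bytes = %.1f MiB\n", minPart, float64(minPart)/(1024*1024))
	// → 549755813 bytes = 524.3 MiB, i.e. roughly the 525 MB suggested
}
```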

donatello (Member Author):

Review comments addressed!

var currOffset, partSize int64
for currOffset < size {
if currOffset+copyPartSize < size {
partSize = copyPartSize
Member:

Just use

       // Calculate the optimal parts info for a given size.
        totalPartsCount, partSize, lastPartSize, err := optimalPartInfo(fileSize)
        if err != nil {
                return 0, err
        }

Since you know the right file size it is better to know the optimal part size. 5GiB is not an ideal size here.

Member Author:

I understand your point about the bad case for erasure coding, but there is another problem. If we use up more parts for the first few sources, there may be too many parts for later sources in the input, exceeding the max parts limit. I will think a bit and figure out a working algorithm.


// Initiate copy object.
err = s3Client.CopyObject("my-bucketname", "my-objectname", "/my-sourcebucketname/my-sourceobjectname", copyConds)
err = s3Client.CopyObject(dst, src)
Member:

If the source object was uploaded with client-side encryption and the respective headers saved, does CopyObject() take care of those headers as well?

src3.SetMatchETag("5918b653a31624deb84149d2f8ef9c38")

// Create slice of sources.
srcs := []minio.SourceInfo{src1, src2, src3}
Member:

If source objects were uploaded with client side encryption, what is the expected behavior of ComposeObject()?

Member Author:

Encrypted sources must be decrypted for copying - see https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html - relevant quote:

If the source object is encrypted using server-side encryption with a customer-provided encryption key, you must use the following headers providing encryption information so that Amazon S3 can decrypt the object for copying.

So ComposeObject also does the same decrypt-and-copy - each encrypted source must have keys specified for decryption. (Note that this is server-side encryption, not client-side.)

Member:

I understand server side encryption. The question is asked for client side encryption specifically.

Member Author:

Oops, sorry, I misread. In the case of client-side encryption, the encrypted content would just be copied transparently by the server (it is not aware of client-side encrypted content) - most likely, concatenating client-side-encrypted objects arbitrarily will lead to garbled data, but the server cannot detect this - it is a user error.

Member:

Is it possible to detect this and error out in this case? I see minio-java sets the headers below.

    headerMap.put("x-amz-meta-x-amz-key", encDataKey);
    headerMap.put("x-amz-meta-x-amz-iv", ivString);
    headerMap.put("x-amz-meta-x-amz-matdesc", "{}");

Member Author:

No, I do not think we can do this correctly - we cannot try to interpret custom metadata set by the user or by other tools the user uses.

Member:

As per http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html, client-side encryption uses the x-amz-meta-x-amz-key header. If client-side encryption was used for any of the source objects, we should error out rather than performing the compose. The current behavior leads to data corruption if source objects are deleted after the compose.

Member:

does this comment make sense?

Member Author:

I am not sure it is appropriate to do this, as the encryption scheme is a client-side scheme. Copying encrypted objects may be a valid operation in certain situations (client logic may be complicated, and it may be an intended operation - there is no need for the library to stop this). It needs further discussion, and a call can be taken after releasing this.

If only one object is copied (e.g. via CopyObject, with no explicit user metadata headers passed in), this problem is mitigated, as the headers from the source will be copied to the destination.

Member:

As discussed with @abperiasamy, issue #738 has been opened; this needs to be addressed after this PR is merged.

// putObject behaves internally as multipart.
const minPartSize = 1024 * 1024 * 64

// copyPartSize - default (and maximum) part size to copy in a
// copy-object request (5GiB)
const copyPartSize = 1024 * 1024 * 1024 * 5
Member:

If humanize package is already in use, you could use that.

Member Author:

Using that lib for parsing requires checking for errors. It does not add much.

balamurugana (Member), Jun 27, 2017:

I was talking about using const copyPartSize = 5 * humanize.GiByte

// Return nil on success.
return nil
// CopyObject - copy a source object into a new object
func (c Client) CopyObject(dst DestinationInfo, src SourceInfo) error {
Member:

Why is this function made to break backward compatibility?

Member:

does this comment make sense?

Member Author:

Without breaking backward compatibility, we would have had a very complex function (or functions) to support copy-object with decryption of the source, encryption of the destination, multiple copy-conditions, sub-ranges of the source, and setting user metadata headers. Since this is a significant improvement to the existing API, it was agreed to break compatibility and make a new release.

Member:

ok. got it.

}

// internal method to output the generated SSE headers.
func (e *SSEInfo) getSSEHeaders(isCopySource bool) map[string]string {
Member:

Why is e the short form for SSEInfo? I think you can name it better.

// DestinationInfo - type with information about the object to be
// created via server-side copy requests, using the Compose API.
type DestinationInfo struct {
bucket, object string
Member:

I think bucket and object can be listed in two lines for better readability.

}
for k, v := range d.customMetadata {
if !strings.HasPrefix(k, "x-amz-meta-") {
k = "x-amz-meta-" + k
Member:

Is it required that the key start with x-amz-meta-? If so, you could error out in new(). If two keys named x-amz-meta-my-key and my-key are given, one key/value is lost.

Member Author:

This is handled in NewDestinationInfo() validation now.
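The collision the reviewer points out is easy to demonstrate with a small sketch of the prefixing loop (prefixMeta is an invented name for this illustration; the loop body mirrors the diff above):

```go
package main

import (
	"fmt"
	"strings"
)

// prefixMeta mirrors the loop in the diff above: keys without the
// x-amz-meta- prefix get it added. Illustration only, not the
// library's actual code.
func prefixMeta(meta map[string]string) map[string]string {
	out := make(map[string]string, len(meta))
	for k, v := range meta {
		if !strings.HasPrefix(k, "x-amz-meta-") {
			k = "x-amz-meta-" + k
		}
		out[k] = v
	}
	return out
}

func main() {
	// "my-key" and "x-amz-meta-my-key" collide after prefixing,
	// so one of the two values is silently lost.
	m := prefixMeta(map[string]string{
		"my-key":            "a",
		"x-amz-meta-my-key": "b",
	})
	fmt.Println(len(m)) // → 1
}
```

This is the case that the validation added to NewDestinationInfo() is meant to reject up front.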

type SourceInfo struct {
bucket, object string

start, end int64
Member:

same as above

if withCopyDirectiveHeader {
r["x-amz-metadata-directive"] = "REPLACE"
}
for k, v := range d.customMetadata {
Member:

Can you add a comment to this for loop?

return err
}
if len(srcs) > maxPartsCount {
return ErrInvalidArgument("Cannot copy more than 10000 source objects.")
Member:

Can you replace 10000 with maxPartsCount ?

if err := s3utils.CheckValidObjectName(dst.object); err != nil {
return err
}
if len(srcs) > maxPartsCount {
Member:

Can you also return an error when srcs is empty? This function is a little long; returning an error early will make it easier to read.

r := SourceInfo{
bucket: bucket,
object: object,
start: -1,
Member:

I know start = -1 means the start is unspecified, but we could also set it to zero by default to simplify the code. Do we really need the -1 state?

Member Author:

Before the size of the source is known, 0 <= start <= end represents a valid range specification so we need to use start = -1 to represent an unspecified range. I have added lots of comments to make this clearer now. Ref: see doc for x-amz-copy-source-range at https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
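The need for the sentinel can be shown with a simplified stand-in (srcRange and setRange are invented for this illustration; the library's SourceInfo may differ):

```go
package main

import (
	"errors"
	"fmt"
)

// srcRange is a simplified stand-in for SourceInfo's range fields.
type srcRange struct {
	start, end int64 // start == -1 means "whole object"
}

func newSrcRange() srcRange { return srcRange{start: -1} }

// setRange validates a byte range before the object's size is known.
// Any 0 <= start <= end is a legal x-amz-copy-source-range, so 0
// cannot double as the "unset" marker - hence the -1 sentinel.
func (r *srcRange) setRange(start, end int64) error {
	if start < 0 || start > end {
		return errors.New("range requires 0 <= start <= end")
	}
	r.start, r.end = start, end
	return nil
}

func main() {
	r := newSrcRange()
	fmt.Println(r.start)                 // → -1 (no range set: copy the full object)
	fmt.Println(r.setRange(0, 0) == nil) // → true (a one-byte range at offset 0 is valid)
}
```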

// `x-amz-meta-` if they are not prefixed in the input. If `metadata`
// is not specified, and only one source

// encryptionInfo may be nil to indicate that no-encryption is to be


comment on exported function NewDestinationInfo should be of the form "NewDestinationInfo ..."


@donatello donatello force-pushed the compose-object branch 3 times, most recently from ef81f52 to 8c55995 Compare June 22, 2017 23:12

harshavardhana (Member):

ping @krisis @balamurugana do you guys have more reviews on this PR? @donatello are you going to send an update?


// partsRequired is ceiling(size / copyPartSize)
func partsRequired(size int64) int {
var r int64 = size / copyPartSize


should omit type int64 from declaration of var r; it will be inferred from the right-hand side

Member:

This is not needed; you can make this int64(size / copyPartSize).

@donatello donatello force-pushed the compose-object branch 2 times, most recently from e001a04 to 6316b75 Compare June 27, 2017 01:45
donatello (Member Author):

This PR is updated again.

The previous implementation had a bug.

Example case of the bug: two sources of size s are given for concatenation, where s = 5GiB+1. Compose object would then try to upload 4 parts of sizes 5GiB, 1, 5GiB, 1. This fails because a part other than the last is smaller than the 5MiB minimum part size. In the fix, we make every part of a particular source (almost) the same size, so for the above example it generates 4 parts sized 2.5GiB+1, 2.5GiB, 2.5GiB+1, 2.5GiB.
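The splitting strategy described above can be sketched as follows (evenSplits is an illustrative name, not necessarily the library's; partsRequired follows the signature shown earlier in this thread):

```go
package main

import "fmt"

const copyPartSize = int64(5) * 1024 * 1024 * 1024 // 5 GiB

// partsRequired is ceiling(size / copyPartSize).
func partsRequired(size int64) int64 {
	r := size / copyPartSize
	if size%copyPartSize > 0 {
		r++
	}
	return r
}

// evenSplits cuts size into partsRequired(size) parts that differ by
// at most one byte, so no non-final part can fall below the minimum
// part size. Sketch of the fix, not the library's exact code.
func evenSplits(size int64) []int64 {
	n := partsRequired(size)
	base, extra := size/n, size%n
	parts := make([]int64, n)
	for i := range parts {
		parts[i] = base
		if int64(i) < extra {
			parts[i]++ // the first `extra` parts carry one extra byte
		}
	}
	return parts
}

func main() {
	const gib = int64(1) << 30
	// The bug case: a 5GiB+1 source now splits as 2.5GiB+1, 2.5GiB
	// instead of 5GiB, 1.
	fmt.Println(evenSplits(5*gib + 1))
}
```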


tejaycar (Contributor):

@harshavardhana Are there really outstanding changes still? It looks like GitHub is blocking the merge, but the only "requested change" I can see is also marked as outdated...

harshavardhana (Member):

> @harshavardhana Are there really outstanding changes still? It looks like GitHub is blocking the merge, but the only "requested change" I can see is also marked as outdated...

@tejaycar it's not - it was waiting on the other reviewers.

harshavardhana (Member):

@balamurugana did you get a chance to look at the latest changes?

tejaycar (Contributor):

Ah, that's great news. Are you planning to cut a new release when this is merged? We're really hoping to get it pulled into the next release of Mattermost, but would prefer to do that from a formal release rather than master.

harshavardhana (Member):

> Ah, that's great news. Are you planning to cut a new release when this is merged? We're really hoping to get it pulled into the next release of Mattermost, but would prefer to do that from a formal release rather than master.

Yes @tejaycar, it will be a major bump since we broke the existing API.

tejaycar (Contributor):

Sweet! You guys rock, I'm looking forward to it.

The new ComposeObject API provides a way to create objects by
concatenating existing objects. It takes a list of source objects
along with optional start-end range specifications, and concatenates
them into a new object.

The API supports:

* Create an object from up to 10,000 existing objects.
* Create objects up to 5TiB in size, from source objects of any size.
* Support copy-conditions on each source object separately.
* Support SSE-C (i.e. Server-Side-Encryption with Customer provided
  key) for both encryption of destination object, and decryption of
  source objects.
* Support for setting/replacing custom metadata in the destination
  object.

This API has been used to refactor the CopyObject API - that API now
supports source objects of any size, SSE-C for source and destination,
and setting custom metadata.
donatello (Member Author):

More tests are added. I believe the PR is complete now and ready for merge.

balamurugana (Member) left a comment.

@harshavardhana harshavardhana merged commit 129fe89 into minio:master Jul 2, 2017
poornas added a commit to poornas/minio-go that referenced this pull request Jul 3, 2017