s3.putStream()
S3 helper for uploading a node.js Readable Stream as an S3 object.
s3.putStream(path, stream, cannedAcl, headers, callback)
- 'path' - the S3 path.
- 'stream' - the Readable Stream instance.
- 'cannedAcl' - the S3 Canned ACL.
- 'headers' - an object containing the HTTP headers you may want to pass to the PUT request, such as the x-amz-* metadata headers.
- 'callback' - the callback that is executed when the processing finishes. It receives two arguments: error and result.
- If there's an error, the callback receives the error argument as an Error instance.
- If the error argument is null, the result argument contains the response.headers object as returned by the node.js core HTTPS client.
Since S3 does not fully implement HTTP/1.1, using Transfer-Encoding: chunked for the PUT request is not supported. This means that the headers object must contain the Content-Length information, i.e. the stream length. This is documented in the Amazon S3 Technical FAQ - Q: Is Transfer-Encoding: chunked supported by Amazon S3?; A: Transfer-Encoding: chunked is not supported. The PUT operation must include a Content-Length header.
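As an illustration, here is a minimal sketch that uploads a local file. The client setup, bucket, and file names are assumptions made for the example; the important part is that fs.stat() supplies the Content-Length before the upload starts:

```js
var fs = require('fs');

// assumed client setup and bucket; adjust to your own configuration
var s3 = require('aws2js').load('s3', 'accessKeyId', 'secretAccessKey');
s3.setBucket('my-bucket');

var file = '/path/to/report.pdf'; // hypothetical local file

fs.stat(file, function (err, stats) {
    if (err) {
        throw err;
    }
    s3.putStream('/report.pdf', fs.createReadStream(file), 'private', {
        'content-length': stats.size,     // mandatory: chunked PUTs are not supported
        'content-type': 'application/pdf' // see the Content-Type note below
    }, function (error, result) {
        if (error) {
            console.error(error);
        } else {
            console.log(result); // the response.headers object
        }
    });
});
```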
The lack of chunked transfer also means that the Content-MD5 integrity check is impossible for arbitrary streams. Computing the MD5 hash requires reading the whole stream, but the HTTP headers must be sent before the HTTP body, so there's a chicken-and-egg problem. Only the chunked transfer mode allows HTTP trailers, which act like HTTP headers but may be sent after the HTTP body.
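That said, when the stream originates from a file (or anything you can read twice), you may compute the MD5 sum yourself in a first pass and send it as the Content-MD5 header. This is a sketch under the same assumed client setup, not a feature of the helper itself:

```js
var fs = require('fs');
var crypto = require('crypto');

var s3 = require('aws2js').load('s3', 'accessKeyId', 'secretAccessKey'); // assumed setup
s3.setBucket('my-bucket');

var file = '/path/to/report.pdf'; // hypothetical local file
var md5 = crypto.createHash('md5');
var firstPass = fs.createReadStream(file);

// first pass: digest the whole file so the header is known up front
firstPass.on('data', function (chunk) {
    md5.update(chunk);
});
firstPass.on('end', function () {
    var checksum = md5.digest('base64'); // S3 expects the base64 encoded MD5
    fs.stat(file, function (err, stats) {
        if (err) {
            throw err;
        }
        // second pass: the actual upload, now with an integrity check
        s3.putStream('/report.pdf', fs.createReadStream(file), 'private', {
            'content-length': stats.size,
            'content-md5': checksum
        }, function (error, result) {
            // handle error / result as usual
        });
    });
});
```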
You must pass the Content-Type header information as well. mime-magic doesn't yet support computing MIME types from buffers of data passing through node.js, so the type cannot be detected from the stream itself. If you omit the header, the Content-Type defaults to binary/octet-stream.
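If the stream comes from a file on disk, mime-magic can still detect the type from the file path before the upload starts. A sketch, assuming mime-magic's callback API where the module itself is callable with a file path:

```js
var fs = require('fs');
var mime = require('mime-magic');

var s3 = require('aws2js').load('s3', 'accessKeyId', 'secretAccessKey'); // assumed setup
s3.setBucket('my-bucket');

var file = '/path/to/report.pdf'; // hypothetical local file

// mime-magic can inspect files on disk, just not in-flight buffers
mime(file, function (err, type) {
    if (err) {
        throw err;
    }
    fs.stat(file, function (err, stats) {
        if (err) {
            throw err;
        }
        s3.putStream('/report.pdf', fs.createReadStream(file), 'private', {
            'content-length': stats.size,
            'content-type': type // e.g. application/pdf instead of binary/octet-stream
        }, function (error, result) {
            // handle error / result as usual
        });
    });
});
```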