S3 sync does not allow specification of file metadata headers, e.g. content-encoding #319
Comments
+1, this makes the s3 tool unusable if you're using CloudFront and need to specify cache headers.
What if I want to sync both gzipped and uncompressed files?
The two should be synced in separate commands: one with the compression headers and one without them.
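A minimal sketch of that two-pass approach (the bucket name and paths are hypothetical, and it assumes the CSS/JS files were already gzipped in place, keeping their original names):

```sh
# Pass 1: everything except the pre-compressed assets, with no special headers.
aws s3 sync ./build s3://my-bucket \
    --exclude "*.css" --exclude "*.js"

# Pass 2: only the pre-compressed assets, with the compression/cache headers.
aws s3 sync ./build s3://my-bucket \
    --exclude "*" --include "*.css" --include "*.js" \
    --content-encoding gzip \
    --cache-control "max-age=86400"
```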
+1 here too. Uploading gzip files and being able to set "Content-Encoding" and "Cache-Control" is important to us.
We have added a number of new options to the `s3` commands, including `--content-disposition` and other header-related options.
--content-disposition is listed above but not in the pull request merge. And, from what I can tell, it doesn't seem to be working: I can perform copies, but ContentDisposition isn't set in the metadata. Am I missing something, or is that functionality missing?
The comment for #352 is incorrect. There is a `--content-disposition` option. You can check the object's metadata (for example with `aws s3api head-object`) and see if the content disposition is returned for the object.
Thanks for your reply. Yes, that's how I'm checking and it's not there. I get this:
And I'm copying from one bucket to another, as follows:
Okay, thanks for the additional info. I'll try to reproduce the problem locally and update here with my results.
I see what's happening. If you do a `cp` of a local file up to S3, the header is applied as expected. When you do a copy from one S3 bucket to another, the CLI uses the CopyObject API, which by default copies the metadata from the source object and ignores any header values supplied with the request. We are not currently providing a way to set the value of the metadata directive on that request, so there is no way to replace the metadata during a bucket-to-bucket copy. We should create a separate issue to track this.
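For anyone who needs to set metadata on a bucket-to-bucket copy today, here is a low-level sketch (bucket and key names are hypothetical): `aws s3api copy-object` exposes the metadata directive directly, though with `REPLACE` you must re-specify every header you want the destination object to keep.

```sh
# Hypothetical bucket/key names. With REPLACE, S3 stores the headers supplied
# on this request instead of copying them from the source object.
aws s3api copy-object \
    --copy-source source-bucket/path/to/file.json \
    --bucket dest-bucket \
    --key path/to/file.json \
    --metadata-directive REPLACE \
    --content-type application/json \
    --content-disposition "attachment; filename=file.json"
```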
How about the `Vary: Accept-Encoding` header? It would be nice to be able to set that too...
Where is the "interactive help page", please?
ok, using sync:
because otherwise it keeps the metadata of the old files (aws/aws-cli#319 (comment))
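A commonly used variant of that workaround (a sketch only; bucket and prefix are hypothetical, and it relies on the `--metadata-directive` option added to the high-level commands in later CLI versions) is to copy the objects onto themselves with `REPLACE` so the new headers overwrite the old ones:

```sh
# Hypothetical bucket/prefix. An in-place S3-to-S3 copy with REPLACE forces
# the newly supplied headers onto objects that were synced earlier.
aws s3 cp s3://my-bucket/assets/ s3://my-bucket/assets/ \
    --recursive \
    --metadata-directive REPLACE \
    --content-encoding gzip \
    --cache-control "max-age=31536000"
```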
Note for posterity: if you have issues with metadata headers not updating on sync, see also #1145.
How can I download a file that was stored with Content-Encoding = gzip? Running `cat test.json` just prints compressed binary garbage.
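The CLI does not decompress objects on download, so one option (a sketch; bucket and key names are hypothetical) is to stream the object to stdout and gunzip it locally:

```sh
# Hypothetical bucket/key. "-" streams the object body to stdout, which is
# then decompressed on the client side.
aws s3 cp s3://my-bucket/test.json - | gunzip > test.json
```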
Hi, how can I find an object's metadata with the AWS CLI?
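A sketch with hypothetical names: `aws s3api head-object` returns the object's headers (ContentType, ContentEncoding, CacheControl, ContentDisposition) along with any user-defined metadata.

```sh
# Hypothetical bucket/key; prints the object's metadata as JSON.
aws s3api head-object --bucket my-bucket --key path/to/file.json
```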
I have the same problem as Bruno.
It is great that content-encoding can be set, but transparent on-the-fly zip/unzip would be even better. Right now you always have to pre-process the files, whereas in many scenarios it could be done during the sync (and with twice the threads, maybe even just as fast).
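For reference, a sketch of the pre-processing that is currently required (paths and bucket name are hypothetical): compress into a staging directory, keeping the original filenames so URLs don't change, then sync that directory with the compression headers.

```sh
# Hypothetical paths/bucket. Pre-compress into a staging directory, then
# upload the compressed copies with Content-Encoding: gzip.
mkdir -p build-gz
for f in build/*.js build/*.css; do
    gzip -9 -c "$f" > "build-gz/$(basename "$f")"
done

aws s3 sync build-gz s3://my-bucket/assets \
    --content-encoding gzip \
    --cache-control "max-age=86400"
```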
That's why I created https://github.com/yvele/poosh, which allows a metadata configuration file based on glob patterns:

```js
{
  plugins : ["s3"],
  baseDir : "./deploy",
  remote  : "s3-us-west-2.amazonaws.com/my-bucket",
  each : [{
    headers : { "cache-control": { cacheable: "public" } }
  }, {
    match   : "**/*.{html,css,js}",
    gzip    : true,
    headers : { "cache-control": { maxAge: "48 hours" } }
  }, {
    match   : "**/*.{jpg,png,gif,ico}",
    gzip    : false,
    headers : { "cache-control": { maxAge: "1 year" } }
  }, {
    match    : "**/*.html",
    priority : -1
  }]
}
```

I wish AWS would add more control over headers while using `sync`.
A Whole Foods later, this is still an unmitigated disaster. When using the CLI to COPY or SYNC from an S3 source to an S3 destination, it should (with no extra parameters or caveats about multipart uploads) copy the metadata. How is this too much to ask?
Uploading to S3 using `aws s3 sync` should have an option to specify the headers for that request. Right now there doesn't seem to be any way to upload gzipped content and expect it to have the appropriate metadata, e.g.:
--add-header='Content-Encoding: gzip'