upload failed when putting into s3. #449
Comments
What version of the CLI are you using?
I installed it via pip.
After I upgraded to 1.2.3, the error messages above seem to be suppressed. I'm closing this.
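For anyone following along, checking the installed version and upgrading a pip-installed CLI looks roughly like this (a minimal sketch; the version string printed will differ per machine):

```
# Print the installed CLI version, e.g. aws-cli/1.2.3 Python/2.7.5 ...
aws --version

# Upgrade a pip-installed CLI to the latest release
pip install --upgrade awscli
```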
I am encountering an upload error when I try to upload larger files. Files of 12 bytes and 804 bytes could be copied to the bucket successfully, so I guess it's ultimately the size that is the problem here. I am running the command from my local MacBook, not from EC2, and I have a DSL internet line.

command
command output (without debug)
debug output
I think #454 is relevant here as well.
I'm still seeing this with aws --version:

upload failed: x to s3://x HTTPSConnectionPool(host='x.s3.amazonaws.com', port=443): Max retries exceeded with url: x (Caused by <class 'socket.error'>: [Errno 32] Broken pipe)
@1mentat Can you tell me what set of steps you're trying? Syncing? Copying? s3->local, local->s3? Does this happen after a while, or right away? One file, multiple files, etc.? Anything you can share will help troubleshoot the issue.
Both cp --recursive and sync seem to have the same issue. This is local to S3. It seems to mostly happen with .jars and .dlls which are in the tens of MBs. In particular, I was uploading an unzipped version of play-1.2.7.zip from Play Framework.
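For reference, the two invocations being compared here look roughly like this; the local path and bucket name are placeholders, not the reporter's actual values:

```
# Recursive copy of a local directory up to S3
aws s3 cp ./play-1.2.7/ s3://my-bucket/play-1.2.7/ --recursive

# Sync the same directory; only new or changed files are transferred
aws s3 sync ./play-1.2.7/ s3://my-bucket/play-1.2.7/
```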
I have this problem when trying to copy to a bucket in Oregon. If I create it in US Standard, it works fine.
I also have this problem when creating a bucket via the web console or via the CLI with region eu-west-1, but creating a bucket in the us-east-1 region works.
I also have the same problem copying a 30 GB zip file from an Oregon EC2 instance to Oregon S3. It is done as a multipart upload, and every part gives this error. I am using CLI version 1.3.1.
It also works if I go to a bucket in us-east-1. In general, I'd much prefer keeping everything in Oregon though.
Getting the same error with Oregon: 98 MB file, copying from local to S3; smaller files (< 1 MB) work fine. Version:
I have the same problem... Is there a workaround? Version: aws-cli/1.3.6 Python/2.7.5 Windows/7
Same issue with 1.3 of the CLI. I have a 25 GB file. I am going to try a different region for the bucket.
This is reliably reproducible in Oregon buckets and does not happen in US East.
I get this problem with NEW buckets in eu-west-1 when uploading files large enough to trigger multipart uploading. After a few hours the uploads work fine. My problem is most likely related to #544; I'm not sure if there is another issue as well...
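As background on "large enough to trigger multipart uploading": the s3 commands switch from a single PUT to a multipart upload once a file crosses a size threshold. Later CLI releases than the ones discussed in this thread expose that threshold as a configuration setting; whether your installed version supports it should be checked, so treat this as a sketch:

```
# Raise the file size at which the CLI switches to multipart uploads
aws configure set default.s3.multipart_threshold 64MB

# Size of each individual part once a multipart upload is used
aws configure set default.s3.multipart_chunksize 16MB
```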
This issue is the same as #634, with an overview of the situation here: #634 (comment). There's a fix for this (#634 (comment)); we just need to do more testing to ensure there are no regressions, and then we'll release the fix.
Thanks for the update.
I was getting the same issue with 1.3.17, but having upgraded to the latest 1.4.4, it's no longer an issue.
It seems this is still an issue? I'm encountering it with large uploads. Currently on AWS CLI version:
I experienced this today, and comments above mention problems related to us-west-2 (Oregon). I was able to upload my 500 MB file to a different bucket, then copy it between buckets, and got it to work.
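A rough sketch of that workaround, assuming a scratch bucket in a region where uploads succeed and the intended bucket in us-west-2; both bucket names and the file name are placeholders:

```
# Step 1: upload to a bucket in a region where the transfer completes
aws s3 cp ./bigfile.bin s3://scratch-bucket-us-east-1/bigfile.bin

# Step 2: server-side copy from that bucket into the Oregon bucket
aws s3 cp s3://scratch-bucket-us-east-1/bigfile.bin s3://target-bucket-oregon/bigfile.bin
```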
I have some error messages like below when I copy (upload) / sync to S3. {bucket-sensored} is my bucket name, and the Max retries exceeded error is shown on more than two lines. Is it caused by a network issue on my side? Even if that is true, I want more thorough error messages, as I can't see what I should do to avoid the error.
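One way to get more detail than the bare Max retries exceeded line is to re-run the failing command with the CLI's global --debug flag, which prints the full request, response, and retry trace to stderr (it is very verbose, so redirecting it to a file helps); the file and bucket names below are placeholders:

```
# Capture the full debug trace of the failing upload for inspection
aws s3 cp ./somefile s3://my-bucket/somefile --debug 2> upload-debug.log
```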