
S3 Deploy Boto Not Connecting to S3 Correctly #128

Closed
michaelBenin opened this issue Jun 5, 2016 · 19 comments

@michaelBenin

I have boto installed correctly and I am using

ansistrano_s3_region: "us-east-1"

in my config.

My output is:

TASK [carlosbuenosvinos.ansistrano-deploy : ANSISTRANO | S3 | Get object from S3] ***
task path: /etc/ansible/roles/carlosbuenosvinos.ansistrano-deploy/tasks/update-code/s3.yml:7
fatal: [54.88.188.179]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to connect to S3: Region  does not seem to be available for aws module boto.s3. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path"}
    to retry, use: --limit @./ansible/deploy/deploy.retry

Related: ansible/ansible-modules-core#3447
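
For reference, the relevant variables in my playbook look roughly like this (the bucket, object, and key values here are placeholders, not my real ones):

ansistrano_deploy_via: "s3"
ansistrano_s3_bucket: "my-deploy-bucket"   # placeholder
ansistrano_s3_object: "app.tar.gz"         # placeholder
ansistrano_s3_region: "us-east-1"
ansistrano_s3_aws_access_key: "AKIA..."    # placeholder
ansistrano_s3_aws_secret_key: "..."        # placeholder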

@michaelBenin
Author

OK this should be a legit issue:

ansible/ansible-modules-core#3347

"Indeed. This defect practically rendered the s3 module broken outright. All of our playbooks that use s3 are 100% broken."

@michaelBenin
Author

I believe this is only a bug in Ansible 1.9+; I will dig in further tonight.

@michaelBenin
Author

Downgraded to Ansible 1.8 but then ran into this odd issue with permissions, even though the IAM user has full permissions to S3.

ansible/ansible-modules-core#69

@michaelBenin
Author

Under 1.8, even with permissions set for list and read and with the root user, I'm still having the issue. S3 deployments are currently non-functional. Going to try to come back to this after Ansible stabilizes as a project.

@michaelBenin
Author

Figured out the permissions issue; it was a couple of things combined.

Have a new issue with the way I'm packing and uploading; it now happens during unpack:

ansible/ansible-modules-core#74

Getting closer.
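
The pack-and-upload side is essentially just tarring the build and putting it on S3; a rough sketch (bucket, paths, and the object key are placeholders, not my exact setup):

- name: Package the release as a tarball
  command: tar -czf /tmp/app.tar.gz -C /path/to/build .

- name: Upload the tarball to S3
  s3:
    bucket: "my-deploy-bucket"
    object: "app.tar.gz"
    src: "/tmp/app.tar.gz"
    mode: put
    region: "us-east-1"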

@ricardclau
Member

ricardclau commented Jun 6, 2016

Hi

None of these are Ansistrano bugs; they are Ansible bugs or environmental issues.
S3 deployments work as long as the module works (we are only proxying to the official module).

Sorry, but there is not much we can do here if the official modules are broken :)

@michaelBenin
Author

@ricardclau I got it working last night; I had to upgrade to the 2.2 dev branch. But I found a real bug while doing all this: the S3 strategy doesn't work if the object is inside a folder. That's a different issue that I will address in some kind of readme or blog post. Sorry for all the noise on a Sunday, and thank you for this project!

@ricardclau
Member

Hi @michaelBenin

Yeah it looks like you found all the things that could possibly go wrong :(
I have had weird issues with S3 access using IAM credentials lately, even with the official AWS CLI tools, until I upgraded to the latest ones...

The S3 strategy only supports bucket and object, but I thought a folder in the object would work (folders in S3 don't really exist; the prefix is just part of the object name). Did you ever get it working with folders? I will have a look myself, as I am pretty sure I did not test that when I implemented the S3 strategy.

@ricardclau
Member

Ok, maybe the problem is:

- name: ANSISTRANO | S3 | Get object from S3
  s3:
    bucket: "{{ ansistrano_s3_bucket }}"
    object: "{{ ansistrano_s3_object }}"
    dest: "{{ ansistrano_release_path.stdout }}/{{ ansistrano_s3_object }}"
    mode: get
    region: "{{ ansistrano_s3_region }}"
    aws_access_key: "{{ ansistrano_s3_aws_access_key | default(omit) }}"
    aws_secret_key: "{{ ansistrano_s3_aws_secret_key | default(omit) }}"

and we might need to do

    dest: "{{ ansistrano_release_path.stdout }}/{{ ansistrano_s3_object | basename }}"

Can you please check if that was the problem?
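
For clarity, the full task with that change applied would look like this (everything except the basename filter is the same task as above); with an object key such as builds/app.tar.gz, basename keeps only app.tar.gz for the destination path:

- name: ANSISTRANO | S3 | Get object from S3
  s3:
    bucket: "{{ ansistrano_s3_bucket }}"
    object: "{{ ansistrano_s3_object }}"
    dest: "{{ ansistrano_release_path.stdout }}/{{ ansistrano_s3_object | basename }}"
    mode: get
    region: "{{ ansistrano_s3_region }}"
    aws_access_key: "{{ ansistrano_s3_aws_access_key | default(omit) }}"
    aws_secret_key: "{{ ansistrano_s3_aws_secret_key | default(omit) }}"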

@michaelBenin
Author

michaelBenin commented Jun 8, 2016

I'm trying to get Ansistrano working on a side project outside of work hours. I'll try to get to it this Sunday at the latest. If you want, I can give you access to the nano box, Bitbucket, and Jenkins box where this is all running. Feel free to email me at michaellouisbenin at gmail dot com; otherwise I will try my best to get to it Sunday if the weather is bad.

@michaelBenin
Author

Currently it's for a Node project, and the last thing I ran into was this issue where Ansible hangs on me:

Unitech/pm2#88

The S3 part shouldn't take long, though; it's only a 2-line change and then a push to master to kick off a build.

@ricardclau
Member

We just tagged 1.7.1, which fixes the folders problem.

Thanks for reporting it!

@michaelBenin
Author

michaelBenin commented Jun 8, 2016

Awesome, first thing when I get a chance in my off time I'll upgrade to the latest and pull it down.

@michaelBenin
Author

Upgraded to ansistrano-deploy 1.7.1; this fixed the issue with folders in buckets. Thank you!

@michaelBenin
Author

BTW, I have this working with PM2 now. But I feel some additional documentation could be added to the readme noting that Ansible 2.2 is needed for S3. I will try to open-source this project as an example Node project with PM2 for zero-downtime deploys and rollbacks in Ansible. Currently it's in a Jenkins job, but I could move it to the CLI for an example. Thanks again!

@michaelBenin
Author

Ansible 2.2.0 / Latest Ansistrano Deploy and rollback

  • Set correct permissions on the S3 bucket.
  • Boto installed / configured on host and remote servers.
  • PM2 installed globally with sudo
  • Update the sudoers secure_path to include node (node installed with nvm):
    sudo vi /etc/sudoers
    Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/ubuntu/.nvm/versions/node/v5.10.0/bin"
  • Fix permissions on logging directory pm2 writes to
    • Add pm2 log rotate (Logstash coming instead!):
      sudo pm2 set pm2-logrotate:max_size 100MB
      
      

[PM2] Module pm2-logrotate restarted
== pm2-logrotate ==
┌───────────────┬───────┐
│ key           │ value │
├───────────────┼───────┤
│ retain        │ 5     │
│ interval      │ 1     │
│ interval_unit │ DD    │
│ max_size      │ 100MB │
└───────────────┴───────┘

Add startup script to pm2:

sudo pm2 startup ubuntu

After any pm2 modifications run:

sudo pm2 save

On first deploy I've found I needed to manually start the process; this is only the case when setting it up the first time.
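
One way to tie the deploy to PM2 is to hook a reload in after the symlink step. A minimal sketch, assuming Ansistrano's ansistrano_after_symlink_tasks_file hook; the file path and the app name my-app are examples, not my exact setup:

# In the playbook vars:
ansistrano_after_symlink_tasks_file: "{{ playbook_dir }}/deploy/after-symlink.yml"

# deploy/after-symlink.yml
- name: ANSISTRANO | PM2 | Reload the app with zero downtime
  command: pm2 reload my-app
  become: yes

- name: ANSISTRANO | PM2 | Persist the PM2 process list
  command: pm2 save
  become: yes

As noted above, on the very first deploy the process still has to be started manually (pm2 start) before there is anything for pm2 reload to act on.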

@ricardclau
Member

I have used the S3 module since Ansible 1.8 and it usually works just fine... what functionality are you using that specifically needs Ansible 2.2?

@michaelBenin
Author

Sorry for the late response. The two native Ansible issues outside of Ansistrano were unpacking the tar/gzip archive, as well as boto and S3 permissions.

@ricardclau
Member

That's weird, I tested the patch fixing the folders issue with both Ansible 1.9 and 2.x (not sure about the exact version, but either 2.0.2 or 2.1.0)

Were you supplying access and secret keys to Ansistrano or relying on IAM? Which version of boto? I have seen some weird issues with IAM profiles and non-latest boto and awscli setups.
