Terraform does not read AWS profile from environment variable #233

Closed
hashibot opened this issue Jun 13, 2017 · 26 comments
Labels
bug: Addresses a defect in current functionality.
provider: Pertains to the provider itself, rather than any interaction with AWS.
Comments

@hashibot

This issue was originally opened by @boompig as hashicorp/terraform#8330. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.7.0

Affected Resource(s)

Probably all of AWS, observed with S3.

Terraform Configuration Files

variable "region" {
    default = "us-west-2"
}

provider "aws" {
    region = "${var.region}"
    profile = "fake_profile"
}

resource "aws_s3_bucket" "bucket" {
    bucket = "fakebucket-something-test-1"
    acl = "private"
}

Debug Output

https://gist.github.com/boompig/f05871140b928ae02b8f835d745158ac

Expected Behavior

Should successfully authenticate and then report a no-op (no changes needed).

Actual Behavior

Does not read the correct profile from the environment variable. It works if you provide the profile name in the configuration file, though.

Steps to Reproduce

  1. export AWS_PROFILE=your_real_profile
  2. create a Terraform file similar to mine, with a fake profile name in the provider block
  3. terraform apply
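
A minimal sketch of those steps as shell commands (the profile name is a placeholder and is assumed to exist in ~/.aws/credentials):

# 1. point the environment at a real, existing profile
export AWS_PROFILE=your_real_profile
# 2. main.tf contains the provider block above, with profile = "fake_profile"
# 3. run Terraform; the profile from the environment was expected to be picked up
terraform apply
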
@hashibot hashibot added the bug label Jun 13, 2017
@spanktar

spanktar commented Jun 22, 2017

I don't understand the AWS_PROFILE requirement. Why is this required for it to work, when it is not required in Packer, for example? Simply defining the profile in the provider "aws" {} block should be sufficient.

For example, the following does not work (0.9.6):

aws.tf

provider "aws" {
  region     = "us-east-1"
  profile    = "sandbox"
}

~/.aws/credentials

[sandbox]
aws_access_key_id = FOO
aws_secret_access_key = BAR
region = us-east-1

~/.aws/config

[default]
region = us-west-2

[profile sandbox]
# Nothing required here, see credentials file

Then running:
terraform plan

What does work? Same config but:
AWS_PROFILE=sandbox terraform plan

So why does the first fail while the second works? What is the point of the environment variable?

(also still trying to figure out how region fits into this whole thing, since it seems to be equally arbitrary)

@spanktar

Finally, why is this whole profiles thing so janky? Like, just make it simple:

terraform -aws-profile=foo plan

and call it a day already :-\

@Puneeth-n
Contributor

Puneeth-n commented Jun 22, 2017

Terraform v0.9.8

I'm not able to recreate this issue.

My config:

variable "region" {
    default = "us-east-1"
}

provider "aws" {
    profile = "sub_account"
    region = "${var.region}"
}

resource "aws_s3_bucket" "bucket" {
    bucket = "fakebucket-something-test-6"
    acl = "private"
}
TF_LOG=DEBUG terraform apply

It uses the key pair from the sub_account profile.

When I change the profile to test_account, it uses the test_account credentials.
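
For reference, a sketch of the ~/.aws/credentials layout this test assumes (profile names from the comment above; the keys are placeholders):

[sub_account]
aws_access_key_id = PLACEHOLDER_SUB_KEY
aws_secret_access_key = PLACEHOLDER_SUB_SECRET

[test_account]
aws_access_key_id = PLACEHOLDER_TEST_KEY
aws_secret_access_key = PLACEHOLDER_TEST_SECRET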

@evanstachowiak

You shouldn't need to set profile in your provider and can just set AWS_PROFILE in your shell environment. At least that is working for me on 0.9.6.
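
A minimal sketch of that approach, assuming a sandbox profile exists in ~/.aws/credentials: the provider block omits profile entirely and the profile comes only from the environment.

provider "aws" {
  region = "us-east-1"
}

AWS_PROFILE=sandbox terraform plan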

@spanktar

Sorry for the confusion on my end (and the noise). I realized I was missing something crucial: the initial state configuration and the subsequent Terraform run actually use separate credentials. Profiles were working as expected the entire time, but I could not make the initial connection to the state bucket with the default profile.

@seanorama

seanorama commented Jul 4, 2017

profile is not working for me.

provider "aws" {
    region                = "us-east-2"
    profile                = "dev"
    shared_credentials_file = "~/.aws/credentials"
}
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

Error refreshing state: 1 error(s) occurred:

* provider.aws: No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider

Same result with:

export AWS_DEFAULT_PROFILE=dev
export AWS_PROFILE=dev

My environment:

$ terraform -version
Terraform v0.9.11

OS = macOS 10.12

@Puneeth-n
Contributor

Puneeth-n commented Jul 4, 2017

@seanorama Do you use roles in your profile?

You do not need to set any environment variable; it works out of the box.

Terraform 0.9.11

main.tf

variable "region" {
    default = "us-east-1"
}

provider "aws" {
    profile = "production"
    region = "${var.region}"
}

resource "aws_s3_bucket" "bucket" {
    bucket = "fakebucket-something-test-6"
    acl = "private"
}

What works
~/.aws/config

[default]
region = eu-west-1

[profile test]
region = eu-west-1

[profile production]
region = eu-west-1

~/.aws/credentials

[default]
aws_access_key_id = foo
aws_secret_access_key = bar

[test]
aws_access_key_id = baz
aws_secret_access_key = blah

[production]
aws_access_key_id = boo
aws_secret_access_key = baa

What DOES NOT work
~/.aws/config

[default]
region = eu-west-1

[profile test]
region = eu-west-1
role_arn = some_role
source_profile = default

[profile production]
region = eu-west-1
role_arn = some_other_role
source_profile = default

~/.aws/credentials

[default]
aws_access_key_id = foo
aws_secret_access_key = bar

USING roles in profile?
This works!

variable "region" {
    default = "us-east-1"
}

provider "aws" {
    region = "${var.region}"
    assume_role {
        role_arn = "some_role"
    }
}

resource "aws_s3_bucket" "bucket" {
    bucket = "fakebucket-something-test-6"
    acl = "private"
}

@kenichi-shibata

kenichi-shibata commented Sep 8, 2017

Just putting this here in case someone else hits this problem.

profile works out of the box if you have configured it correctly with the AWS CLI (awscli 1.11.113 and Terraform v0.10.4):

aws configure --profile newprofile

provider "aws" {
  region = "eu-west-2"
  profile = "newprofile"
} 

@bflad bflad added the provider label Jan 28, 2018
@shavo007

shavo007 commented Feb 6, 2018

Doesn't work for me. Just tried on Terraform v0.11.3 and aws-cli/1.14.10.

Ignore. Looks like I had an issue with my key; I recreated it and it worked fine.

@cornfeedhobo

Just in case, check out #1184 as well

@kjenney

kjenney commented Apr 23, 2018

The issue is that I'm already using AWS_PROFILE with Packer and boto3 and it works perfectly. To use Terraform I need to unset AWS_PROFILE AND add a profile in the Terraform provider config. This needs to be fixed ASAP - pick one or the other, because this is overcomplicating the whole thing.

@cornfeedhobo

@kjenney I think they did. Check out #2883

@kjenney

kjenney commented Apr 23, 2018

Nope. I just verified against 1.15.0 and got the same error. The assumed role has AWSAdmin permissions, and boto3 will perform any operation with AWS_PROFILE set:

import boto3
import botocore
import os

os.environ['AWS_PROFILE'] = 'dev'
os.environ['AWS_DEFAULT_REGION'] = 'us-east-1'

ec2 = boto3.resource('ec2')
for i in ec2.instances.all(): print(i)

$ python listec2instances.py
ec2.Instance(id='i-0328bd472a76c42fb')

Here's my Terraform config:

provider "aws" {
  region = "${var.aws_region}"
  version = "~> 1.14"
}

$ export AWS_PROFILE="dev"
$ aws --version
aws-cli/1.15.0 Python/3.6.5 Darwin/17.5.0 botocore/1.10.0
$ ls -altr .terraform/plugins/darwin_amd64/terraform-provider-aws_v1.15.0_x4
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

null_resource.default: Refreshing state... (ID: 1816969938248271300)

Error: Error refreshing state: 1 error(s) occurred:

* provider.aws: No valid credential sources found for AWS Provider.
	Please see https://terraform.io/docs/providers/aws/index.html for more information on
	providing credentials for the AWS Provider

@kjenney

kjenney commented Apr 24, 2018

So after digging further into #2883 I found that AWS_SDK_LOAD_CONFIG needs to be set for AWS_PROFILE to work. There is no mention of that in this issue OR in the public provider documentation: https://www.terraform.io/docs/providers/aws/index.html. This is an acceptable fix but it needs to be documented.
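
For anyone landing here, a minimal sketch of that workaround (the profile name is just an example):

export AWS_SDK_LOAD_CONFIG=1   # tell the AWS Go SDK to also load ~/.aws/config
export AWS_PROFILE=dev         # profile from the shared config/credentials files
terraform plan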

@cornfeedhobo

@kjenney yeah, that environment variable is poorly documented in all the SDKs. I have had to dig through each one to see if there was support for it. Sorry you didn't know about that, and sorry I didn't think to bring it up sooner :-/

@alock

alock commented May 14, 2018

Will there be a solution for the AWS provider and an S3 backend that use profiles with assumed roles? It seems like this might be causing some problems, and unfortunately the related issue hashicorp/terraform#13589 is locked.

Setting AWS_SDK_LOAD_CONFIG and AWS_PROFILE works with profiles that have aws_secret_access_key and aws_access_key_id, but it does not work if the profile is set up like below:

 [role]
 role_arn       = arn:aws:iam::{{AccountID}}:role/RoleName
 source_profile = profile

Looking for any suggestions. My current solution is statically defining the profile in the provider and the backend, but it has already caused multiple problems :(
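
A minimal sketch of that static workaround (profile, bucket, and key names here are placeholders, not my real setup):

provider "aws" {
  region  = "us-west-2"
  profile = "role"          # the role-based profile, named explicitly
}

terraform {
  backend "s3" {
    region  = "us-west-2"
    bucket  = "my-terraform-states"
    key     = "project.tfstate"
    profile = "role"        # the backend needs its own profile setting
  }
}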

@mafrosis

mafrosis commented May 15, 2018

@alock The following configuration works for my team.

Our TF code has been converted to a single large module, with instances of this module for each environment. The directory layout is like this:

infra/project
             /bastion.tf
             /variables.tf
infra/staging
             /main.tf
             /provider_backend.tf
infra/prod
             /main.tf
             /provider_backend.tf

The main.tf files above instantiate the module project:

variable "region" {
  default = "us-east-2"
}

module "staging" {
  source = "../project"

  region = "${var.region}"
}

The interesting part is provider_backend.tf:

provider "aws" {
  region = "${var.region}"
  version = "~> 1.16"

  assume_role {
    role_arn = "arn:aws:iam::00000000000:role/OrganizationAccountAccessRole"
  }
}

terraform {
  backend "s3" {
    region = "ap-southeast-2"
    bucket = "project-terraform-states"
    key    = "project_staging"
  }
}

This config can be used with AWS_PROFILE set to the profile for your identity account, and Terraform will then assume the correct role before executing.

cd infra/staging
AWS_PROFILE=master terraform plan

@avengers009

As @kjenney said, AWS_SDK_LOAD_CONFIG needs to be set for AWS_PROFILE to work.

@paultyng
Contributor

It seems like we can't reproduce this issue. To help the maintainers find the actionable issues in the tracker, I'm going to close this out, but if anyone is still experiencing this and can either supply a reproduction or logs, feel free to reply below or open a new issue. Thanks!

@paultyng paultyng added this to the v1.25.0 milestone Jun 21, 2018
@alock

alock commented Jun 21, 2018

@paultyng - Maybe this wasn't meant to be solved with this issue, but I feel like Terraform should be able to support environment variables without having to set explicit profiles or assume_role blocks in the files we commit. It can make things difficult, since my team members name their profiles uniquely and some have different roles depending on their position. I thought that Terraform would support environment variables just like the AWS CLI does. Below are some files that I used to test and highlight my problem. Everything wrapped in {{}} is a redacted variable.

~/.aws/config

[profile {{PROFILE_NAME}}]
output          = json
region          = us-west-2
role_arn        = arn:aws:iam::{{ACCOUNT_NUM}}:role/{{ROLE_NAME}}
source_profile  = {{SOURCE}}

[profile {{SOURCE}}]
output          = json
region          = us-west-2

~/.aws/credentials

[{{PROFILE_NAME}}]
role_arn       = arn:aws:iam::{{ACCOUNT_NUM}}:role/{{ROLE_NAME}}
source_profile = {{SOURCE}}

[{{SOURCE}}]
aws_access_key_id     = {{REMOVED}}
aws_secret_access_key = {{REMOVED}}

main.tf

provider "aws" {
  region = "us-west-2"
}

terraform {
  backend "s3" {
    region         = "us-west-2"
    bucket         = "{{S3_BUCKET}}"
    key            = "{{KEYNAME}}.tfstate"
    encrypt        = "true"
    dynamodb_table = "{{TABLE_NAME}}"
    acl            = "bucket-owner-full-control"
  }
}

Failing command (the same profile works with the AWS CLI, but terraform init fails)

$ AWS_PROFILE={{PROFILE_NAME}} aws iam list-account-aliases
{
    "AccountAliases": [
        "{{AWS_ALIAS}}"
    ]
}
$ AWS_PROFILE={{PROFILE_NAME}} AWS_SDK_LOAD_CONFIG=1 terraform init

Initializing the backend...

Error configuring the backend "s3": No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider

Please update the configuration in your Terraform files to fix this error
then run this command again.

@wjam
Contributor

wjam commented Jul 28, 2018

I've been able to reproduce this locally, diagnose the issue, and work out why the AWS backend wasn't able to pick up AWS_PROFILE whereas the AWS provider was picking it up.

In short, this issue is fixed in this repo, but the version of this repo vendored into the Terraform repo is an old one from before that change was made.

@paultyng or someone at HashiCorp - can you update the version of the terraform-provider-aws library in the Terraform repo to fix this?

@wjam
Contributor

wjam commented Jul 28, 2018

Looks like there's already a PR to cover this: hashicorp/terraform#17901

@bflad
Contributor

bflad commented Jul 31, 2018

The upstream PR has been merged and will be released with Terraform core 0.11.8.

@fernandrone
Contributor

So after digging further into #2883 I found that AWS_SDK_LOAD_CONFIG needs to be set for AWS_PROFILE to work. There is no mention of that in this issue OR in the public provider documentation: https://www.terraform.io/docs/providers/aws/index.html. This is an acceptable fix but it needs to be documented.

For anyone interested, this still hasn't been documented (and I just lost a lot of time because of the elusive AWS_SDK_LOAD_CONFIG), so I've opened a couple of PRs documenting it:

hashicorp/terraform#21122
#8451

stormbeta added a commit to stormbeta/bashrc that referenced this issue Jun 26, 2019
This appears to be an anti-feature of the AWS Golang SDK which breaks profile support if not set.

Related: hashicorp/terraform-provider-aws#233
@sjatkins

sjatkins commented Feb 11, 2020

Why is this closed? I have the same issue and nothing works. The configuration works fine with the AWS CLI, boto3, etc., so there is nothing wrong with my .aws credentials. This should not be broken. Setting the mysterious extra variable that somehow has to do with the Go SDK does nothing for me either. This is embarrassing, guys.

@ghost

ghost commented Feb 11, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Feb 11, 2020