
S3 Remote State and Credentials in non-default profiles #13589

Closed · jbruett opened this issue Apr 12, 2017 · 54 comments

@jbruett commented Apr 12, 2017

Terraform Version

0.9.2

Affected Resource(s)

terraform s3 remote state backend

Terraform Configuration Files

~/.aws/credentials - This would be the entire file, no other entries

[non-default-profile]
aws_access_key_id = <removed>
aws_secret_access_key = <removed>

MAIN.TF

provider "aws" {
    region = "us-west-2"
    profile = "non-default-profile"
}

terraform {
  backend "s3" {
    bucket = "some-s3-bucket"
    key    = "backend/test"
    region = "us-west-2"
    profile = "non-default-profile"
  }
}

Debug Output

NA

Panic Output

NA

Expected Behavior

What should have happened?

The expectation is that I should be able to use a non-default profile credential to complete the backend init (terraform init) call.

Actual Behavior

What actually happened?

The command fails with the following error:

Error inspecting state in "s3": AccessDenied: Access Denied
        status code: 403, request id: 7286696E38465508

Prior to changing backends, Terraform inspects the source and destination
states to determine what kind of migration steps need to be taken, if any.
Terraform failed to load the states. The data in both the source and the
destination remain unmodified. Please resolve the above error and try again.

If I simply move the same credentials under the default profile in the standard shared credentials file, it works without issue.

Steps to Reproduce

terraform init

Important Factoids

none

References

None that I was able to find for 0.9.2.

@jbardin (Member) commented Apr 12, 2017

Hi @jbruett,

Unfortunately, the backend had its own code for configuring the AWS client, taken from the old S3 remote state, and it looks like it didn't properly configure the profile.

The S3 backend in the next Terraform release will share the configuration code with the aws provider, which solves this issue.

@jbardin jbardin closed this as completed Apr 12, 2017
@jbruett (Author) commented Apr 13, 2017

@jbardin, can you confirm whether that's 0.9.3 or 0.9.4?

@jbardin (Member) commented Apr 13, 2017

@jbruett, it should actually be released in 0.9.3. I tested it myself on master before the release today. If it's still not working for you we can reopen this and continue investigating.

@jbruett (Author) commented Apr 13, 2017

@jbardin still seeing the same error with the same workaround.

@jbardin (Member) commented Apr 13, 2017

@jbruett, thanks for the update. Unfortunately 0.9.3 fixed the issue for me, so I'm not sure how to reproduce it yet. The fact you're getting AccessDenied indicates you are loading credentials of some sort, just not the ones you expect.

With 0.9.3 you will be able to see the AWS debug messages if you run TF_LOG=trace terraform init. You should be able to see what the Auth Provider is and what account credentials it ended up using, which may help us narrow down what's going on.

@jbardin jbardin reopened this Apr 13, 2017
@jbardin jbardin self-assigned this Apr 13, 2017
@devoopes

Just updated from 0.9.2 to 0.9.3 and tested. Still seeing the same behavior.

As soon as I change one of my AWS profiles to default and then comment out both profile lines in the provider and backend, it works fine.

Error loading previously configured backend: 
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider

Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing
the values in your configuration, run "terraform init".

~/.aws/credentials

[staging]
aws_access_key_id = <removed>
aws_secret_access_key = <removed>
[production]
aws_access_key_id = <removed>
aws_secret_access_key = <removed>

main.tf

provider "aws" {
  region    = "${var.aws_region}"
  profile   = "staging"
}
terraform {
  backend "s3" {
    bucket  = "some-bucket-name"
    key     = "staging.tfstate"
    region  = "us-west-2"
    profile = "staging"
  }
}

@jbruett (Author) commented Apr 14, 2017

@jbardin I don't have AWS_PROFILE env var set, and no default profile configured in the shared creds file.

@jbardin (Member) commented Apr 14, 2017

@jbruett, yes, but you are trying to access the bucket with credentials of some sort. Can you look at the debug output and see what they are, so we can determine where they are being loaded from? This may still be related to what @SeanLeftBelow is seeing, but I can't be sure.

@SeanLeftBelow, if you have changed the credentials required to access the state, and you know that the state file itself is unchanged in the same location, you need to select "no" at the prompt to copy the state. I'll look into the wording there and see if we can make it clearer that one may want to choose "no" if there is no change to the state data or location.

@devoopes

@jbardin I don't get that far on the terraform init.
The error I posted above is the one I get when both of the profiles in ~/.aws/credentials are set to custom values.

@jbruett (Author) commented Apr 15, 2017

OK, so after deleting state (the local .terraform folder and the state file in S3) and re-running terraform init, it worked with custom credentials and no default credential set. Not sure how someone with this issue in production would handle it; obviously deleting state isn't a possibility.

@jbardin (Member) commented Apr 17, 2017

Thanks for the feedback @SeanLeftBelow and @jbruett,

It seems both of these are manifestations of the same underlying issue. If the credentials that were stored from initialization are no longer valid for any reason (including just changing the credentials file), you may not be able to init again because the stored backend will always fail.

If the credentials exist but are incorrect, you can bypass the failures by avoiding the state migration. If the credentials no longer exist, the aws client will fail to initialize early on in the process.

I think we may want to add something like a -reconfigure flag for the case when a backend config changes, but we don't want terraform to attempt any sort of state migration, or even load the saved configuration at all.

@weslleycamilo

Hello,

I am having this issue as well with the "s3" backend.

My Terraform version is 0.9.3.

Using TF_LOG=trace terraform init I get:

2017/04/24 17:41:53 [DEBUG] New state was assigned lineage "e4bfa1b9-2cd7-4653-95d7-86592c8d9132"
2017/04/24 17:41:53 [INFO] Building AWS region structure
2017/04/24 17:41:53 [INFO] Building AWS auth structure
2017/04/24 17:41:53 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id

Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider

Please update the configuration in your Terraform files to fix this error
then run this command again.

@alberts-s

I am experiencing the same issue whilst trying to update from 0.8.7 -> 0.9.6.
Since this is a production environment, deleting the state file is not an option.

I have specified the profile name in a similar fashion to how @SeanLeftBelow does above. The trace log shows that Terraform completely ignores the profile parameter, resulting in "Access Denied".

@weslleycamilo

I realized that if I define the AWS credentials as environment variables, I can use the S3 backend.

Give it a try and see if it works for you.
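For reference, a minimal sketch of that workaround (the key values are placeholders):

export AWS_ACCESS_KEY_ID=<access key for the account holding the state bucket>
export AWS_SECRET_ACCESS_KEY=<matching secret key>
terraform init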

@elblivion (Contributor)

FWIW, I got here searching for the same error - in my case it was because I created the S3 bucket in a different region (the one in my ~/.aws/config) than the one specified in the backend config. 🤦‍♂️
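If you suspect the same mismatch, one way to check where the bucket actually lives (assuming the AWS CLI is available; bucket and profile names here are placeholders):

aws s3api get-bucket-location --bucket some-s3-bucket --profile non-default-profile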

@ascendantlogic commented Jun 16, 2017

I'm currently having this issue with 0.9.8. I'm circumventing it by using direnv to automatically set AWS_PROFILE variables for specific directories.

@kjhosein commented Jul 3, 2017

FYI: Another reason you may get the No valid credential sources found for AWS Provider error is if the AWS credential profile name you specified in your .tf or in your backend doesn't exactly match something in square brackets in ~/.aws/credentials.
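In other words, the value given to profile has to match the bracketed section name character for character. A minimal illustration (the names are made up):

# ~/.aws/credentials
[staging]
aws_access_key_id = <removed>
aws_secret_access_key = <removed>

# main.tf -- "Staging" or "staging " (with a trailing space) would not match
terraform {
  backend "s3" {
    bucket  = "some-s3-bucket"
    key     = "backend/test"
    region  = "us-west-2"
    profile = "staging"
  }
}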

@asimataurora commented Jul 5, 2017

FYI: Another reason you may get the No valid credential sources found for AWS Provider error is if the AWS credential profile name you specified in your .tf or in your backend doesn't exactly match something in square brackets in ~/.aws/credentials.

Bingo! For some reason, I didn't even have a .aws directory. For anyone who reached here searching for a solution: go on and create a ~/.aws/credentials file and put your credentials in this format

[default]
aws_access_key_id =
aws_secret_access_key =

@ascendantlogic

Bingo! For some reason, I didn't even have a .aws directory. For anyone who reached here searching for a solution: go on and create a ~/.aws/credentials file and put your credentials in this format

[default]
aws_access_key_id =
aws_secret_access_key =

Right, the issue here is that when you use a named profile (i.e. not default), it's not being used by TF. I have triple-checked and verified that the names match between my .tf file and the profile name in my ~/.aws/credentials file, and it still is not working correctly.

As a workaround I'm using direnv and setting my AWS_PROFILE environment variable to the profile I want on a per-directory basis.
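For anyone reproducing that workaround, the direnv side is just an .envrc file in the project directory (the profile name here is illustrative):

# .envrc -- direnv exports this automatically when you cd into the directory
export AWS_PROFILE=staging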

@combinatorist

Just noting this is related to #5839, right?

@jbardin (Member) commented Aug 1, 2017

@combinatorist:

That issue is regarding accessing the remote state resource, which is now a data source. There are some comments about it working with different terraform remote configurations, but the remote backend config has completely changed since then too. This issue is primarily about access to the s3 remote state backend.

There may be hints there though, as I'm suspecting some behavior of the aws SDK itself here.
We haven't been able to replicate setting a named profile and having Terraform not read it from the credentials file, and I routinely use only non-default profiles for testing myself.

@combinatorist commented Aug 3, 2017

@jbardin, TBH, these tickets seem really similar to the extent that I understand terraform and your comment, but if you think the difference should be clear to others, then I will catch up eventually.

In case anyone else is stuck, here's the extent of my understanding: both tickets mention (AWS) S3 remote state in their title. Both discuss the mysterious role of the default (AWS) profile in getting credentials for the remote state (as opposed to the profile specified to deploy the actual infrastructure). I picked up a hint that remote state as a data source is different and possibly(?), partially(?) replaces the terraform backend. Are these different functionalities or just different syntax / paradigms to accomplish the same thing: i.e. to back up terraform state remotely?

And maybe to clarify this discussion, here are 3 different syntaxes that I easily get confused by:

Example 1: "terraform (remote state) backend", from this ticket

terraform {
  backend "s3" {
    bucket = "some-s3-bucket"
    key    = "backend/test"
    region = "us-west-2"
    profile = "non-default-profile"
  }
}

Example 2: i.e. "remote state resource", from another ticket:

resource "terraform_remote_state" "ops" {
    backend = "s3"
    config {
        bucket = "eu3-terraform-ops"
        key = "terraform.tfstate"
        region = "${var.region}"
    }
}

Example 3: i.e. "remote state data source", from the docs:

data "terraform_remote_state" "vpc" {
  backend = "atlas"
  config {
    name = "hashicorp/vpc-prod"
  }
}

I get confused because I've only used option 1, and since terraform won't build the backend for me, it seemed like it was effectively a data source. I haven't used option 2, but since I don't see how a terraform project could build its own remote state / backend, I just assumed it was also just a data source. So best I can understand, option 3 just calls it what it's always been. They may be implemented in different versions in different ways (i.e. different bugs), but they serve the same use case and I should always use option 3 (assuming its peculiar bugs don't affect my application / I can trust those bugs will be fixed soon).

Am I close?

@jbardin (Member) commented Aug 3, 2017

@combinatorist, I agree this can be confusing because of the rapid evolution of Terraform over the past year or so.

The term "remote state" can be used in two different ways, and they are very different concepts. You can configure a backend to store your state "remotely" (option 1, documented here), or you can use another remote state as a data source to reference its resources (option 3).

The resource "terraform_remote_state" configuration was deprecated quite a while ago in favor of defining it as a data source (option 3), but they are used conceptually in the same way.

That older issue also has many references to "remote state" in terraform remote config, which is no longer used in terraform and has been completely superseded by the "backend" configuration.
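For completeness, a sketch of option 3 pointed at an S3 backend (bucket, key, and profile names are illustrative); it reads the other project's state but never writes to it:

data "terraform_remote_state" "ops" {
  backend = "s3"
  config {
    bucket  = "eu3-terraform-ops"
    key     = "terraform.tfstate"
    region  = "us-west-2"
    profile = "non-default-profile"
  }
}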

@combinatorist commented Aug 3, 2017

Ok, thanks for clarifying, @jbardin - that helps a lot!

A key difference I just picked up from @ehumphrey-axial is that the "remote state backend" (option 1) lets you sync your current state remotely, so if you change your infrastructure, it will write your changes into that remote state. However, the "remote state data source" (option 3) lets you pull in information about another terraform project. In this case, it can read from that project's remote state, but it won't ever write to it, because it's not trying to deploy or manage infrastructure for that other project.

It was confusing because neither of them creates the s3 bucket* that holds your remote state, so they seemed like the same thing. But option 1 does write into that bucket, while option 3 can only read from it.

*I'm selfishly talking about the S3 backend, but I'm assuming similar behavior applies to other backends.


Finally, I noticed in your last comment that there's another thing, you called:

"remote state" in terraform remote config

which we could call option 0. If I understand correctly, then, option 1 replaced option 0 for managing the current project's remote state, and option 3 replaced option 2 for reading another project's (necessarily remote) state.

@jbardin (Member) commented Aug 10, 2017

Hi @ThatGerber,

Thanks for the feedback here!

It is becoming clearer that keeping the name of Terraform's private data file as terraform.tfstate within .terraform/ was a mistake (they share the same data structures, so the default name was used). This file is not related to the actual state of the managed resources, and terraform should never move this to ./terraform.tfstate or use it as your "state". And the usual disclaimer about editing internal bits applies: the usage of that file isn't guaranteed to remain consistent across releases, and editing it directly may produce unexpected results.

As for this use case, I think it's covered by init -reconfigure, which will initialize a new backend configuration, ignoring the saved config.
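In other words (illustrative usage):

# re-initialize the backend from the current configuration, ignoring the config saved in .terraform/
terraform init -reconfigure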

@atkinchris commented Aug 29, 2017

I encountered (and solved) this issue with 0.10.2. I was attempting to migrate old remote state to new remote state, using terraform init.

In my .terraform/terraform.tfstate file, which was created with 0.8.8, I had both remote and backend keys, both configured for S3.

However, the remote config did not have profile (and role) set - which ultimately meant Terraform 0.10.2 was not assuming my non-default profile when trying to read the old state.

@anugnes commented Aug 30, 2017

I just had the same issue with 0.10.2 and my AWS credentials file containing a default profile, as well as an additional one.

As already reported, it looks like when declaring the remote backend, Terraform ignores the "profile" directive and only picks up the default profile.

I was able to prove this - and fix my problem as well - simply by renaming my additional profile to "default" and giving the previously default one another name. I was lucky, as in my case it was the right thing to do anyway :D

@TimJDFletcher commented Oct 2, 2017

Worth noting that you can sidestep this bug by calling terraform with a profile name in the environment variable AWS_PROFILE.

i.e.:

AWS_PROFILE=none-default-profile terraform init works, but terraform init does not.

I have the following in my backend config:

terraform {
    backend "s3" {
        bucket = "bucket"
        key    = "tfstate/production"
        region = "eu-west-1"
        profile = "none-default-profile"
    }
}

@rererecursive

@TimJDFletcher Unfortunately your solution doesn't work for me (I'm using 0.10.7). My config is identical to yours, yet when I run it in a clean environment I receive:

root@user:/app# AWS_PROFILE=default ./terraform init

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: AccessDenied: Access Denied
	status code: 403, request id: 0CC6149E9728592F, host id: NspHQGKR2o7Po+nKn6T7RF3esMGI3/OAKY0YNXvnop55aZ9HDKTAifTGZsx96zdUghRMN6IdBWw=

This same error occurs whether or not I change/hardcode any of the profile or config options (access_key, secret_key, profile, etc.), including the solutions posted in #5839. None of the solutions that others posted work for me.

Does anyone have any successful solutions? Am I perhaps missing a permission setting on AWS?

@TimJDFletcher commented Oct 6, 2017

I don't have a default aws profile defined at all.

My S3 access is cross-account, with the following policy applied at the bucket level:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowList",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::1234567890:user/terraform"
            },
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name"
            ]
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::1234567890:user/terraform"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name/production*"
            ]
        }
    ]
}

@gabordk commented Oct 6, 2017

Unfortunately this issue still exists with Terraform 0.10.7 + S3 backend storage.
Nothing other than the default profile can be used.

@TimJDFletcher commented Oct 7, 2017

As I explained, I do not have a default profile defined at all, and using an environment variable I can at least tell the backend which profile to use, thus:

AWS_PROFILE=none-default-profile terraform init works, but terraform init does not, because I do not have a default profile set up at all.

@fcisneros

I managed to make this work by adding s3:ListObjects and s3:ListBucket to my instance profile policy.

@anugnes commented Oct 9, 2017

@fcisneros would you be so kind as to guide me through your process, please? I'd like to simulate it :)

@TimJDFletcher

Hi @anugnes I think that the policy I posted above is the minimum you can apply to a Terraform remote state bucket. The policy restricts terraform to a single label (basically a directory) within a single bucket.

I'm not sure if you can remove the delete object permission and just allow terraform to overwrite the state files, relying on the S3 object history feature.

@fcisneros

@anugnes I think this is related to this issue, so the policy shown by @TimJDFletcher (similar to the one I ended up configuring) should do the trick

@bndw commented Oct 10, 2017

I've also been flailing with the aws provider using non-default profiles. FWIW, here's my use case:

I have multiple AWS accounts and want to deploy my Terraform resources to each account, storing their state in an S3 bucket in the given account. I'm attempting to do this with the following setup:

  • A Terraform Workspace for each AWS account
  • A profile for each AWS account in ~/.aws/credentials
  • I use a wrapper script to format the provider and backend with the appropriate aws profile and bucket.

Is there a better way to achieve this? I'm continuously stuck in weird limbo states when attempting to create/switch workspaces and notice Terraform is always falling back to the default profile.

@TimJDFletcher

I'd try removing or renaming your default account. To retain the functionality of just typing "aws XXX yyy", set the env var AWS_PROFILE.

@blockjon commented Nov 14, 2017

I just found my way to this issue when I came to the realization that I need to prefix all of my terraform commands with AWS_PROFILE=staging. I would much rather rely on the tf configuration to do this than for me to remember to type this, or export it. Major thanks to whoever fixes this very important feature.

@bowczarek commented Dec 5, 2017

Can confirm that with Terraform v0.11.1 I still get the same issue. My current workaround is, as others already said, to set the env variable before using terraform: export AWS_PROFILE=myCustomProfile. Hopefully someone fixes this soon so we can use the profile configuration in .tf files rather than an environment variable.

@dmportella (Contributor)

This should be fixable. I will investigate and see where I get to.

@slykar commented Dec 23, 2017

@jbardin

The S3 backend in the next Terraform release will be sharing the configuration code with the aws provider which solves this issue.

Does it actually work? I just started a new project and it seems to always look at the default credentials from ~/.aws/credentials during terraform init, despite having the aws provider configured. I've looked at the output of TF_LOG=trace to confirm this.

I even hardcoded my secrets for the aws provider to make sure it's not an issue related to interpolation of variables.

Only when I explicitly use -backend-config during terraform init does the local .terraform dir get created and the credentials get stored within terraform.tfstate.

I'm using version 0.10.6.

@mattolenik

This is still present on 0.11.1 and AWS provider 1.7. If I specify config with -backend-config, it works, otherwise it breaks. The on-screen prompts to fill in the config do not work.
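For context, a partial-configuration init of that shape looks roughly like this (bucket, key, and profile values are illustrative):

terraform init \
  -backend-config="profile=staging" \
  -backend-config="region=us-west-2" \
  -backend-config="bucket=some-s3-bucket" \
  -backend-config="key=staging.tfstate"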

@jbardin (Member) commented Jan 17, 2018

@slykar, I haven't had any reports that it doesn't work. You say "despite having the aws provider configured", but the provider configuration is completely separate from the backend configuration. Are you certain that you have the backend configured correctly too?

To those who have added that this is still not working for them, I sympathize that there may still be an undiagnosed issue, but the example configurations are known to work with various user profiles, and numerous production use cases are also working without issue.

What we really need here is a complete and verifiable example showing that the incorrect user profile is being used for init. Once we have a way to reproduce the issue it will be much easier to fix the root cause.

@bowczarek commented Jan 17, 2018

@jbardin I have everything set up and it does not work, both provider and backend:

provider "aws" {
  region  = "us-east-1"
  profile = "dev"
}

terraform {
  backend "s3" {
    bucket  = "xyz"
    key     = "terraform.tfstate"
    profile = "dev"
    region  = "us-east-1"
  }
}

And when I run terraform init I still get:

Error loading previously configured backend: 
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider

Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing
the values in your configuration, run "terraform init".

Terraform 0.11.2 and the newest AWS provider, 1.7. As you can see, even the error message refers to missing AWS Provider credentials, even though it is the backend, as you mentioned.

Additionally, if you want to parametrize the profile, you can't really use string interpolation with variables in the backend configuration, which is also inconvenient:

Error loading backend config: 1 error(s) occurred:

* terraform.backend: configuration cannot contain interpolations

The backend configuration is loaded by Terraform extremely early, before
the core of Terraform can be initialized. This is necessary because the backend
dictates the behavior of that core. The core is what handles interpolation
processing. Because of this, interpolations cannot be used in backend
configuration.

@jbardin (Member) commented Jan 17, 2018

@bowczarek, The error starts with Error loading previously configured backend:, which means that the stored configuration doesn't contain the aws credentials, probably because you were previously using the environment variables. The -reconfigure option mentioned above was added for this case, where you need to set the backend configuration while ignoring the previous config.

@mattolenik

I'm getting the same thing with brand-new state, on a first-time init. It breaks with the command-line prompts but not when using -backend-config. However, init seems to work if the state is already there.

@bowczarek commented Jan 18, 2018

@jbardin thanks! It's actually working now. This option is documented in the Backend Initialization section rather than General Options, which is probably why I didn't notice it or forgot about it when I read it some time ago.

@soar commented Jan 30, 2018

Same problem. The profile option in the backend configuration is ignored; only setting the AWS_PROFILE env var helps.

@jbardin (Member) commented Jan 30, 2018

Hi everyone,

As I've stated above, what we need here is a complete, reproducible example that demonstrates the stated issue. Any examples I have seen have been resolved as other issues, but I do accept that there may be edge cases that have yet to be covered.

This issue is getting unwieldy, and has too many diversions for someone trying to read through. I'm going to lock this for now, but keep it open to hopefully make it easier to find. If anyone comes across this with the same issue, please feel free to open a new one along with the configuration to reproduce it.

@hashicorp hashicorp locked as off-topic and limited conversation to collaborators Jan 30, 2018
@hashibot (Contributor)

Hello! 🤖

This issue relates to an older version of Terraform that is no longer in active development, and because the area of Terraform it relates to has changed significantly since the issue was opened we suspect that the issue is either fixed or that the circumstances around it have changed enough that we'd need an updated issue report in order to reproduce and address it.

If you're still seeing this or a similar issue in the latest version of Terraform, please do feel free to open a new bug report! Please be sure to include all of the information requested in the template, even if it might seem redundant with the information already shared in this issue, because the internal details relating to this problem are likely to be different in the current version of Terraform.

Thanks!
