S3 Remote State and Credentials in non-default profiles #13589
Hi @jbruett, unfortunately the backend had its own code for configuring the AWS client, taken from the old s3 remote state, and it looks like that code didn't properly configure the profile. The S3 backend in the next Terraform release will share its configuration code with the aws provider, which solves this issue.
@jbardin, can you confirm that's in the upcoming release?
@jbruett, it should actually be released in 0.9.3. I tested it myself on master before the release today. If it's still not working for you we can reopen this and continue investigating.
@jbardin still seeing the same error with the same workaround.
@jbruett, thanks for the update. Unfortunately 0.9.3 fixed the issue for me, so I'm not sure how to reproduce it yet. The fact that you're still getting the error is puzzling. With 0.9.3 you will be able to see the AWS debug messages if you run with TF_LOG=DEBUG.
Just updated from 0.9.2 to 0.9.3 and tested. Still seeing the same behavior. As soon as I change one of my AWS profiles to default and then comment out both profile lines in the provider and backend, it works fine.
~/.aws/credentials
main.tf
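The file contents referenced just above didn't survive the page extraction; here is a minimal sketch of the kind of setup being described (profile, region, and bucket names are hypothetical):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

[myprofile]
aws_access_key_id     = AKIAYYYYYYYYYYYYYYYY
aws_secret_access_key = yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
```

```hcl
# main.tf
provider "aws" {
  region  = "us-east-1"
  profile = "myprofile"
}

terraform {
  backend "s3" {
    bucket  = "my-terraform-state"
    key     = "example/terraform.tfstate"
    region  = "us-east-1"
    profile = "myprofile"
  }
}
```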
@jbardin I don't have AWS_PROFILE env var set, and no default profile configured in the shared creds file.
@jbruett, yes, but you are trying to access the bucket with credentials of some sort. Can you look at the debug output and see what they are, so we can determine where they are being loaded from? This may still be related to what @SeanLeftBelow is seeing, but I can't be sure. @SeanLeftBelow, if you have changed the credentials required to access the state, and you know that the state file itself is unchanged in the same location, you need to select "no" at the prompt to copy the state. I'll look into the wording there, and see if we can make it clearer that one may want to choose "no" if there is no change to the state data or location.
@jbardin I don't get that far on the terraform init.
Ok, so after deleting state (the local .terraform folder and the state file in S3) and re-running terraform init, it worked.
Thanks for the feedback @SeanLeftBelow and @jbruett. It seems both of these are manifestations of the same underlying issue. If the credentials that were stored from initialization are no longer valid for any reason (including just changing the credentials file), you may not be able to re-initialize cleanly. If the credentials exist but are incorrect, you can bypass the failures by avoiding the state migration. If the credentials no longer exist, the aws client will fail to initialize early on in the process. I think we may want to add something like an option to re-initialize the backend without attempting the state migration.
Hello, I am having this issue as well trying to use the "s3" backend. My Terraform version is 0.9.3. Using TF_LOG=trace terraform init I get:
2017/04/24 17:41:53 [DEBUG] New state was assigned lineage "e4bfa1b9-2cd7-4653-95d7-86592c8d9132"
Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please update the configuration in your Terraform files to fix this error
I am experiencing the same issue while trying to update from 0.8.7 -> 0.9.6. I have specified the profile name to be used in a similar fashion to how @SeanLeftBelow specifies above. The trace log shows that it completely ignores the profile parameter, resulting in "Access Denied".
I realized that if I define the AWS credentials as environment variables I can use the S3 backend. Give it a try and see if it works for you.
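A sketch of that workaround, assuming static keys (placeholder values): the AWS SDK used by the S3 backend reads these standard environment variables, so no profile lookup is needed.

```sh
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
terraform init
```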
FWIW, I got here searching for the same error - in my case it was because I created the S3 bucket in a different region than the one in my backend configuration.
I'm currently having this issue with 0.9.8. I'm circumventing it by using direnv to automatically set AWS_PROFILE per project directory.
FYI: Another reason you may get the "No valid credential sources found" error is if you don't have a ~/.aws/credentials file at all.
Bingo! For some reason, I didn't even have a .aws directory. For anyone who reached here searching for a solution: go on and create ~/.aws/credentials with your profile in it.
Right, the issue here is that when you use a named profile (aka not the default profile), the S3 backend does not pick it up. As a workaround I'm using direnv and setting my AWS_PROFILE environment variable per project.
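A sketch of the direnv approach (the profile name is hypothetical): an .envrc in the project directory exports the profile, and direnv loads it automatically whenever you cd into the project.

```sh
# .envrc -- run `direnv allow` once after creating it
export AWS_PROFILE=myprofile
```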
Just noting this is related to #5839, right?
That issue is regarding accessing the remote state resource, which is now a data source. There are some comments about it working with different profiles. There may be hints there though, as I'm suspecting some behavior of the aws sdk itself here.
@jbardin, TBH, these tickets seem really similar to the extent that I understand terraform and your comment, but if you think the difference should be clear to others, then I will catch up eventually. In case anyone else is stuck, here's the extent of my understanding: both tickets mention (AWS) S3 remote state in their title. Both discuss the mysterious role of the default (AWS) profile in getting credentials for the remote state (as opposed to the profile specified to deploy the actual infrastructure). I picked up a hint that remote state as a data source is different and possibly(?), partially(?) replaces the terraform backend. Are these different functionalities or just different syntax / paradigms to accomplish the same thing, i.e. to back up terraform state remotely? And maybe to clarify this discussion, here are 3 different syntaxes that I easily get confused by: Example 1: "terraform (remote state) backend", from this ticket
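The original snippet was lost in extraction; a minimal backend configuration of this kind might look like this (bucket, key, and region are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "project-a/terraform.tfstate"
    region = "us-east-1"
  }
}
```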
Example 2: i.e. "remote state resource", from another ticket:
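The snippet for option 2 was also lost; the old remote state resource (long since removed from Terraform) looked roughly like this (names hypothetical):

```hcl
resource "terraform_remote_state" "network" {
  backend = "s3"
  config {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}
```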
Example 3: i.e. "remote state data source", from the docs:
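And a sketch of the option 3 form, as documented for the Terraform versions discussed in this thread (names hypothetical):

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"
  config {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}
```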
I get confused because I've only used option 1 and since terraform won't build the backend for me, it seemed like it was effectively a data source. I haven't used option 2, but since I don't see how a terraform project could build its own remote state / backend, I just assumed it was also just a data source. So as best I can understand, option 3 just calls it what it's always been. They may be implemented in different versions in different ways (i.e. with different bugs), but they serve the same use case and I should always use option 3 (assuming its peculiar bugs don't affect my application / I can trust those bugs will be fixed soon). Am I close?
@combinatorist, I agree this can be confusing because of the rapid evolution of Terraform over the past year or so. The term "remote state" can be used in two different ways, and they are very different concepts. You can configure a backend to store your state "remotely" (option 1, documented here), or you can use another remote state as a data source to reference its resources (option 3). That older issue also has many references to "remote state" in the older, pre-backend sense, where it was configured with the terraform remote config command.
Ok, thanks for clarifying, @jbardin - that helps a lot! A key difference I just picked up from @ehumphrey-axial is that the "remote state backend" (option 1) lets you sync your current state remotely, so if you change your infrastructure, it will write your changes into that remote state. However, the "remote state data source" (option 3) lets you pull in information about another terraform project. In this case, it can read from that project's remote state, but it won't ever write to it, because it's not trying to deploy or manage infrastructure for that other project. It was confusing because neither of them creates the s3 bucket* that holds your remote state, so they seemed like the same thing. But option 1 does write into that bucket while option 3 can only read from that bucket. *I'm selfishly talking about the S3 backend, but I'm assuming similar behavior applies to other backends. Finally, I noticed in your last comment that there's another thing (the old way of configuring remote state),
which we could call option 0. If I understand correctly, then, option 1 replaced option 0 for managing the current project's remote state, and option 3 replaced option 2 for reading another project's (necessarily remote) state.
Hi @ThatGerber, thanks for the feedback here! It is becoming more clear that keeping the name of Terraform's private data file as terraform.tfstate is a source of confusion. As for this use case, I think it's covered by the options discussed above.
I encountered (and solved) this issue with [...]. In my [...]. However, the [...].
I just had the same issue with 0.10.2 and my AWS credentials file containing a default profile as well as an additional one. As already reported, it looks like when declaring the remote backend, Terraform ignores the "profile" directive and only picks up the default profile. I was able to prove this - and fix my problem as well - simply by renaming my additional profile to "default" and giving the previously-default one another name. I was lucky, as in my case it was the right thing to do anyway :D
Worth noting that you can sidestep this bug by calling terraform with a profile name in the environment var, i.e.:
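The command itself was stripped from the page; it presumably looked something like this (profile name hypothetical):

```sh
AWS_PROFILE=myprofile terraform init
```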
I have the following in my backend config:
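The backend config block didn't survive extraction; a sketch of a backend block with a named profile (all values hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket  = "example-terraform-state"
    key     = "example/terraform.tfstate"
    region  = "eu-west-1"
    profile = "crossaccount"
  }
}
```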
@TimJDFletcher Unfortunately your solution doesn't work for me (I'm using 0.10.7). My config is identical to yours, yet when I run it in a clean environment I receive:
This same error occurs whether or not I change/hardcode any of the profile or config options. Does anyone have any successful solutions? Am I perhaps missing a permission setting on AWS?
I don't have a default aws profile defined at all. My S3 access is cross-account access with the following policy applied at the bucket level:
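The policy itself was stripped from the page; a sketch of a cross-account bucket policy of the kind described (account ID, bucket name, and prefix are placeholders), limiting the other account to listing and reading/writing state objects under a single prefix:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TerraformStateList",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-terraform-state",
      "Condition": { "StringLike": { "s3:prefix": "example/*" } }
    },
    {
      "Sid": "TerraformStateObjects",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::example-terraform-state/example/*"
    }
  ]
}
```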
Unfortunately this issue still exists with Terraform 0.10.7 + S3 backend storage.
As I explained, I do not have a default profile defined at all, and using an environment var I can at least tell the backend which profile to use, thus:
I managed to make this work by adding s3:ListObjects and s3:ListBucket to my instance profile policy.
@fcisneros would you be so kind and guide me through your process, please? I'd like to simulate it :)
Hi @anugnes, I think that the policy I posted above is the minimum you can apply to a Terraform remote state bucket. The policy restricts terraform to a single label (basically a directory) within a single bucket. I'm not sure if you can remove the delete-object perm and just allow terraform to overwrite the state files, relying on the S3 object versioning (history) feature.
@anugnes I think this is related to this issue, so the policy shown by @TimJDFletcher (similar to the one I ended up configuring) should do the trick.
I've also been flailing with the aws provider using non-default profiles. FWIW, here's my use case: I have multiple AWS accounts and want to deploy my Terraform resources to each account, storing their state in an S3 bucket in the given account. I'm attempting to do this with the following setup:
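The setup snippet was lost in extraction; a rough sketch of one way such a per-account layout might look (the names and the use of -backend-config are my assumptions, not the commenter's exact config):

```hcl
# main.tf
variable "aws_profile" {}

provider "aws" {
  region  = "us-east-1"
  profile = "${var.aws_profile}"
}

terraform {
  backend "s3" {
    # Variables cannot be interpolated here, so the per-account values
    # are supplied at init time, e.g.:
    #   terraform init \
    #     -backend-config="profile=account-a" \
    #     -backend-config="bucket=account-a-terraform-state"
    key = "example/terraform.tfstate"
  }
}
```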
Is there a better way to achieve this? I'm continuously stuck in weird limbo states when attempting to create/switch workspaces and notice Terraform is always falling back to the default profile.
I'd try removing or renaming your default account. To retain the functionality of just typing "aws XXX yyy", set the env var AWS_PROFILE.
I just found my way to this issue when I came to the realization that I need to prefix all of my terraform commands with AWS_PROFILE=<profile name>.
Can confirm that with Terraform v0.11.1 I still get the same issue. My current workaround is, as others already said, to set the env variable before using terraform.
This should be fixable. I will investigate and see where I get to.
Does it actually work? Just started a new project and it seems to always look at the default profile. I even hardcoded my secrets for the [...]. Only when I explicitly use [...] does it work. I'm using version [...].
This is still present on 0.11.1 and AWS provider 1.7. If I specify the config with a non-default profile, it is ignored.
@slykar, I haven't had any reports that it doesn't work. You say "despite having the aws provider configured", but the provider configuration is completely separate from the backend configuration. Are you certain that you have the backend configured correctly too? To those who have added that this is still not working for them, I sympathize that there may still be an undiagnosed issue, but the example configurations are known to work with various user profiles, and numerous production use cases are also working without issue. What we really need here is a complete and verifiable example showing that the incorrect user profile is being used for init. Once we have a way to reproduce the issue it will be much easier to fix the root cause.
@jbardin I have everything set up and it does not work, both provider and backend:
And when I run terraform init I get the "No valid credential sources found for AWS Provider" error.
Terraform 0.11.2 and the newest AWS provider 1.7. As you can notice, even the error message complains about missing AWS Provider credentials even though it is the backend, as you mentioned. Additionally, if you want to parameterize the profile, you can't really use string interpolation with variables in the backend configuration, which is also not convenient.
@bowczarek, the error starts with "Error configuring the backend", so it is the backend that is failing to find credentials; the "AWS Provider" wording is just shared messaging. You can supply the backend settings, including the profile, at init time with -backend-config.
I'm getting the same thing with brand new state, on a first-time init. It breaks when answering the command prompts, but not when using -backend-config. However, init seems to work if the state is already there.
@jbardin thanks! It's actually working now. This option is documented in the Backend Initialization section rather than under General Options, which is probably why I didn't notice it or forgot about it when I was reading the docs some time ago.
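For anyone else landing here, the option being referenced is init's -backend-config flag, which accepts key=value pairs for backend settings; a sketch with hypothetical values:

```sh
terraform init \
  -backend-config="profile=myprofile" \
  -backend-config="bucket=my-terraform-state" \
  -backend-config="region=us-east-1"
```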
Same problem.
Hi everyone, as I've stated above, what we need here is a complete, reproducible example that demonstrates the stated issue. Any examples I have seen have been resolved as other issues, but I do accept that there may be edge cases that have yet to be covered. This issue is getting unwieldy, and has too many diversions for someone trying to read through. I'm going to lock this for now, but keep it open to hopefully make it easier to find. If anyone comes across this with the same issue, please feel free to open a new one along with the configuration to reproduce it.
Hello! 🤖 This issue relates to an older version of Terraform that is no longer in active development, and because the area of Terraform it relates to has changed significantly since the issue was opened we suspect that the issue is either fixed or that the circumstances around it have changed enough that we'd need an updated issue report in order to reproduce and address it. If you're still seeing this or a similar issue in the latest version of Terraform, please do feel free to open a new bug report! Please be sure to include all of the information requested in the template, even if it might seem redundant with the information already shared in this issue, because the internal details relating to this problem are likely to be different in the current version of Terraform. Thanks!
Terraform Version
0.9.2
Affected Resource(s)
terraform s3 remote state backend
Terraform Configuration Files
~/.aws/credentials - This would be the entire file, no other entries
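The credentials file itself was stripped from the report; based on the description (a single, non-default profile and nothing else), it presumably looked something like this (profile name and keys are placeholders):

```ini
[myprofile]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```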
MAIN.TF
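The main.tf contents were also stripped; a minimal configuration matching the described setup (bucket, key, region, and profile names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket  = "my-terraform-state"
    key     = "example/terraform.tfstate"
    region  = "us-east-1"
    profile = "myprofile"
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "myprofile"
}
```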
Debug Output
NA
Panic Output
NA
Expected Behavior
What should have happened?
The expectation is that I should be able to use a non-default profile credential to complete the backend init (terraform init) call.
Actual Behavior
What actually happened?
The command fails with the following error:
If I simply move the same credentials under the default profile in the standard shared credentials file, it works without issue.
Steps to Reproduce
terraform init
Important Factoids
none
References
None that I was able to find for 0.9.2.