AWS_PROFILE not respected for S3 backend when running terraform init/terraform workspace #20062

Closed
Stretch96 opened this issue Jan 20, 2019 · 10 comments · Fixed by #25134

Stretch96 commented Jan 20, 2019

Terraform Version

Terraform v0.11.11

Terraform Configuration Files

terraform {
  backend "s3" {
    bucket  = "example-bucket"
    key     = "example/terraform.tfstate"
    region  = "eu-west-2"
    encrypt = "true"
  }
}

Debug Output

I have created a user with no permissions except permission to assume the develop role, which has full permissions.

Example 1

Running terraform init.
This output is expected, as the user does not have permission to access the S3 bucket:

Error loading state: AccessDenied: Access Denied
	status code: 403, ...

Example 2

Running AWS_PROFILE=develop terraform init

Error configuring the backend "s3": No valid credential sources found for AWS Provider.
	Please see https://terraform.io/docs/providers/aws/index.html for more information on
	providing credentials for the AWS Provider

Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing
the values in your configuration, run "terraform init".

Example 3

Running AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=develop terraform init

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

These examples also apply to terraform workspace commands.

Additional Context

Unfortunately, terraform apply/terraform plan can't be run with AWS_SDK_LOAD_CONFIG set:

Error: Error refreshing state: 1 error(s) occurred:

* provider.aws: No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider

This makes me think there is a difference in the way credentials are loaded for init versus plan/apply.

If others can't reproduce this, I can provide TRACE logs; there are just too many redactions to go through if it can be reproduced elsewhere.

Stretch96 changed the title from "AWS_PROFILE not respected for S3 backend when running terraform init" to "AWS_PROFILE not respected for S3 backend when running terraform init/terraform workspace" on Jan 20, 2019
abiydv commented Jan 21, 2019

Hi @Stretch96,

From the outputs you have posted, it seems your AWS CLI profile is not set up to use your new user. Can you try setting up the profile using aws configure? Once that is ready, you can update your provider block to add the role ARN this user can switch to. An example is given in the docs here
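
For reference, a provider block along those lines might look roughly like this (the role ARN, profile, and region are illustrative, reusing values from elsewhere in this thread):

provider "aws" {
  region  = "eu-west-2"
  profile = "default"

  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/develop"
  }
}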

Stretch96 (Author) commented Jan 21, 2019

Hi @abiydv, the AWS CLI profile is set up correctly, e.g.:

~/.aws/credentials

[default]
aws_access_key_id=blah
aws_secret_access_key=blah

~/.aws/config:

[profile develop]
region=eu-west-2
cli_follow_urlparam=false
role_arn = arn:aws:iam::123456789012:role/develop
source_profile = default

AWS_PROFILE is respected by provider blocks, but it is not respected by the terraform block (for the S3 backend) without hardcoding a role to assume.

As shown in the examples, it does work, but AWS_SDK_LOAD_CONFIG needs to be set.

Ashex commented Jan 23, 2019

@Stretch96 I've encountered this issue, and the root of it is that Terraform doesn't seem to support role assumption within credential profiles.

The only workaround I've found is the one mentioned.

Stretch96 (Author) commented

I've found a way to make terraform crash with AWS_SDK_LOAD_CONFIG set

If you set AWS_PROFILE to a profile that uses a source_profile that doesn't exist, e.g.:

# ~/.aws/credentials
[default]
aws_access_key_id=blah
aws_secret_access_key=blah

# ~/.aws/config
[profile test_profile]
region=eu-west-2
role_arn = arn:aws:iam::123456789012:role/test-role
source_profile = oops_typo

Then run:

AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=test_profile terraform workspace list

This produces the following crash output:

2019/05/14 22:25:42 [INFO] Terraform version: 0.11.13
2019/05/14 22:25:42 [INFO] Go runtime version: go1.12
2019/05/14 22:25:42 [INFO] CLI args: []string{"/usr/local/Cellar/terraform/0.11.13/bin/terraform", "workspace", "list"}
2019/05/14 22:25:42 [DEBUG] Attempting to open CLI config file: [redacted]
2019/05/14 22:25:42 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2019/05/14 22:25:42 [INFO] CLI command args: []string{"workspace", "list"}
2019/05/14 22:25:42 [DEBUG] command: loading backend config file: [redacted]
2019/05/14 22:25:42 [TRACE] Preserving existing state lineage "[redacted]"
2019/05/14 22:25:42 [TRACE] Preserving existing state lineage "[redacted]"
2019/05/14 22:25:42 [INFO] Building AWS region structure
2019/05/14 22:25:42 [INFO] Building AWS auth structure
2019/05/14 22:25:42 [INFO] Setting AWS metadata API timeout to 100ms
2019/05/14 22:25:42 [DEBUG] plugin: waiting for all plugin processes to complete...
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1]
goroutine 1 [running]:
github.com/hashicorp/terraform/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata.unmarshalHandler
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go:119 +0x3e
github.com/hashicorp/terraform/vendor/github.com/aws/aws-sdk-go/aws/request.(*HandlerList).Run
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go:213 +0x98
github.com/hashicorp/terraform/vendor/github.com/aws/aws-sdk-go/aws/request.(*Request).Send
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/vendor/github.com/aws/aws-sdk-go/aws/request/request.go:525 +0x49c
github.com/hashicorp/terraform/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata.(*EC2Metadata).GetMetadata
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go:28 +0x29a
github.com/hashicorp/terraform/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata.(*EC2Metadata).Available(...)
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go:129
github.com/hashicorp/terraform/vendor/github.com/terraform-providers/terraform-provider-aws/aws.GetCredentials
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/vendor/github.com/terraform-providers/terraform-provider-aws/aws/auth_helpers.go:164 +0x13ea
github.com/hashicorp/terraform/vendor/github.com/terraform-providers/terraform-provider-aws/aws.(*Config).Client
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/vendor/github.com/terraform-providers/terraform-provider-aws/aws/config.go:278 +0x138
github.com/hashicorp/terraform/backend/remote-state/s3.(*Backend).configure
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/backend/remote-state/s3/backend.go:266 +0xaa6
github.com/hashicorp/terraform/helper/schema.(*Backend).Configure
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/helper/schema/backend.go:80 +0x177
github.com/hashicorp/terraform/command.(*Meta).backend_C_r_S_unchanged
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/command/meta_backend.go:1113 +0x22f
github.com/hashicorp/terraform/command.(*Meta).backendFromConfig
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/command/meta_backend.go:401 +0xe1a
github.com/hashicorp/terraform/command.(*Meta).Backend
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/command/meta_backend.go:88 +0x710
github.com/hashicorp/terraform/command.(*WorkspaceListCommand).Run
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/command/workspace_list.go:44 +0x2ef
github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli.(*CLI).Run
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/cli.go:255 +0x1dd
main.wrappedMain
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/main.go:223 +0xb04
main.realMain
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/main.go:100 +0xb4
main.main()
        /private/tmp/terraform-20190311-48945-1cw2gue/terraform-0.11.13/src/github.com/hashicorp/terraform/main.go:36 +0x3b

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

[1]: https://github.com/hashicorp/terraform/issues

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

scho commented May 24, 2019

I ran into the same issue. Simply specifying the profile in the terraform => backend block did the trick for me. After that, terraform init|plan|apply work without any permission issues:

terraform {
  backend "s3" {
    # ...
    profile        = "develop"
  }
}
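
If the profile alone isn't enough, the S3 backend also accepts a role_arn argument, so the role assumption can be configured in the backend block itself. A sketch, reusing values from earlier in this thread:

terraform {
  backend "s3" {
    bucket   = "example-bucket"
    key      = "example/terraform.tfstate"
    region   = "eu-west-2"
    encrypt  = "true"
    role_arn = "arn:aws:iam::123456789012:role/develop"
  }
}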

scalp42 (Contributor) commented Sep 5, 2019

see #22377 as well

Puvipavan commented

I had the same issue. This is how I fixed it!

According to the documentation, "If you're running Terraform from an EC2 instance with IAM Instance Profile using IAM Role, Terraform will just ask the metadata API endpoint for credentials." Therefore it always takes the instance role instead of the specified profile, so I set an environment variable to override the AWS metadata URL.

AWS_METADATA_URL=http://InvalidHost/

So in total you should set three environment variables:

        AWS_METADATA_URL=http://InvalidHost/
        AWS_SDK_LOAD_CONFIG=1
        AWS_PROFILE=develop
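
Putting those together, the full invocation would look something like this (profile name illustrative, matching the earlier examples):

        AWS_METADATA_URL=http://InvalidHost/ AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=develop terraform init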

If you need to see what is happening, try adding another environment variable for debugging:
TF_LOG=DEBUG

My terraform version is 0.12.21

bflad self-assigned this Jun 2, 2020
bflad added this to the v0.13.0 milestone Jun 2, 2020
bflad added a commit that referenced this issue Jun 4, 2020
Reference: #13410
Reference: #18774
Reference: #19482
Reference: #20062
Reference: #20599
Reference: #22103
Reference: #22161
Reference: #22601
Reference: #22992
Reference: #24252
Reference: #24253
Reference: #24480
Reference: #25056

Changes:

```
NOTES

* backend/s3: Deprecated `lock_table`, `skip_get_ec2_platforms`, `skip_requesting_account_id` arguments have been removed
* backend/s3: Credential ordering has changed from static, environment, shared credentials, EC2 metadata, default AWS Go SDK (shared configuration, web identity, ECS, EC2 Metadata) to static, environment, shared credentials, default AWS Go SDK (shared configuration, web identity, ECS, EC2 Metadata)
* The `AWS_METADATA_TIMEOUT` environment variable no longer has any effect as we now depend on the default AWS Go SDK EC2 Metadata client timeout of one second with two retries

ENHANCEMENTS

* backend/s3: Always enable shared configuration file support (no longer require `AWS_SDK_LOAD_CONFIG` environment variable)
* backend/s3: Automatically expand `~` prefix for home directories in `shared_credentials_file` argument
* backend/s3: Add `assume_role_duration_seconds`, `assume_role_policy_arns`, `assume_role_tags`, and `assume_role_transitive_tag_keys` arguments

BUG FIXES

* backend/s3: Ensure configured profile is used
* backend/s3: Ensure configured STS endpoint is used during AssumeRole API calls
* backend/s3: Prefer AWS shared configuration over EC2 metadata credentials
* backend/s3: Prefer ECS credentials over EC2 metadata credentials
* backend/s3: Remove hardcoded AWS Provider messaging
```

Output from acceptance testing:

```
--- PASS: TestBackend (16.32s)
--- PASS: TestBackendConfig (0.58s)
--- PASS: TestBackendConfig_AssumeRole (0.02s)
--- PASS: TestBackendConfig_conflictingEncryptionSchema (0.00s)
--- PASS: TestBackendConfig_invalidKey (0.00s)
--- PASS: TestBackendConfig_invalidSSECustomerKeyEncoding (0.00s)
--- PASS: TestBackendConfig_invalidSSECustomerKeyLength (0.00s)
--- PASS: TestBackendExtraPaths (13.21s)
--- PASS: TestBackendLocked (28.98s)
--- PASS: TestBackendPrefixInWorkspace (5.65s)
--- PASS: TestBackendSSECustomerKey (17.60s)
--- PASS: TestBackend_impl (0.00s)
--- PASS: TestForceUnlock (17.50s)
--- PASS: TestKeyEnv (50.25s)
--- PASS: TestRemoteClient (4.78s)
--- PASS: TestRemoteClientLocks (16.85s)
--- PASS: TestRemoteClient_clientMD5 (12.08s)
--- PASS: TestRemoteClient_impl (0.00s)
--- PASS: TestRemoteClient_stateChecksum (17.92s)
```
bflad added a commit that referenced this issue Jun 5, 2020
* deps: Update github.com/hashicorp/aws-sdk-go-base@v0.5.0

Updated via:

```
$ go get github.com/hashicorp/aws-sdk-go-base@v0.5.0
$ go mod tidy
$ go mod vendor
```

* backend/s3: Updates for Terraform v0.13.0

Reference: #13410
Reference: #18774
Reference: #19482
Reference: #20062
Reference: #20599
Reference: #22103
Reference: #22161
Reference: #22601
Reference: #22992
Reference: #24252
Reference: #24253
Reference: #24480
Reference: #25056

Changes:

```
NOTES

* backend/s3: Deprecated `lock_table`, `skip_get_ec2_platforms`, `skip_requesting_account_id` arguments have been removed
* backend/s3: Credential ordering has changed from static, environment, shared credentials, EC2 metadata, default AWS Go SDK (shared configuration, web identity, ECS, EC2 Metadata) to static, environment, shared credentials, default AWS Go SDK (shared configuration, web identity, ECS, EC2 Metadata)
* The `AWS_METADATA_TIMEOUT` environment variable no longer has any effect as we now depend on the default AWS Go SDK EC2 Metadata client timeout of one second with two retries

ENHANCEMENTS

* backend/s3: Always enable shared configuration file support (no longer require `AWS_SDK_LOAD_CONFIG` environment variable)
* backend/s3: Automatically expand `~` prefix for home directories in `shared_credentials_file` argument
* backend/s3: Add `assume_role_duration_seconds`, `assume_role_policy_arns`, `assume_role_tags`, and `assume_role_transitive_tag_keys` arguments

BUG FIXES

* backend/s3: Ensure configured profile is used
* backend/s3: Ensure configured STS endpoint is used during AssumeRole API calls
* backend/s3: Prefer AWS shared configuration over EC2 metadata credentials
* backend/s3: Prefer ECS credentials over EC2 metadata credentials
* backend/s3: Remove hardcoded AWS Provider messaging
```

Output from acceptance testing:

```
--- PASS: TestBackend (16.32s)
--- PASS: TestBackendConfig (0.58s)
--- PASS: TestBackendConfig_AssumeRole (0.02s)
--- PASS: TestBackendConfig_conflictingEncryptionSchema (0.00s)
--- PASS: TestBackendConfig_invalidKey (0.00s)
--- PASS: TestBackendConfig_invalidSSECustomerKeyEncoding (0.00s)
--- PASS: TestBackendConfig_invalidSSECustomerKeyLength (0.00s)
--- PASS: TestBackendExtraPaths (13.21s)
--- PASS: TestBackendLocked (28.98s)
--- PASS: TestBackendPrefixInWorkspace (5.65s)
--- PASS: TestBackendSSECustomerKey (17.60s)
--- PASS: TestBackend_impl (0.00s)
--- PASS: TestForceUnlock (17.50s)
--- PASS: TestKeyEnv (50.25s)
--- PASS: TestRemoteClient (4.78s)
--- PASS: TestRemoteClientLocks (16.85s)
--- PASS: TestRemoteClient_clientMD5 (12.08s)
--- PASS: TestRemoteClient_impl (0.00s)
--- PASS: TestRemoteClient_stateChecksum (17.92s)
```
bflad (Contributor) commented Jun 5, 2020

Multiple fixes for credential ordering, automatically using the AWS shared configuration file if present, and profile configuration handling of the S3 Backend have been merged and will release with version 0.13.0-beta2 of Terraform.
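
If that works as described, setting the profile alone, e.g. AWS_PROFILE=develop terraform init, should be enough for the S3 backend without also exporting AWS_SDK_LOAD_CONFIG, assuming the profile exists in the shared configuration file.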

byron70 commented Jun 9, 2020

Sorry about the bad post-fix etiquette, but I thought many may not be able to wait until 0.13.

A workaround that worked for me with 0.12.26 was removing all AWS_ environment variables, keeping all my AWS profile configs in ~/.aws/credentials, and explicitly setting the shared_credentials_file path along with the profile attribute. I did this successfully on both Windows (c:/users/me/.aws/credentials) and in a Linux container.

Also, this worked with the S3 backend in one AWS account and provider resources in another (so two different AWS credentials profiles).
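
As a sketch of that setup (bucket, key, and paths are illustrative, reusing values from this thread), the backend block would look roughly like:

terraform {
  backend "s3" {
    bucket                  = "example-bucket"
    key                     = "example/terraform.tfstate"
    region                  = "eu-west-2"
    profile                 = "develop"
    shared_credentials_file = "/home/me/.aws/credentials" # or "c:/users/me/.aws/credentials" on Windows
  }
}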

mildwonkey pushed a commit that referenced this issue Jun 12, 2020

ghost commented Jul 6, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Jul 6, 2020