
AWS S3 bucket policies overwrite each other #6334

Open
xird opened this issue Nov 2, 2018 · 4 comments
Labels
service/s3 Issues and PRs that pertain to the s3 service. upstream-terraform Addresses functionality related to the Terraform core binary.

Comments


xird commented Nov 2, 2018

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Report

I'm aware of #409, but that ticket made no mention of any plans to mitigate the problem of earlier bucket policies being discarded, which I believe is the biggest problem here.

What happened here was that I attempted to add a bucket policy to a production bucket without realizing that the bucket already had a policy. Terraform happily ran and promptly removed the original policy from the bucket, causing all kinds of headaches for the systems using the bucket. I can't see a situation where this would be the desired end result, so I'm considering this a bug.

The easiest way to deal with this would be for Terraform to refuse to run if there are multiple aws_s3_bucket_policy resources pointing to the same bucket. Another option would be to merge the Statements from each resource into the bucket's policy, but I think it would be better to just exit and have the user merge the policies in the configuration.

Terraform Version

v0.11.10 + provider.aws v1.41.0

Affected Resource(s)

  • aws_s3_bucket_policy

Terraform Configuration Files

provider "aws" {
  region     = "eu-west-1"
  version    = "~> 1.14"
}

resource "aws_s3_bucket" "test_bucket" {
  bucket = "tf-bucket-policy-test-bucket2"
}

resource "aws_s3_bucket_policy" "test_bucket_policy_a" {
  bucket = "${aws_s3_bucket.test_bucket.id}"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
        "Principal": {
           "AWS": "arn:aws:iam::811061967346:root"
        },
        "Effect": "Allow",
        "Action": [
           "s3:ListBucket"
        ],
        "Resource": ["arn:aws:s3:::tf-bucket-policy-test-bucket2"]
    }
  ]
}
POLICY
}

resource "aws_s3_bucket_policy" "test_bucket_policy_b" {
  bucket = "${aws_s3_bucket.test_bucket.id}"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
        "Principal": {
           "AWS": "arn:aws:iam::131338444691:root"
        },
        "Effect": "Allow",
        "Action": [
           "s3:ListBucket"
        ],
        "Resource": ["arn:aws:s3:::tf-bucket-policy-test-bucket2"]
    }
  ]
}
POLICY
}

Expected Behavior

Either the policies should be merged into a single bucket policy, or Terraform should refuse to run if there are conflicting bucket policies referring to the same bucket resource.
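For illustration, the manually merged policy that the first option would produce, combining both Statement entries from the configuration above into a single resource, would look roughly like this (a sketch based on the accounts and bucket name in this issue):

```
resource "aws_s3_bucket_policy" "test_bucket_policy" {
  bucket = "${aws_s3_bucket.test_bucket.id}"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Principal": { "AWS": "arn:aws:iam::811061967346:root" },
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::tf-bucket-policy-test-bucket2"]
    },
    {
      "Principal": { "AWS": "arn:aws:iam::131338444691:root" },
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::tf-bucket-policy-test-bucket2"]
    }
  ]
}
POLICY
}
```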

Actual Behavior

Only policy B is left in the bucket's policy. Policy A is discarded.

Steps to Reproduce

  • Rename the bucket in the above Terraform script (due to global bucket namespace).
  • Remove "test_bucket_policy_b" from the script.
  • Run the script.
  • Add "test_bucket_policy_b" back in the script.
  • Re-run the script.
@bflad bflad added upstream-terraform Addresses functionality related to the Terraform core binary. service/s3 Issues and PRs that pertain to the s3 service. labels Nov 2, 2018
bflad (Contributor) commented Nov 2, 2018

Very briefly, the solution is not exactly straightforward since provider resources are completely independent of each other (currently) in terms of how Terraform core works with and handles them. There are likely two fixes/enhancements that can occur here to help this situation (which is fairly common across the provider for APIs that use the same "PUT" type methodology for Create and Update).

Within the existing resource, we can split up the Create and Update functions to do the following (in pseudo Go code):

func resourceXCreate(d *schema.ResourceData, meta interface{}) error {
  // Likely grab a mutex here
  // Perform a read against the API
  if output != nil {
    return fmt.Errorf("resource already exists")
  }

  err := resourceXUpdate(d, meta)
  // Likely release the mutex here
  return err
}

func resourceXUpdate(d *schema.ResourceData, meta interface{}) error {
  // ... existing logic ...
  return resourceXRead(d, meta)
}

The problem with the above setup is that Terraform runs with a default concurrency of 10. This solution does not handle race conditions well (the reads may occur at the same time prior to update, so we would have to implement a mutex) and does not account for API eventual consistency (there is nothing we can really do here other than potentially retry the read on create for a minute or two to be sure nothing exists). Given the complexity introduced by a solution like this, I believe we have shied away from this setup in the past.
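The read-then-write race described above can be illustrated with a minimal, self-contained sketch (hypothetical names throughout; an in-memory map stands in for the remote API). The mutex makes the existence check and the write atomic, so when two "resources" race to create a policy for the same bucket, exactly one fails instead of silently overwriting the other:

```go
package main

import (
	"fmt"
	"sync"
)

// fakeAPI stands in for the remote service: one policy slot per bucket,
// where an unguarded PUT would overwrite whatever is already there.
type fakeAPI struct {
	mu       sync.Mutex
	policies map[string]string
}

// createPolicy refuses to overwrite an existing policy. The mutex makes
// the read-then-write sequence atomic; without it, two concurrent creates
// could both observe "no policy" and the second would clobber the first.
func (a *fakeAPI) createPolicy(bucket, policy string) error {
	a.mu.Lock()
	defer a.mu.Unlock()
	if _, exists := a.policies[bucket]; exists {
		return fmt.Errorf("policy for bucket %q already exists", bucket)
	}
	a.policies[bucket] = policy
	return nil
}

func main() {
	api := &fakeAPI{policies: map[string]string{}}
	var wg sync.WaitGroup
	errs := make(chan error, 2)
	// Two concurrent creates, mirroring Terraform's parallel resource graph.
	for _, p := range []string{"policy-a", "policy-b"} {
		wg.Add(1)
		go func(policy string) {
			defer wg.Done()
			errs <- api.createPolicy("tf-bucket-policy-test-bucket2", policy)
		}(p)
	}
	wg.Wait()
	close(errs)
	failures := 0
	for err := range errs {
		if err != nil {
			failures++
		}
	}
	// Exactly one of the two racing creates is rejected.
	fmt.Println(failures)
}
```

Note this only serializes calls within one provider process; as the comment above points out, it does nothing for API eventual consistency or for writes made outside Terraform.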

That said, some potential mitigation could occur in Terraform core and/or the provider SDK to allow some form of plan-time validation that two resources with the same exact reference/value for X attribute triggers a validation error. Neither the provider SDK nor Terraform core have any support for this functionality currently as far as I know. To get the ball rolling on that enhancement would require filing upstream issue(s) and waiting for an implementation.
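As a rough sketch of what such plan-time validation might check (hypothetical logic, not actual Terraform core or SDK code): collect the value of the conflicting attribute across all resources of one type and flag any value claimed by more than one resource.

```go
package main

import (
	"fmt"
	"sort"
)

// findDuplicates returns attribute values that appear on more than one
// resource, e.g. two aws_s3_bucket_policy resources sharing one bucket.
func findDuplicates(attrByResource map[string]string) []string {
	counts := map[string]int{}
	for _, v := range attrByResource {
		counts[v]++
	}
	var dups []string
	for v, n := range counts {
		if n > 1 {
			dups = append(dups, v)
		}
	}
	sort.Strings(dups) // deterministic output for error messages
	return dups
}

func main() {
	// Mirrors the configuration in this issue: two policy resources,
	// one bucket.
	resources := map[string]string{
		"aws_s3_bucket_policy.test_bucket_policy_a": "tf-bucket-policy-test-bucket2",
		"aws_s3_bucket_policy.test_bucket_policy_b": "tf-bucket-policy-test-bucket2",
	}
	if dups := findDuplicates(resources); len(dups) > 0 {
		fmt.Printf("Error: multiple aws_s3_bucket_policy resources target bucket(s) %v\n", dups)
	}
}
```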

Hopefully this context provides some benefit.


OJFord commented Feb 20, 2020

policy can already be specified on an aws_s3_bucket resource; wouldn't it be better to simply (deprecate &) remove the aws_s3_bucket_policy resource, since at most one can exist per aws_s3_bucket?

Even in cases where "at most X" of something is allowed per other resource Y, and especially now that Terraform has dynamic blocks, I'd argue that they should always be specified on Y so that Y can encapsulate that logic.

It's misleading to suggest it's a separate resource when there's such tight coupling, IMO.
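For comparison, the inline form referred to here (the policy argument that the aws_s3_bucket resource supported in the provider versions under discussion) would express policy A from this issue as a sketch like:

```
resource "aws_s3_bucket" "test_bucket" {
  bucket = "tf-bucket-policy-test-bucket2"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Principal": { "AWS": "arn:aws:iam::811061967346:root" },
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::tf-bucket-policy-test-bucket2"]
    }
  ]
}
POLICY
}
```

Because the policy lives on the bucket resource itself, a second conflicting definition is impossible by construction.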


cristim commented Mar 12, 2020

I just ran into this, spent a lot of time figuring it out the hard way, and eventually got it to work by converting the multiple policies into a combined one (which was also quite hard, because I had multiple related Deny rules that conflicted with each other and had to be combined into one rule with multiple conditions).

I totally agree with @OJFord's point that this should be enforced as a singleton by specifying it at the bucket level.


kcd83 commented Aug 11, 2021

Perhaps the solution is a documentation one: an "at most 1" warning like the one on the exclusive aws_iam_policy_attachment resource:

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy_attachment
