
google_compute_project_metadata_item overwrites existing key/value #5514

Closed
luigizhou opened this issue Jan 27, 2020 · 10 comments · Fixed by GoogleCloudPlatform/magic-modules#3063, #5576 or hashicorp/terraform-provider-google-beta#1714

@luigizhou

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

Terraform v0.12.18

  • provider.google v3.5.0

Affected Resource(s)

  • google_compute_project_metadata_item

Terraform Configuration Files

provider "google" {
  project = "project-name"
}

resource "google_compute_project_metadata_item" "test" {
  key     = "ssh-keys"
  value   = "user:random_ssh_public_key"
}

Expected Behavior

Applying this to an account with an existing ssh-keys config should gracefully fail / return a 403

Actual Behavior

The ssh-keys value is being overwritten even if it is managed by another Terraform workspace/Terraform codebase

Steps to Reproduce

  1. terraform apply
  2. terraform workspace new example
  3. terraform apply
  4. terraform destroy
  5. terraform workspace select default
  6. terraform plan

You'll see the resource needs to be recreated

References

@ghost ghost added the bug label Jan 27, 2020
@edwardmedia edwardmedia self-assigned this Jan 28, 2020
@edwardmedia
Contributor

@luigizhou when you plan to use workspaces this way, shouldn't you set the config like below instead? Let me know if this works for you.

resource "google_compute_project_metadata_item" "test" {
  key     = "ssh-keys-${terraform.workspace == "default" ? 5 : 1}"
  value   = "user:random_ssh_public_key ${terraform.workspace == "default" ? 5 : 1}"
}

@luigizhou
Author

@edwardmedia the workspace was just a simple way to reproduce the issue, the main problem is that I believe the behavior of Terraform, in this case, is wrong. I expected to receive a 403 because I was trying to create a resource that existed already.

The real-world scenario that I want to avoid is having someone testing their terraform code and accidentally delete/overwrite my configured project metadata item (especially if this is managed by a different Terraform already).

I'm not sure if this is possible with the API available from google cloud, in which case I believe it would help to at least highlight this particular behaviour in the documentation.

Anyway, I tried your code and, although multiple keys are created (ssh-keys-1 and ssh-keys-5), they don't translate into actual ssh-keys metadata that can be used to log in to instances in the project.

@edwardmedia
Contributor

edwardmedia commented Jan 29, 2020

@luigizhou I don't think Terraform checks whether the resource already exists for the purpose you describe. If that check happened on every create, how would Terraform perform updates?
Instead of using workspaces, have you checked out the backend feature, which stores state files remotely? GCS is an option.
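
For reference, a remote GCS backend is configured like this (the bucket and prefix names here are placeholders, not values from this thread):

```hcl
terraform {
  backend "gcs" {
    bucket = "my-terraform-state-bucket" # placeholder bucket name
    prefix = "env/default"               # placeholder state prefix
  }
}
```

This only shares state between collaborators on the same configuration; as discussed below, it does not prevent a separate configuration from touching the same metadata key.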

@luigizhou
Author

@edwardmedia I don't think the solution you are proposing makes sense. Let's say, for the sake of argument, that I used the backend feature. What would happen if another user applied their own Terraform code defining their own google_compute_project_metadata_item in a repo separate from mine? They may not be aware of a pre-existing resource defined by another Terraform configuration, and I would expect them to either receive a 409 (I mistakenly mentioned 403 earlier, which was the wrong response code) or to find this overwriting behaviour highlighted in the documentation.

Please take a look at the google_dns_managed_zone resource, or even the google_service_account resource. Both return a 409 stating that the resource already exists if it is already defined within the Google Cloud project.

To answer your other question, create and update are two very different operations in Terraform.
When creating a resource, Terraform should check whether said resource already exists and return a 409 if it does. An update operation instead requires Terraform to already be tracking the resource, in which case it changes an existing managed resource.

Because this should be the ideal behaviour of Terraform, anything that doesn't fully comply with it should be highlighted in the documentation if it cannot be fixed in the first place.
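
The create-versus-update distinction described above can be sketched as follows. This is an illustrative model only, not the provider's actual code: the metadata map, ErrAlreadyExists, and the function names are all hypothetical.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrAlreadyExists mimics the HTTP 409 Conflict that resources like
// google_service_account surface when Create hits an existing object.
var ErrAlreadyExists = errors.New("409: resource already exists")

// metadata stands in for a project's metadata key/value store.
type metadata map[string]string

// create fails with a 409-style error when the key is already present,
// which is the behaviour the issue asks for.
func (m metadata) create(key, value string) error {
	if _, ok := m[key]; ok {
		return fmt.Errorf("creating %q: %w", key, ErrAlreadyExists)
	}
	m[key] = value
	return nil
}

// update only touches keys that already exist, i.e. resources
// Terraform is already managing in its state.
func (m metadata) update(key, value string) error {
	if _, ok := m[key]; !ok {
		return fmt.Errorf("updating %q: key not found", key)
	}
	m[key] = value
	return nil
}

func main() {
	md := metadata{"ssh-keys": "user:existing_key"}
	fmt.Println(md.create("ssh-keys", "user:new_key")) // conflict, value untouched
	fmt.Println(md.update("ssh-keys", "user:new_key")) // nil, value replaced
}
```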

Please also take a look at this related issue if it is still unclear what I'm trying to say:

#1214

@edwardmedia
Contributor

edwardmedia commented Jan 29, 2020

@luigizhou Do you use GitHub or another tool to manage the code? Would the second person be able to see from source control that the key is already in use? Does source control help address your concerns?

@luigizhou
Author

luigizhou commented Jan 29, 2020

Hey @edwardmedia, I didn't mean to antagonise you; I'm just trying to be as clear as possible since we are in two very different timezones. I apologise if I came across badly.

To clarify, I believe that with best practices, effective communication, and good access control to GCP the issue can be mitigated. I'm just looking to see whether it's possible to implement a solution upstream, either by fixing it from the provider's point of view or, if that is not possible, by adding information about this behaviour to the documentation page.

@edwardmedia
Contributor

@luigizhou You don't need to feel bad at all; I am here to assist you. My understanding is that you are trying to avoid overwriting existing keys. If you use source control, does that help? If my understanding is not accurate, could you explain your use case in a little more detail?
To answer your question: technically, yes, it would be possible to implement steps that check whether a resource has already been created. But given the massive number of resources, it is not practical to implement that check logic everywhere. That is why Terraform offers state files that reflect the state of your resources. If Terraform could verify resources in real time, would state files still be needed?

@ish-xyz

ish-xyz commented Jan 30, 2020

@edwardmedia I ran into the same behaviour.

I did the following:

terraform workspace select default
terraform apply
→ ssh key created

terraform workspace select testing
terraform apply
→ the new ssh key overwrites the previous one

As said before, the expected behaviour should be a 409 HTTP error, as with other GCP resources.

Could you please flag this as a bug?

@danawillow
Contributor

I took a look at the original implementation of this resource (#176) and it doesn't seem like having it overwrite an existing entry was a conscious decision, so I'm happy to make the change to make it fail when it tries to do so. That would bring it in line with the behavior that most of the rest of our resources have.

For some clarity: the 409 that most other resources (like service account) return is just a pass-through from the API when we try to blindly call Create on something that already exists. compute_project_metadata_item doesn't have its own API endpoint: instead, we call GET for project metadata, patch the response such that it contains the entry we want, and send it back. That's why it doesn't natively have the same behavior as other resources.

Will get started on a fix now.
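
The GET-patch-send-back flow described above can be sketched like this, with the existence check that the fix adds. The item type and setMetadataItem function are hypothetical stand-ins for illustration, not the provider's real implementation or the GCP SDK.

```go
package main

import "fmt"

// item mirrors one entry in the project metadata list that a GET of
// project metadata returns (the names here are illustrative).
type item struct{ Key, Value string }

// setMetadataItem sketches the provider's read-modify-write cycle:
// fetch all project metadata, splice in the new entry, write it back.
// The existence check turns a silent overwrite into an error, matching
// the 409 behaviour of resources with their own Create endpoint.
func setMetadataItem(items []item, key, value string) ([]item, error) {
	for _, it := range items {
		if it.Key == key {
			// Previously the provider would replace it.Value here; failing
			// instead keeps separate configs from clobbering each other.
			return nil, fmt.Errorf("metadata key %q already present", key)
		}
	}
	return append(items, item{key, value}), nil
}

func main() {
	existing := []item{{"ssh-keys", "user:key_from_other_workspace"}}

	// Conflicting key: refused rather than overwritten.
	if _, err := setMetadataItem(existing, "ssh-keys", "user:new_key"); err != nil {
		fmt.Println("refused:", err)
	}

	// Distinct key: appended to the patched metadata before write-back.
	updated, _ := setMetadataItem(existing, "enable-oslogin", "TRUE")
	fmt.Println(len(updated)) // 2
}
```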


@ghost ghost locked and limited conversation to collaborators Mar 28, 2020