
prevent_destroy should let you succeed #3874

Open
ketzacoatl opened this issue Nov 12, 2015 · 156 comments
Comments

@ketzacoatl
Contributor

Call me crazy, but I'm willing to call the current implementation of prevent_destroy a bug. Here is why: the current implementation of this flag prevents you from using it for half of its use cases.

The net result is more frustration when trying to get Terraform to succeed without destroying your resources; prevent_destroy adds to the frustration more than it alleviates it.

prevent_destroy is for these two primary use cases, right?

  1. you don't want this resource to be deleted, and you want to see errors when TF tries to do that
  2. you don't want this resource to be deleted, and you don't want to hear a peep out of TF - TF should skip over its usual prerogative to rm -rf on change.
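
For reference, the flag in question is set like this today (a minimal sketch; the resource and names are illustrative):

resource "aws_instance" "important" {
  # ... instance arguments ...

  lifecycle {
    # Today this makes any plan that would destroy the resource fail with an error.
    prevent_destroy = true
  }
}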

I see no reason why TF must return an error when using prevent_destroy for the second use case, and in doing so, TF is completely ignoring my utterly clear directive to let me get work done. As a user, I end up feeling as though TF is wasting my time because I am focused on simple end goals which I am unable to attain while I spin my wheels begging TF to create more resources without destroying what exists.

You might say the user should update their plan to not be in conflict, and I would agree that is what you want to do in most cases. But, honestly, that is not always the right solution for the situation at hand when using a tool like TF in the real world. I believe in empowering users, and the current implementation of this flag prevents sensible use of the tool.

@jen20
Contributor

jen20 commented Nov 12, 2015

Hi @ketzacoatl - thanks for opening this! Based on your description I'm certainly sympathetic to the idea that Terraform should not terminate with an error code if the user intent is to prevent resources from being deleted, but I'm inclined to say that the output should indicate which resources prevent_destroy affected during execution. @phinze, do you have any thoughts on this?

@apparentlymart
Contributor

Definitely sympathetic about this use-case too. I think a concern is that if Terraform fails to include one part of the update then that may have downstream impact in the dependency graph, which can be fine if you're intentionally doing it but would be confusing if Terraform just did it "by default".

Do you think having the ability to exclude resources from plan, as proposed in #3366, would address your use-case? I'm imagining the following workflow:

  • Run terraform plan and see the error you described.
  • Study the plan and understand why the error occurred and decide whether it's ignorable.
  • If it's ignorable, re-run terraform plan with an exclude argument for the resource in question, thus allowing the plan to succeed for all of the remaining resources.

I'm attracted to this solution because it makes the behavior explicit while still allowing you to proceed as you said. It requires you to still do a little more work to understand what is failing and thus what you need to exclude, but once you're sure about it you only need to modify your command line rather than having to potentially rebuild a chunk of your config.
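
To make that last step concrete, here is a sketch of the command line under the exclusion syntax proposed in #3366 (hypothetical - the ! prefix for -target has not been merged):

# Plan everything except the protected instance, then apply that saved plan.
terraform plan -target='!aws_instance.protected' -out=partial.tfplan
terraform apply partial.tfplan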

@ketzacoatl
Contributor Author

I'd agree @jen20, I am primarily looking for the ability to tell TF that it does not need to quit/error out hard. Same on @apparentlymart's comment on default behavior - I agree, this is a specific use case and not meant as a default.

Do you think having the ability to exclude resources from plan, as proposed in #3366, would address your use-case?

I had to re-read that a few times to make enough sense out of how that works (the doc addition helps: "Prefixing the resource with ! will exclude the resource." - this is for the -target arg). That is close, but if my understanding is correct, no, it would not get me through.

In my nuanced situation, I have an aws_instance resource, and I increased count and modified user_data. I tell TF to ignore user_data, but #3555 is preventing that from working, and so TF wants to destroy my instance before creating additional ones. All I want is for TF to create more resources that fit the current spec, leaving the existing node alone (I'm changing user_data, just leave it be). I would like to see the same if I change EBS volumes.
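
Roughly, the config in question looks like this (a minimal sketch; the AMI and values are illustrative):

resource "aws_instance" "worker" {
  ami           = "ami-0abc1234"                # illustrative
  instance_type = "t2.micro"
  count         = 3                             # bumped up from 2
  user_data     = "${file("cloud-config.yml")}" # changed, and should be ignored

  lifecycle {
    ignore_changes  = ["user_data"]  # not honored here because of #3555
    prevent_destroy = true
  }
}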

#3366 is to use exclude with -target, which would have TF skip that resource - which does not help when you want to modify/update the resource TF wants to destroy. TF wants to destroy a resource, and I want that resource both left alone and included in the plan to apply.

When I found prevent_destroy in the docs, it sounded perfect, except it was clear that it would not work because it would throw an error if it ran into a situation where TF wanted to destroy, but prevent_destroy was enabled. I believe a user should be able to tell TF that hard error/exit can be skipped this time.

@mrfoobar1

Would it be possible to get an additional flag when calling: terraform plan -destroy [ -keep-prevent-destroy ]

I have the same problem: I have a few EIPs associated with some instances. I want to be able to destroy everything but keep the EIPs, for obvious reasons like whitelisting, but I get the same kind of problem. I understand what destroy is all about, but in some cases it would be nice to get a warning saying this and that didn't get destroyed because of lifecycle.prevent_destroy = true.

@ketzacoatl exclude would be nice!

@erichmond

+1, I need something along these lines as well.

Would #3366 allow you to skip destroying a resource, but modify it instead? My specific use case is that I have a staging RDS instance I want to persist (never be destroyed), but I want the rest of my staging infrastructure to disappear. As a side effect of the staging environment disappearing, I need to modify the security groups on the RDS instance, since the security group itself is being deleted.

So, if I had

  • two AWS instances
  • one rds security group
  • one rds instance.

Upon running "terraform destroy -force" I'd see:

  • two AWS instances shutting down
  • one RDS security group disappearing
  • one RDS instance would change to remove the RDS security group associated with it, but the instance itself would persist (but essentially be unreachable).

@phinze
Contributor

phinze commented Dec 3, 2015

Hey folks,

Good discussion here. It does sound like there's enough real world use cases to warrant a feature here.

What about maintaining the current semantics of prevent_destroy and adding a new key called something like skip_destroy indicating: any plan that would destroy this resource should be automatically modified to not destroy it.

Would something like this address all the needs expressed in this thread? If so, we can spec out the feature more formally and get it added.
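
A sketch of what that might look like in config (skip_destroy is the proposed key and is hypothetical - Terraform does not implement it):

resource "aws_ebs_volume" "data" {
  # ... volume arguments ...

  lifecycle {
    # Hypothetical: silently drop any destroy of this resource from the plan,
    # instead of failing the plan the way prevent_destroy does.
    skip_destroy = true
  }
}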

@ketzacoatl
Contributor Author

@phinze, that sounds good, yes. I'd hope that in most cases, TF would be able to let the apply proceed and let the user flag some resources as being left alone/not destroyed, and your proposal seems to provide the level of control needed while retaining sensible semantics.

@erichmond

👍 to what @ketzacoatl said

@trmitchell7

👍 to what @phinze proposed.
This would be very convenient for me right now. :)

@chadgrant

I keep running into this. I would like the ability for TF to only create a resource if it does not exist and not delete it. I would like to keep some EBS or RDS data around and keep the rest of my stack ephemeral (letting TF apply/destroy at will).

Currently I've been doing this with different projects/directories, but it would be nice to keep the entire stack together as one piece.

I too thought prevent_destroy would not create an error, and I have been hacking my way around it quite a bit :(

@tsailiming

👍 to what @phinze said. During apply, I want it to be created but ignored during destroy. Currently, I have to explicitly define all the other targets just to ignore one S3 resource.

@gservat

gservat commented Mar 27, 2016

+1 - just ran into this. Another example is key pairs. I want to create (import) them if they don't exist, but if I destroy, I don't want to delete the keypair as other instances may be using the shared keypair.

Is there a way around this for now?

@mrfoobar1

Yes, split your terraform project into multiple parts.

Example:

  • base
  • core (like persistent data)
  • application stack
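
One possible layout (illustrative):

infrastructure/
├── base/   # networking, DNS zones - rarely destroyed
├── core/   # persistent data: RDS, EBS, EIPs
└── app/    # ephemeral application stack - safe to destroy and recreate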


@cescoferraro

This is a must for me to be able to work with digitalocean_volume.

@colutti

This comment was marked as off-topic.

@mitchellh
Contributor

Changing the flags here to enhancement, but still agree this is a good idea.

@mitchellh mitchellh added enhancement and removed bug labels Oct 27, 2016
@bbakersmith

This comment was marked as off-topic.

@jbrown-rp

Is this being looked at? I can't imagine there are many use cases that would NOT benefit from it.
One example is 'Anyone using key pairs ever'.

@steve-gray

This is absolutely one of the banes of my life too. I've got dozens of resources I want to preserve from accidental overwrites - such as DynamoDB tables. A pair of flags for:

  • Keeping items that you prevent_destroy on (i.e. Don't delete the users from DynamoDB, ever - just skip it during a routine destroy)
  • Destroy, force.

The flags could be something explicit like:

  • terraform destroy --skip-protected
  • terraform destroy --force-destroy-protected

This would allow us to have the desired behaviour and only require operator intervention in the case where the resource still exists but cannot be mutated into the target state during a terraform apply (i.e. if you've still got the same table, but the keys are now incompatible, or some other potentially destructive update).

@glasser
Contributor

glasser commented Jan 19, 2017

Here's the use case we'd like this for: we have a module that we can use either for production (where some resources like Elastic IPs should not be accidentally deleted) or for running integration tests (where all resources should be destroyed afterwards).

Because of #10730/#3116, we can't set these resources to be conditionally prevent_destroy, which would be the ideal solution. As a workaround, we'd be happy to have our integration test scripts run terraform destroy --ignore-prevent-destroy if that existed.
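
For illustration, this is roughly what we'd like to write, which Terraform rejects because lifecycle arguments must be literal values (#3116):

variable "is_production" {}

resource "aws_eip" "ingress" {
  # ... EIP arguments ...

  lifecycle {
    prevent_destroy = "${var.is_production}"  # not allowed - lifecycle values must be literals
  }
}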

@andyjcboyle

andyjcboyle commented Jan 24, 2017

This would definitely be a useful feature.

I've been using Terraform for less than a month and ran into needing this feature in order to protect a DNS managed zone. Everything else in my infrastructure is transient, but recreating the DNS zone brings with it computed (potentially new) name servers on what is a delegated zone, and that would introduce an unnecessary manual step to update the parent DNS managed zone - not to mention the DNS propagation delay giving any automated testing much higher latency.

Reading above, it looks like the workaround is to split my project into different parts. I'm not sure I can pass a resource from one project into another, but I guess I can use variables in the worst-case scenario.

@kaii-zen

I'm hitting a slightly different use case with Vault. I'm not 100% sure whether this belongs here. Might be best handled in the Vault resource itself.

Example:

resource "vault_generic_secret" "github_auth_enable" {
  path      = "sys/auth/github"
  data_json = "...some json..."
}

resource "vault_generic_secret" "github_auth_config" {
  path      = "auth/github/config"
  data_json = "...some json..."
  depends_on = ["vault_generic_secret.github_auth_enable"]
}

The problem is that the 'auth/github/config' path does not even support the delete operation: the entire 'auth/github' prefix gets wiped as soon as 'sys/auth/github' is deleted. Not only does this result in an error, but also a broken state: a subsequent apply would assume that key still exists.

@mengesb
Contributor

mengesb commented Feb 24, 2017

My instance of this issue involves rapid development with docker_image / docker_container resources.

I set prevent_destroy = true on the docker_image resources because I don't want Terraform deleting the image from my disk, so that I can rapidly destroy/create and iterate during development. When I set that, I now have to use a fancy scripting method to build a targeted destroy list that destroys everything BUT the docker_image resources:

TARGETS=$(for I in $(terraform state list | grep -v docker_image); do echo " -target $I"; done); echo terraform destroy $TARGETS

What I would like would be two modes: one that allows me to still succeed because the plan says "hey, don't destroy this", and if I am bold and say -force, what I mean is "yeah... I said not to destroy it, but I'm forcing you to do it anyway... OBEY ME!"

@sbocinec

sbocinec commented Jan 27, 2024

@evbo you are right, that's a perfectly valid case. I saw a bunch of people mentioning doing state rm manually, and I was quick to draw a conclusion about the "majority"; I should have used "some" 🙇

@alethenorio

Is it possible to define a new resource alongside a remove block so the resource is created on apply and immediately removed from the state (but not destroyed)?

@anthosz

anthosz commented Jan 28, 2024

Is it possible to define a new resource alongside a remove block so the resource is created on apply and immediately removed from the state (but not destroyed)?

If it's possible, the issue is that Terraform will try to recreate the resource indefinitely until you remove it from the tf configuration :/

@alethenorio

alethenorio commented Jan 28, 2024

@anthosz How so? If I understand the docs, the removed block is meant to be added alongside the definition of a resource, so as long as the order of applying them is done correctly (create the resource first, followed by the removal), it should make no difference whether they are applied from the same plan or different ones. Or did I miss something? If Terraform attempts to create the resource on the next apply because it does not exist in its state, even though a removed block exists for that resource, then I am not sure I understand what kind of use cases removed is meant to solve.

@g13013

g13013 commented Feb 7, 2024

At least introduce a new setting skip_destroy!

@camilo-s

Here's a use case this feature would be great for:
We're using Terraform to deploy data product infrastructure on demand (data mesh).
This includes a repository for data products to source-control their code (with azuredevops_git_repository).
Now, we intend to manage the full data product lifecycle with Terraform, from onboarding to decommissioning. In the latter case, though, we'd like to keep the repository for auditing purposes, so Terraform should just forget it rather than destroying it.

@BalintGeri

This comment was marked as off-topic.

@hadim

This comment was marked as off-topic.

@evbo

This comment was marked as off-topic.

@unknownconstant

This issue is blocking me from using bind9 for DNS.

I want terraform to manage the NS records for the zone, but when I come to delete the zone, bind9 can't delete the last NS record as it'd leave an invalid config. I need terraform to not delete this record, so that the zone itself can be removed.

Not sure how to do this at present but I've already wasted hours trying to find a workaround.

@unknownconstant

I've tried the removed block that came out in January but it's pretty weak in this context.

The NS record is defined in a module. I can't use the removed block because it errors, saying that the NS record is still defined (which it should be, for other configs using the module).

@francescomucio

francescomucio commented Mar 18, 2024 via email

@mbklein

mbklein commented Mar 18, 2024

All the solutions in this thread so far violate one of my long-held principles of development: “Never let the workaround become the work.”

@alethenorio

Is it possible to define a new resource alongside a remove block so the resource is created on apply and immediately removed from the state (but not destroyed)?

Just as an FYI, I tried this out and one cannot add a config-driven removed block without first removing the Terraform resource it refers to from the config. A bit unfortunate :/
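
For reference, the config-driven removal syntax looks like this (Terraform v1.7+); as noted, it only works once the original resource block has been deleted from the configuration:

removed {
  from = aws_instance.example  # address of the resource to forget

  lifecycle {
    destroy = false  # remove from state without destroying the real object
  }
}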

@willianferrari

The solution we came up with is to build our Terraform setup in layers, each with its own state. The outer layer stays when the inner is destroyed. A few examples:

  • With Snowflake we have multiple environments in the same account. Account-level objects are created in the account layer (for example human users or spending monitors); the env layers (dev/test/prod) take care of the env-specific objects.
  • For the data product repos, handle the repo creation/deletion in the external layer, and other operations in the specific data product layer.

Outputs and loops are your friends in this setup. Happy to add more details if anyone is interested.


Hello,

I need to use the same volume for several workspaces.

Can I use this same approach? My problem is that when I delete a workspace I need to keep the volume.

@simaotwx

This comment was marked as off-topic.

@arnaudfroidmont

I concur with almost everything that was said above. In our case, we would like to not delete a filesystem that was created by Terraform, so that we make sure we don't erase any data. I would love for the destroy to destroy the rest of the resources and leave alone only the ones tagged prevent_destroy.

@gxo13

This comment was marked as duplicate.

@barnuri-cp

Any update about this feature?

1 similar comment
@shad-zam

This comment was marked as duplicate.

@BenJackGill

BenJackGill commented Aug 27, 2024

This was raised in 2015 with 100+ comments and multiple concrete use cases provided, but still no resolution.

Do we give up hope?

@voycey

voycey commented Aug 27, 2024

The removed block covers some eventualities but not all of them unfortunately

@ceso

ceso commented Sep 18, 2024

Arghh... running into exactly the same issue here, but with KMS keys in AWS: I want to have them created but skip destroys. Doing a "manual" step to remove the resource from the state (even if automated in a pipeline) is quite a dirty thing to do. Is there still any plan to add this??

@simaotwx
Contributor

We run into this every now and then, and have to make changes in the Terraform code to accommodate it.
Sometimes it might be possible to decouple the resources, but in most cases it is not.
The workaround that seems best is to just remove the resource from the state. But again, that requires manual steps and waiting for the dependents to be removed.
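
That workaround, for reference, is something along these lines (the address is illustrative):

terraform state rm aws_kms_key.example   # forget the key without destroying it

Note that if the resource stays in the config, the next apply will want to create it again.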

This was raised in 2015 with 100+ comments and multiple concrete use cases provided, but still no resolution.

Do we give up hope?

Quite interesting that they have time to mark all the comments as off-topic or duplicate but they don't appear to bother replying to this issue or doing anything about it. If we wait another year, we can celebrate the 10-year anniversary.
Maybe this is something for OpenTofu instead?

@barnuri-cp

We run into this every now and then, having to do changes in the Terraform code to accommodate this. Sometimes it might be possible to decouple the resources, but in most cases it is not. The workaround that seems best is to just remove the resource from the state. But again, that requires manual steps and waiting for the dependents to be removed.

This was raised in 2015 with 100+ comments and multiple concrete use cases provided, but still no resolution.
Do we give up hope?

Quite interesting that they have time to mark all the comments as off-topic or duplicate but they don't appear to bother replying to this issue or doing anything about it. If we wait another year, we can celebrate the 10-year anniversary. Maybe this is something for OpenTofu instead?

Yeah - removing it from state, or writing code in Python and executing it with a null resource.
It's very annoying.

@nikhilparmar86

Is there any update on this feature of having skip_destroy with no errors?

@crw
Contributor

crw commented Oct 2, 2024

No update at this time. It is on our list of issues that gets revisited every planning cycle (e.g., the top 25 most upvoted issues).

@TechIsCool

Pretty disappointed by the fact that this has sat for so long. I could see an easy way out that builds on the removed block: if Terraform is trying to destroy the resource, just remove it from state instead.

Our use case is that KMS keys are managed in the same state as the backups for RDS; a destroy takes a final snapshot, but 30 days later we have a snapshot that isn't accessible because the KMS key has been deleted. Now I could go add a feature to the AWS Terraform provider so it just doesn't make the API request for the KMS key deletion (so the deletion never gets scheduled), but I doubt it would be accepted.

@ilulillirillion

This issue STILL being open is not surprising to me, and it is indicative of why I've never been able to get any of my teams to really embrace Terraform. It's oddities like this that make the tool very difficult to use and, frankly, quite often get in the way of getting any work done. Terraform needs to stop being so opinionated and give users simple CLI configurability to make decisions.

@ArbitraryCritter

ArbitraryCritter commented Dec 9, 2024

I mostly disagree that Terraform is too opinionated; for me, a lot of the value of Terraform is in its opinionated nature.

But I do think the lack of this feature is forcing developers into splitting their code into a module structure, which is in some cases an anti-pattern.
I generally don't think creating KMS keys, certificates, some DNS records, etc. separately from the resources they serve is useful, but you're often forced into it this way.

I suspect the reason for the lack of this feature is that some internals inside Terraform might make this harder than it seems, but I don't know, and I would really appreciate a short rundown if someone has one. Converting a "delete" into a "forget" seems like a simple concept, but it might cause some complex problems in the Terraform vs. Terraform-provider relationship, though I cannot see how.
