Incorrect (and weird) cycle on destroy #21662
Comments
Hi @martaver! Sorry for this confusing behavior. I suspect the cycle is being detected in the "apply destroy" graph rather than in the "plan destroy" graph; if so, that would explain why you can't see the cycles in the plan-destroy graph output.
When given a plan file, terraform graph renders the graph that would be used to apply that plan, so rendering the graph from a saved destroy plan should make the cycle visible. It looks like you may also have requested that |
Hi mate! Thanks for the quick response. I did get a different graph with the commands you suggested, but still no cycles. digraph source:
(The graph image is kinda big.)
Also, I'm using the default
It's weird that removing the |
Sorry for the confusion about
An oddity I notice here is that the
I suspect this may be a bug in
When managed resources are destroyed, their dependency edges are inverted to reflect that we must destroy the thing that depends on something before the thing it depends on. I suspect if we could get a rendering of the graph that
In order to see the actual graph that was causing this problem, I think we will need to review the full trace log for the destroy run, as the original issue template requested:
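A hedged sketch (not part of the original comment) of one way to produce that trace log; the log file name here is arbitrary:

```sh
# Capture a full trace-level log of the destroy run to a file for review.
TF_LOG=trace TF_LOG_PATH=destroy-trace.log terraform destroy -auto-approve
```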
In that trace output there will be lines including the text "Completed graph transform ... with new graph:" which are printed as Terraform gradually constructs the graph. I'll be looking for the last instance of a line like that before the cycle error was printed, so if you'd prefer not to share the whole log and are able to extract just that log entry (which will include underneath it a list of nodes in the graph, and indented edges), that would work too. Here's an example of what that log entry looks like in a local test configuration I had handy:
|
I was able to reproduce this with a lightly-modified set of configuration files where I replaced the GCP data sources with
With it reproduced, I was able to try some other things. The oddest thing I noticed is that it seems to work if I do the plan and apply as separate steps, like this:
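The exact commands were lost in this extraction, but based on the later comments in this thread the separate steps were along the lines of:

```sh
# Plan the destroy and save it, then apply that saved plan as a second step.
terraform plan -destroy -out=tfplan
terraform apply tfplan
```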
So it seems like either something is incorrectly persisting between the plan and apply steps implied by terraform destroy, or terraform destroy is constructing its graph differently than applying a saved destroy plan does. |
I have narrowed the problem down to one of the graph transforms. My next step is to review the implementation of that transform and try to first understand why it is taking actions that seem to create a cycle, and then from there decide whether to remove it entirely (because in practice not having it there seems to make things work better, at least for some contrived examples) or fix it so that it correctly does whatever it is trying to do. |
Really appreciate the stream-of-consciousness here, btw... interesting to hear about the guts of terraform. Let me know if there's anything I can do to help your search! |
Interestingly this issue also drew us back to a little bit of technical debt we've been carrying since some graph builder architecture changes in Terraform 0.8: in principle the apply graph is supposed to be built entirely from the saved (either in memory or on disk) plan, but because of some historical details of how output values are implemented there remained one detail where the apply phase was checking whether the command is terraform destroy.
The Terraform 0.12 refactoring work left us in a good spot where it seems like we should be able to now pay down this technical debt and make output values behave like everything else: record what we intend to do in the plan (which, for output values, would include whether they are to be removed during destroy).
I made some good progress on this yesterday but there are some remaining details to clean up. Once we have a candidate change for this we'll make a decision about whether it seems low-risk enough to make that change now and fix this issue as a side-effect. If the change seems too invasive (which, so far, it doesn't 🤞) we may need to find a more tactical solution to this problem for now and return to this in a later major release. |
Thanks for looking into this. I have something similar which I believe is caused by the same thing. Code here: https://stackoverflow.com/questions/56548780/terraform-cycle-when-altering-a-count |
Same issue with 12.2. |
Unfortunately this issue turned out to be a bit of a "can of worms": the behavior that is causing this (which runs only in terraform destroy mode specifically, not when applying a saved destroy plan, due to some historical technical debt) is there to accommodate destroy-time provisioners, which would otherwise tend to cause cycles whenever a destroy-time provisioner refers to something else in the configuration: destruction of resources happens in reverse dependency order, but destroy-time provisioner references are normal "forward" dependencies, and so they tend to point in the opposite direction to the other destroy dependencies and create cycles.
To mitigate that, the terraform destroy command takes some special post-processing steps on the final graph to avoid the possibility of those cycles, which then in turn causes the problem covered by this issue as a side-effect. As far as I can tell this issue has actually been in Terraform for a while now, but Terraform 0.12 has made it easier to hit because a reference to a whole module creates a dependency for every output of the module, and so it's much easier to inadvertently create a dependency that contributes to a cycle.
We can't break destroy-time provisioners in a minor release to fix this, so we'll need to try to find a different approach that can support both of these possibilities at once. We have some ideas, but the resulting changeset (currently brewing as part of #21762) has grown much larger than I originally hoped, so we unfortunately need to put it on hold for the moment so we can work on some lower-risk bugfixes instead and then return to this issue later when we have more bandwidth to spend thinking through the best design to address it without regressing other features.
In the meantime, using the two-step destroy with terraform plan -out=tfplan && terraform apply tfplan should allow this to work as long as the configuration doesn't also include destroy-time provisioners. (If it does, then I expect separate plan and apply would cause cycles, due to that approach having the opposite problem from the one this bug represents.)
Sorry there isn't a more straightforward fix for this one!
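For illustration only (this example is not from the original comment, and the resource names are made up), the kind of destroy-time provisioner reference being described looks roughly like this:

```hcl
resource "null_resource" "controller" {
}

resource "null_resource" "worker" {
  provisioner "local-exec" {
    when = destroy

    # Referring to another object from a destroy-time provisioner creates a
    # normal "forward" dependency, pointing the opposite way from the
    # inverted destroy-ordering edges, which is how cycles can arise.
    # Terraform 0.12 allowed this reference; later versions restrict
    # destroy-time provisioners to attributes of their own resource.
    command = "echo deregistering from ${null_resource.controller.id}"
  }
}
```
|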
Thanks Martin,
Appreciate the in-depth dive on this.
Looking forward to 0.12's promising progress!
|
We are doing the two step destroy but sadly still having the same issue. |
I forgot to note here that the two-step destroy workaround also still fails with the same cycle for us. |
I had the same issue when destroying an AWS EKS module cluster with plan -destroy && apply. |
Also seeing this in 0.12.6 (with two-step destroy) |
@apparentlymart Any updates? This continues to be an issue in Terraform 0.12.8. The two-step method is also not an option for Terraform Enterprise customers when hooked up with source control. |
We are facing this issue at a client as well - Terraform v0.12.7 |
The two-step option doesn't work for me in Terraform v0.12.8 either, but the same code works with a one-step destroy. I was able to isolate the issue to a count-based resource inside a module. This is tough to deal with for a dynamic resource. |
This fails in Terraform Cloud as well, on v0.12.7 and v0.12.8. |
For now I just use a destroy.bash script to destroy things in the proper order (with the -target flag), because the cycle error on destroy is too easy to hit when modules depend on each other. I'll be watching this thread for sure :)
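A hedged sketch of what such a script might look like (the module names here are made up, not taken from this comment):

```sh
#!/usr/bin/env bash
# Destroy the dependent modules first with -target, then run a final full
# destroy for whatever remains.
set -euo pipefail
terraform destroy -target=module.app -auto-approve
terraform destroy -target=module.network -auto-approve
terraform destroy -auto-approve
```
|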
Two-step destroy fails for me on 0.12.9 with the following setup:
Example (module's code):

data "aws_subnet" "this" {
  provider = aws.local

  ### This fails
  for_each          = toset(["${local.region}a"])
  availability_zone = each.key

  ### This works
  # for_each          = toset(["${data.aws_region.this.name}a"])
  # availability_zone = each.key

  ### This works
  # for_each          = toset(["a"])
  # availability_zone = "${local.region}${each.key}"
}

data "aws_region" "this" {
  provider = aws.local
}

locals {
  region = data.aws_region.this.name
}

Error:
|
We are also running into this issue. I was able to come up with a minimal configuration to reproduce it and experiment with workarounds:

locals {
  l = data.template_file.d.rendered
}

data "template_file" "d" {
  template = "true"
}

resource "null_resource" "a" {
  count = local.l ? 1 : 0
}

One-step destroy works fine, two-step destroy produces this error:
So as in the comment above, when count depends on a local which depends on a data source, it causes this problem. Removing the local is a potential workaround; putting the data source reference directly in the count rather than through a local fixes it. Another potential workaround would be to avoid depending on data sources or resources in count at all. Edit: I just realized that I basically just repeated the last comment above. I was working on this before I saw the comment. Great minds think alike I guess!
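As a sketch of that workaround against the minimal configuration above (this exact form isn't in the comment, but follows directly from it):

```hcl
data "template_file" "d" {
  template = "true"
}

resource "null_resource" "a" {
  # Referencing the data source directly in count, instead of going through
  # the local, avoided the destroy cycle in this reproduction.
  count = data.template_file.d.rendered ? 1 : 0
}
```
|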
I am also having this issue. I'm using the
I get a cyclical dependency error when I attempt to reduce the count from 3 to 2. This should destroy one of the
I captured this partial dependency graph from the debug log output of
The cycle is
In this case, I can manually run
first, followed by the full
My Terraform version:
|
Same issue here on Terraform 0.12.9. When I try to just
I'm still digging to see if I can find anything else... my graph doesn't show any cycle.
I'm kinda lost at the moment. |
Same issue here on Terraform 0.12.9; neither the one-step nor the two-step destroy works.
I am using counts inside modules that depend on arrays being passed in. The strange thing is that this was working fine for a while as I did a lot of testing and tearing down.
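A hedged sketch of that pattern (names are illustrative, not from this comment): a module whose resource count is driven by a list passed in by the caller.

```hcl
# modules/example/variables.tf
variable "items" {
  type = list(string)
}

# modules/example/main.tf
resource "null_resource" "per_item" {
  # One instance per element of the list the root module passes in.
  count = length(var.items)
}
```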
I got it to destroy with:
|
Same error even after upgrading to v0.12.13. |
Seems to be fixed in 0.12.15. Tested on @meyertime's example:
On 0.12.13 the two-step destroy fails with the cycle error; on 0.12.15 the two-step destroy works without any issues. More complicated code works as well. |
That's great news! I just tested with versions 0.12.14 and 0.12.15. Both of them are working with the two-step destroy for me now. |
Thanks for confirming that this behavior has improved in 0.12.14, @aliusmiles and @edli2. It's likely that either #22937 or #22976 was responsible for the changed behavior. We know that there are still some remaining cases that can lead to cycles, so if you find yourself with a similar error message or situation after upgrading to Terraform 0.12.13 or later please open a new issue and complete the issue template so that we can gather a fresh set of reproduction steps against the improved graph construction behavior. The changes linked above have invalidated the debugging work that everyone did above in this issue by changing the graph shape, so we're going to lock this issue just to reinforce that any situations with similar symptoms will need to be reproduced and debugged again in a new issue against the latest versions of Terraform. Thanks for all the help in digging into this issue, everyone! |
Terraform Version
Terraform Configuration Files
main.tf in my root provider:
Here's module 'organisation-info':
Then module 'stack-info':
And finally, the 'project-info' module:
Debug Output
After doing terraform destroy -auto-approve, I get:
And terraform graph -verbose -draw-cycles -type=plan-destroy gives me this graph:
Source:
Crash Output
No crash output...
Expected Behavior
The idea is to use modules at the org, project and stack levels to set up naming conventions that can be re-used across all resources. Organisation-info loads organisation info, project-info loads project info, and stack-info determines which project to target based on the current workspace.
I have omitted a bunch of other logic in the modules in order to keep them clean for this issue.
According to terraform there are no cycles, and destroy should work fine.
Actual Behavior
We get the cycle I posted above, even though terraform shows no cycles.
Steps to Reproduce
1. Set up modules organisation-info, project-info, and stack-info as shown above.
2. terraform init
3. terraform destroy (it doesn't seem to matter if you've applied first)
Additional Context
The weird thing is that if I comment out this output in stack-info, the cycle stops.
This seems really weird... I neither understand why outputting a variable should make a difference, nor why I'm getting a cycle error when there's no cycle.
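The output in question was not captured in this extraction; purely as a hypothetical illustration, a pass-through output of this shape inside stack-info is the kind of thing being described:

```hcl
output "project" {
  # Hypothetical example: simply re-exports a variable. Commenting out an
  # output like this is what reportedly made the cycle go away.
  value = var.project
}
```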
Oddly, terraform plan -destroy does not reveal the cycle, only terraform destroy.
My spidey sense tells me evil is afoot.
References
None found.