
Incorrect (and weird) cycle on destroy #21662

Closed
martaver opened this issue Jun 9, 2019 · 30 comments
Labels: bug, core, v0.12 (Issues (primarily bugs) reported against v0.12 releases)

Comments

martaver commented Jun 9, 2019

Terraform Version

Terraform v0.12.1

Terraform Configuration Files

main.tf in my root provider:

provider "google" {}

module "organisation_info" {
  source           = "../../modules/organisation-info"
  top_level_domain = "smoothteam.fi"
  region           = "us-central1"
}

module "stack_info" {
  source            = "../../modules/stack-info"
  organisation_info = "${module.organisation_info}"
}

Here's module 'organisation-info':

variable "top_level_domain" {}
variable "region" {}

data "google_organization" "organization" {
  domain = "${var.top_level_domain}"
}

locals {
  organization_id    = "${data.google_organization.organization.id}"
  ns = "${replace("${var.top_level_domain}", ".", "-")}-"
}

output "organization_id" {
  value = "${local.organization_id}"
}

output "ns" {
  value = "${local.ns}"
}

Then module 'stack-info':

variable "organisation_info" {
  type        = any
  description = "The organisation-scope this environment exists in."
}

module "project_info" {
  source            = "../project-info"
  organisation_info = "${var.organisation_info}"
  name              = "${local.project}"
}

locals {
  # Use the 'default' workspace for the 'staging' stack.
  name = "${terraform.workspace == "default" ? "staging" : terraform.workspace}"
  # In the 'production' stack, target the production project. Otherwise, target the staging project.
  project = "${local.name == "production" ? "production" : "staging"}"
}

output "project" {
  value = "${module.project_info}" # COMMENTING THIS OUTPUT REMOVES THE CYCLE.
}

And finally, the 'project-info' module:

variable "organisation_info" {
  type        = any
}
variable "name" {}

data "google_project" "project" {
  project_id = "${local.project_id}"
}

locals {
  project_id = "${var.organisation_info.ns}${var.name}"
}

output "org" {
  value = "${var.organisation_info}"
}

Debug Output

After doing terraform destroy -auto-approve, I get:

Error: Cycle: module.stack_info.module.project_info.local.project_id, module.stack_info.output.project, module.stack_info.module.project_info.data.google_project.project (destroy), module.organisation_info.data.google_organization.organization (destroy), module.stack_info.var.organisation_info, module.stack_info.module.project_info.var.organisation_info, module.stack_info.module.project_info.output.org

And terraform graph -verbose -draw-cycles -type=plan-destroy gives me this graph:
Source:

digraph {
        compound = "true"
        newrank = "true"
        subgraph "root" {
                "[root] module.organisation_info.data.google_organization.organization" [label = "module.organisation_info.data.google_organization.organization", shape = "box"]
                "[root] module.stack_info.module.project_info.data.google_project.project" [label = "module.stack_info.module.project_info.data.google_project.project", shape = "box"]
                "[root] provider.google" [label = "provider.google", shape = "diamond"]
                "[root] module.organisation_info.data.google_organization.organization" -> "[root] module.stack_info.module.project_info.data.google_project.project"
                "[root] module.organisation_info.data.google_organization.organization" -> "[root] provider.google"
                "[root] module.stack_info.module.project_info.data.google_project.project" -> "[root] provider.google"
        }
}

Crash Output

No crash output...

Expected Behavior

The idea is to use modules at the organisation, project and stack levels to set up naming conventions that can be re-used across all resources. organisation-info loads organisation-level info, project-info loads project-level info, and stack-info determines which project to target based on the current workspace.

I have omitted a bunch of other logic in the modules in order to keep them clean for this issue.

According to terraform graph there are no cycles, so destroy should work fine.

Actual Behavior

We get the cycle error I posted above, even though terraform graph shows no cycles.

Steps to Reproduce

  1. Set up the three modules, organisation-info, project-info, and stack-info as shown above.
  2. Set up a root provider as shown above.
  3. terraform init
  4. terraform destroy (it doesn't seem to matter if you've applied first)

Additional Context

The weird thing is that if I comment out this output in stack-info, the cycle stops:

output "project" {
  value = "${module.project_info}" # IT'S THIS... COMMENTING THIS OUT REMOVES THE CYCLE.
}

This seems really weird... I neither understand why outputting a variable should make a difference, nor why I'm getting a cycle error when there's no cycle.

Oddly, terraform plan -destroy does not reveal the cycle, only terraform destroy.

My spidey sense tells me evil is afoot.

References

None found.

@apparentlymart (Contributor)

Hi @martaver! Sorry for this confusing behavior.

I suspect the cycle is being detected in the "apply destroy" graph rather than in the "plan destroy" graph; if so, that would explain why you can't see the cycles in the terraform graph output.

Although terraform graph is documented as taking an optional configuration directory as an argument, it can also be given a saved plan file instead, and so this sequence of commands may yield some more information about what's going on here:

terraform plan -destroy -out=tfplan
terraform graph -verbose -draw-cycles tfplan

When given a plan file, terraform graph returns the graph to apply the saved plan, rather than the graph used to produce the plan in the first place.

It looks like you may also have requested that terraform graph collapse modules into a single node, rather than showing all of the objects inside. I think the cycle rendering may not function correctly in that case, because a module is not a graph node itself and thus Terraform probably can't correlate the addresses shown in the cycle set with the nodes shown in the graph output in that case. (I'm not sure about this off the top of my head, but just trying to think through reasons why the cycles might not show up.)

martaver commented Jun 9, 2019

Hi mate! Thanks for the quick response.

I did get a different graph with the commands you suggested, but still no cycles.

digraph source:

digraph {
        compound = "true"
        newrank = "true"
        subgraph "root" {
                "[root] module.organisation_info.data.google_organization.organization" [label = "module.organisation_info.data.google_organization.organization", shape = "box"]
                "[root] module.organisation_info.local.ns" [label = "module.organisation_info.local.ns", shape = "note"]
                "[root] module.organisation_info.local.organization_id" [label = "module.organisation_info.local.organization_id", shape = "note"]
                "[root] module.organisation_info.output.ns" [label = "module.organisation_info.output.ns", shape = "note"]
                "[root] module.organisation_info.output.organization_id" [label = "module.organisation_info.output.organization_id", shape = "note"]
                "[root] module.organisation_info.var.region" [label = "module.organisation_info.var.region", shape = "note"]
                "[root] module.organisation_info.var.top_level_domain" [label = "module.organisation_info.var.top_level_domain", shape = "note"]
                "[root] module.stack_info.local.name" [label = "module.stack_info.local.name", shape = "note"]
                "[root] module.stack_info.local.project" [label = "module.stack_info.local.project", shape = "note"]
                "[root] module.stack_info.module.project_info.data.google_project.project" [label = "module.stack_info.module.project_info.data.google_project.project", shape = "box"]
                "[root] module.stack_info.module.project_info.local.project_id" [label = "module.stack_info.module.project_info.local.project_id", shape = "note"]
                "[root] module.stack_info.module.project_info.output.org" [label = "module.stack_info.module.project_info.output.org", shape = "note"]
                "[root] module.stack_info.module.project_info.var.name" [label = "module.stack_info.module.project_info.var.name", shape = "note"]
                "[root] module.stack_info.module.project_info.var.organisation_info" [label = "module.stack_info.module.project_info.var.organisation_info", shape = "note"]
                "[root] module.stack_info.output.project" [label = "module.stack_info.output.project", shape = "note"]
                "[root] module.stack_info.var.organisation_info" [label = "module.stack_info.var.organisation_info", shape = "note"]
                "[root] provider.google" [label = "provider.google", shape = "diamond"]
                "[root] provider.google (close)" [label = "provider.google (close)", shape = "diamond"]
                "[root] meta.count-boundary (EachMode fixup)" -> "[root] module.stack_info.output.project"
                "[root] module.organisation_info.data.google_organization.organization" -> "[root] module.organisation_info.var.top_level_domain"
                "[root] module.organisation_info.data.google_organization.organization" -> "[root] provider.google"
                "[root] module.organisation_info.local.ns" -> "[root] module.organisation_info.var.top_level_domain"
                "[root] module.organisation_info.local.organization_id" -> "[root] module.organisation_info.data.google_organization.organization"
                "[root] module.organisation_info.output.ns" -> "[root] module.organisation_info.local.ns"
                "[root] module.organisation_info.output.organization_id" -> "[root] module.organisation_info.local.organization_id"
                "[root] module.stack_info.local.project" -> "[root] module.stack_info.local.name"
                "[root] module.stack_info.module.project_info.data.google_project.project" -> "[root] module.stack_info.module.project_info.local.project_id"
                "[root] module.stack_info.module.project_info.local.project_id" -> "[root] module.stack_info.module.project_info.var.name"
                "[root] module.stack_info.module.project_info.local.project_id" -> "[root] module.stack_info.module.project_info.var.organisation_info"
                "[root] module.stack_info.module.project_info.output.org" -> "[root] module.stack_info.module.project_info.var.organisation_info"
                "[root] module.stack_info.module.project_info.var.name" -> "[root] module.stack_info.local.project"
                "[root] module.stack_info.module.project_info.var.organisation_info" -> "[root] module.stack_info.var.organisation_info"
                "[root] module.stack_info.output.project" -> "[root] module.stack_info.module.project_info.data.google_project.project"
                "[root] module.stack_info.output.project" -> "[root] module.stack_info.module.project_info.output.org"
                "[root] module.stack_info.var.organisation_info" -> "[root] module.organisation_info.output.ns"
                "[root] module.stack_info.var.organisation_info" -> "[root] module.organisation_info.output.organization_id"
                "[root] module.stack_info.var.organisation_info" -> "[root] module.organisation_info.var.region"
                "[root] provider.google (close)" -> "[root] module.stack_info.module.project_info.data.google_project.project"
                "[root] root" -> "[root] meta.count-boundary (EachMode fixup)"
                "[root] root" -> "[root] provider.google (close)"
        }
}

(graph image is kinda big)

Also, I'm using the default module-depth, which should be -1 (infinite). Same result as if I set it to 100. No cycles.

It's weird that removing the project output from stack-info fixes the cycle. Any idea on why this might be the case?

@apparentlymart (Contributor)

Sorry for the confusion about -module-depth... I wasn't reading the graph output thoroughly and confused myself. Indeed, those are fully-expanded modules; I think I was just thrown off by how few nodes there were in that graph.

An oddity I notice here is that the terraform graph output doesn't include a node for module.stack_info.module.project_info.data.google_project.project (destroy) or module.organisation_info.data.google_organization.organization (destroy), which suggests that terraform graph still isn't rendering the same graph that terraform destroy is building. 🤔

I suspect this may be a bug in terraform graph, but I'm not sure exactly what's going wrong with it. I still went through and manually highlighted the nodes for the objects mentioned in the cycle though, and got this result:

(Image: visual rendering of the relationships described in dot format in the previous comment, with the nodes from the cycle highlighted.)

When managed resources are destroyed, their dependency edges are inverted to reflect that we must destroy an object that depends on something before the object it depends on. I suspect that if we could get a rendering of the graph terraform destroy is actually using, we'd see that these inverted dependency edges are causing the problem.
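
As a small illustration of that inversion (a hypothetical sketch using the null provider, not taken from the configuration in this issue):

resource "null_resource" "base" {
  triggers = {
    name = "base"
  }
}

resource "null_resource" "dependent" {
  # Forward dependency: at create time, "base" is created before "dependent".
  triggers = {
    base_name = null_resource.base.triggers["name"]
  }
  # At destroy time the edge is inverted: "dependent" must be destroyed
  # before "base".
}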

In order to see the actual graph that was causing this problem, I think we will need to review the full trace log for the destroy run, as the original issue template requested:

Full debug output can be obtained by running Terraform with the environment variable TF_LOG=trace. Please create a GitHub Gist containing the debug output. Please do not paste the debug output in the issue, since debug output is long.

Debug output may contain sensitive information. Please review it before posting publicly, and if you are concerned feel free to encrypt the files using the HashiCorp security public key.

In that trace output there will be lines including the text "Completed graph transform ... with new graph:" which are printed as Terraform gradually constructs the graph. I'll be looking for the last instance of a line like that before the cycle error was printed, so if you'd prefer not to share the whole log and are able to extract just that log entry (which will include underneath it a list of nodes in the graph, with indented edges), that would be enough.

Here's an example of what that log entry looks like in a local test configuration I had handy:

2019/06/10 10:27:47 [TRACE] Completed graph transform *terraform.TransitiveReductionTransformer with new graph:
meta.count-boundary (EachMode fixup) - *terraform.NodeCountBoundary
  null_resource.test - *terraform.NodePlannableResource
null_resource.test - *terraform.NodePlannableResource
  provider.null - *terraform.NodeApplyableProvider
provider.null - *terraform.NodeApplyableProvider
provider.null (close) - *terraform.graphNodeCloseProvider
  null_resource.test - *terraform.NodePlannableResource
root - terraform.graphNodeRoot
  meta.count-boundary (EachMode fixup) - *terraform.NodeCountBoundary
  provider.null (close) - *terraform.graphNodeCloseProvider
------

@apparentlymart (Contributor)

I was able to reproduce this with a lightly-modified set of configuration files where I replaced the GCP data sources with null_data_source:

Error: Cycle: module.stack_info.module.project_info.output.org, module.stack_info.output.project, module.stack_info.module.project_info.data.null_data_source.project (destroy), module.organisation_info.data.null_data_source.organization (destroy), module.stack_info.var.organisation_info, module.stack_info.module.project_info.var.organisation_info, module.stack_info.module.project_info.local.project_id

With it reproduced, I was able to try some other things. The oddest thing I noticed is that it seems to work if I do the plan and apply as separate steps, like this:

terraform plan -destroy -out=tfplan
terraform apply tfplan

So it seems like either something is incorrectly persisting between the plan and apply steps implied by terraform destroy, or the destroy command is configuring the context differently enough that the graph is constructed in a different way. Either way that is incorrect behavior, so I'm focusing my next step of investigation on trying to discover how and why terraform destroy is behaving differently.

@apparentlymart (Contributor)

I have narrowed the problem down to one of the graph transforms: DestroyValueReferenceTransformer. That transform is incorrectly activated only when running terraform destroy and not when applying a destroy plan, and whenever it is activated it creates a graph containing a cycle.

My next step is to review the implementation of that transform and try to first understand why it is taking actions that seem to create a cycle, and then from there decide whether to remove it entirely (because in practice not having it there seems to make things work better, at least for some contrived examples) or fix it so that it correctly does whatever it is trying to do.

apparentlymart self-assigned this Jun 12, 2019
@martaver (Author)

Really appreciate the stream-of-consciousness here, btw... interesting to hear about the guts of terraform. Let me know if there's anything I can do to help your search!

@apparentlymart (Contributor)

Interestingly, this issue also drew us back to a little bit of technical debt we've been carrying since some graph builder architecture changes in Terraform 0.8: in principle the apply graph is supposed to be built entirely from the saved (either in memory or on disk) plan, but because of some historical details of how output values are implemented, there remained one place where the apply phase was checking whether the command is terraform destroy in order to modify some behaviors.

The Terraform 0.12 refactoring work left us in a good spot where it seems like we should now be able to pay down this technical debt and make output values behave like everything else: record what we intend to do in the plan (which, for terraform destroy, is a plan to destroy everything) and then act generically on that plan in the apply phase by noting whether the planned action for each output is "update" or "delete".

I made some good progress on this yesterday but there are some remaining details to clean up. Once we have a candidate change for this we'll make a decision about whether it seems low-risk enough to make that change now and fix this issue as a side-effect. If the change seems too invasive (which, so far, it doesn't 🤞) we may need to find a more tactical solution to this problem for now and return to this in a later major release.

@afandian

Thanks for looking into this. I have something similar which I believe is caused by the same thing. Code here: https://stackoverflow.com/questions/56548780/terraform-cycle-when-altering-a-count

@martaver (Author)

Same issue with 0.12.2.


apparentlymart commented Jun 18, 2019

Unfortunately this issue turned out to be a bit of a "can of worms": the behavior that is causing this (which runs only in terraform destroy mode specifically, not when applying a saved destroy plan, due to some historical technical debt) is there to accommodate destroy-time provisioners, which would otherwise tend to cause cycles whenever a destroy-time provisioner refers to something else in the configuration: destroying resources happens in reverse dependency order, but destroy-time provisioner references are normal "forward" dependencies, and so they tend to point in the opposite direction from the other destroy dependencies and create cycles.
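
For context, a destroy-time provisioner that refers to another object in the configuration looks roughly like this (a minimal hypothetical sketch using the null provider, not taken from any configuration in this thread):

resource "null_resource" "database" {
  triggers = {
    name = "example-db"
  }
}

resource "null_resource" "app" {
  provisioner "local-exec" {
    # Runs only at destroy time, but the reference to null_resource.database
    # is still a normal "forward" dependency, pointing the opposite way from
    # the other destroy-ordering edges.
    when    = destroy
    command = "echo 'deregistering app from ${null_resource.database.triggers["name"]}'"
  }
}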

To mitigate that, the terraform destroy command takes some special post-processing steps on the final graph to avoid the possibility of those cycles, which then in turn causes the problem covered by this issue as a side-effect. As far as I can tell this issue has actually been in Terraform for a while now, but Terraform 0.12 has made it easier to hit because a reference to a whole module creates a dependency for every output of the module, and so it's much easier to inadvertently create a dependency that contributes to a cycle.

We can't break destroy-time provisioners in a minor release to fix this, so we'll need to try to find a different approach that can support both of these possibilities at once. We have some ideas but the resulting changeset (currently brewing as part of #21762) has grown much larger than I originally hoped, so we unfortunately need to put it on hold for the moment so we can work on some lower-risk bugfixes instead and then return to this issue later when we have more bandwidth to spend thinking through the best design to address it without regressing other features.

In the meantime, using the two-step destroy with terraform plan -destroy -out=tfplan && terraform apply tfplan should allow this to work, as long as the configuration doesn't also include destroy-time provisioners. (If it does, then I expect the separate plan and apply would cause cycles, since that path has the opposite problem to the one this bug represents.)

Sorry there isn't a more straightforward fix for this one!

martaver commented Jul 1, 2019 via email

@drissamri

We are doing the two-step destroy but sadly still hitting the same issue.

@martaver (Author)

I forgot to note here that the two-step destroy workaround also still fails with the same cycle for us.


radepal commented Jul 22, 2019

I had the same issue destroying an AWS EKS module cluster with plan -destroy && apply (Terraform 0.12.5).


alewando commented Aug 2, 2019

Also seeing this in 0.12.6 (with two-step destroy)

hashibot added the v0.12 label ("Issues (primarily bugs) reported against v0.12 releases") Aug 22, 2019
@eytanhanig

@apparentlymart Any updates? This continues to be an issue in Terraform 0.12.8.

The two-step method is also not an option for Terraform Enterprise customers when hooked up with source control.

@zghafari

We are facing this issue at a client as well - Terraform v0.12.7


edli2 commented Sep 12, 2019

The two-step option doesn't work for me either in Terraform v0.12.8, but the same code works with a one-step destroy. I was able to isolate the issue to a counted resource inside a module, which is tough to work around when the resource is dynamic.

@Aaron-ML

This fails in Terraform Cloud as well on v0.12.7 and v0.12.8.


JnMik commented Sep 20, 2019

For now I just use a destroy.bash script to destroy things in the proper order (with the -target flag), because the cycle error on destroy is too easy to hit when using module inter-dependencies. I'll be watching this thread for sure :)

@tbondarchuk

Two-step destroy fails for me on 0.12.9 with the following setup:

module: data => local variable => for_each|count => data

Example (module's code):

data "aws_subnet" "this" {
  provider = aws.local

  ### This fails
  for_each          = toset(["${local.region}a"])
  availability_zone = each.key

  ### This works
  # for_each          = toset(["${data.aws_region.this.name}a"])
  # availability_zone = each.key

  ### This works
  # for_each          = toset(["a"])
  # availability_zone = "${local.region}${each.key}"
}

data "aws_region" "this" {
  provider = aws.local
}

locals {
  region = data.aws_region.this.name
}

Error:

Error: Cycle: module.test.local.region, module.test.data.aws_subnet.this (prepare state), module.test.data.aws_subnet.this["us-east-1a"] (destroy), module.test.data.aws_region.this (destroy)


meyertime commented Sep 20, 2019

We are also running into this issue. I was able to come up with a minimal configuration to reproduce it and experiment with workarounds:

locals {
    l = data.template_file.d.rendered
}

data "template_file" "d" {
    template = "true"
}

resource "null_resource" "a" {
    count = local.l ? 1 : 0
}

One-step destroy works fine, two-step destroy produces this error:

Error: Cycle: null_resource.a (prepare state), null_resource.a[0] (destroy), data.template_file.d (destroy), local.l

So, as in the comment above, when count depends on a local that depends on a data source, this problem occurs. Removing the local is a potential workaround: putting the data source reference directly in the count rather than going through a local fixes it (sketched below). Another potential workaround would be to not depend on data sources or resources in count parameters.
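
Applied to the minimal configuration above, that workaround looks roughly like this (a sketch of the same config with the local removed):

data "template_file" "d" {
    template = "true"
}

resource "null_resource" "a" {
    # Referencing the data source directly in count, instead of going through
    # local.l, removes the local from the chain that triggers the cycle.
    count = data.template_file.d.rendered ? 1 : 0
}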

Edit: I just realized that I basically just repeated the last comment above. I was working on this before I saw the comment. Great minds think alike I guess!


armsnyder commented Oct 1, 2019

I am also having this issue. I'm using the aws_lb_target_group_attachment resource in a different module than the aws_instance resource.

I get a cyclical dependency error when I attempt to reduce the count from 3 to 2.

This should destroy one of the aws_instance resources along with one of the aws_lb_target_group_attachment resources, but there is a cycle.

I captured this partial dependency graph from the debug log output of terraform apply, which shows the cycle:

module.rancher.module.kubernetes-metal.aws_instance.nodes[2] (destroy)
  module.rancher.module.kubernetes-metal.aws_instance.nodes (prepare state)
  module.rancher.module.rancher-ha-lb.aws_lb_target_group_attachment.tcp_443[2] (destroy)
module.rancher.module.kubernetes-metal.output.instance_ids
  module.rancher.module.kubernetes-metal.aws_instance.nodes (prepare state)
  module.rancher.module.kubernetes-metal.aws_instance.nodes[2] (destroy)
module.rancher.module.rancher-ha-lb.aws_lb_target_group_attachment.tcp_443 (prepare state)
  module.rancher.module.rancher-ha-lb.var.node_ids
module.rancher.module.rancher-ha-lb.aws_lb_target_group_attachment.tcp_443[2] (destroy)
  module.rancher.module.rancher-ha-lb.aws_lb_target_group_attachment.tcp_443 (prepare state)
module.rancher.module.rancher-ha-lb.var.node_ids
  module.rancher.module.kubernetes-metal.output.instance_ids

The cycle is

module.rancher.module.kubernetes-metal.aws_instance.nodes[2] (destroy) ->
module.rancher.module.rancher-ha-lb.aws_lb_target_group_attachment.tcp_443[2] (destroy) ->
module.rancher.module.rancher-ha-lb.aws_lb_target_group_attachment.tcp_443 (prepare state) ->
module.rancher.module.rancher-ha-lb.var.node_ids ->
module.rancher.module.kubernetes-metal.output.instance_ids ->
module.rancher.module.kubernetes-metal.aws_instance.nodes[2] (destroy)

In this case, I can manually run

terraform destroy -target module.rancher.module.rancher-ha-lb.aws_lb_target_group_attachment.tcp_443[2]

first, followed by the full terraform apply in order to get around the issue.

My Terraform version:

Terraform v0.12.9
+ provider.aws v2.30.0
+ provider.local v1.3.0
+ provider.null v2.1.2
+ provider.random v2.2.1


muratso commented Oct 7, 2019

Same issue here on Terraform 0.12.9. A plain terraform destroy works like a charm, but terraform plan -destroy -out=destroy.tfplan followed by terraform apply destroy.tfplan gives me a cycle error.

Error: Cycle: module.gke_base.module.istio.helm_release.istio_crd (prepare state), module.gke_base.module.gke.google_container_node_pool.gke_nodes (destroy), module.gke_base.module.tiller.kubernetes_service_account.tiller (prepare state), module.gke_base.module.istio.null_resource.helm_rolebinding (destroy), module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller (prepare state), module.gke_base.module.gke.output.cluster_ca_certificate, module.gke_base.provider.kubernetes, module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller (destroy), module.gke_base.module.istio.null_resource.wait_istio_crd (destroy), module.gke_base.module.tiller.kubernetes_service_account.tiller (destroy), module.gke_base.module.gke.google_container_cluster.gke (destroy), module.gke_base.module.gke.output.cluster_endpoint, module.gke_base.provider.helm, module.gke_base.module.istio.helm_release.istio (prepare state), module.gke_base.module.istio.helm_release.istio (destroy), module.gke_base.module.istio.helm_release.istio_crd (destroy), module.gke_base.module.istio.null_resource.get_istio (destroy), module.gke_base.module.tiller.null_resource.wait_nodepool (destroy)

I'm still digging to see if I can find anything else... my graph doesn't show any cycle.

digraph {
	compound = "true"
	newrank = "true"
	subgraph "root" {
		"[root] module.gke_base.data.google_client_config.google-beta_current" [label = "module.gke_base.data.google_client_config.google-beta_current", shape = "box"]
		"[root] module.gke_base.data.google_client_config.google_current" [label = "module.gke_base.data.google_client_config.google_current", shape = "box"]
		"[root] module.gke_base.data.google_dns_managed_zone.avenuecode" [label = "module.gke_base.data.google_dns_managed_zone.avenuecode", shape = "box"]
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" [label = "module.gke_base.module.gke-vpc.google_compute_network.vpc", shape = "box"]
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" [label = "module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork", shape = "box"]
		"[root] module.gke_base.module.gke.google_container_cluster.gke" [label = "module.gke_base.module.gke.google_container_cluster.gke", shape = "box"]
		"[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes" [label = "module.gke_base.module.gke.google_container_node_pool.gke_nodes", shape = "box"]
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" [label = "module.gke_base.module.gke.google_project_service.cloudresourcemanager-api", shape = "box"]
		"[root] module.gke_base.module.gke.google_project_service.gke-api" [label = "module.gke_base.module.gke.google_project_service.gke-api", shape = "box"]
		"[root] module.gke_base.module.istio.helm_release.istio" [label = "module.gke_base.module.istio.helm_release.istio", shape = "box"]
		"[root] module.gke_base.module.istio.helm_release.istio_crd" [label = "module.gke_base.module.istio.helm_release.istio_crd", shape = "box"]
		"[root] module.gke_base.module.istio.null_resource.get_istio" [label = "module.gke_base.module.istio.null_resource.get_istio", shape = "box"]
		"[root] module.gke_base.module.istio.null_resource.helm_rolebinding" [label = "module.gke_base.module.istio.null_resource.helm_rolebinding", shape = "box"]
		"[root] module.gke_base.module.istio.null_resource.wait_istio_crd" [label = "module.gke_base.module.istio.null_resource.wait_istio_crd", shape = "box"]
		"[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller" [label = "module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller", shape = "box"]
		"[root] module.gke_base.module.tiller.kubernetes_service_account.tiller" [label = "module.gke_base.module.tiller.kubernetes_service_account.tiller", shape = "box"]
		"[root] module.gke_base.module.tiller.null_resource.wait_nodepool" [label = "module.gke_base.module.tiller.null_resource.wait_nodepool", shape = "box"]
		"[root] module.gke_base.provider.google" [label = "module.gke_base.provider.google", shape = "diamond"]
		"[root] module.gke_base.provider.google-beta" [label = "module.gke_base.provider.google-beta", shape = "diamond"]
		"[root] module.gke_base.provider.helm" [label = "module.gke_base.provider.helm", shape = "diamond"]
		"[root] module.gke_base.provider.kubernetes" [label = "module.gke_base.provider.kubernetes", shape = "diamond"]
		"[root] provider.null" [label = "provider.null", shape = "diamond"]
		"[root] module.gke_base.data.google_client_config.google-beta_current" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.data.google_client_config.google-beta_current" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.data.google_client_config.google-beta_current" -> "[root] module.gke_base.module.istio.null_resource.get_istio"
		"[root] module.gke_base.data.google_client_config.google-beta_current" -> "[root] module.gke_base.module.istio.null_resource.helm_rolebinding"
		"[root] module.gke_base.data.google_client_config.google-beta_current" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.data.google_client_config.google-beta_current" -> "[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller"
		"[root] module.gke_base.data.google_client_config.google-beta_current" -> "[root] module.gke_base.module.tiller.kubernetes_service_account.tiller"
		"[root] module.gke_base.data.google_client_config.google-beta_current" -> "[root] module.gke_base.provider.google-beta"
		"[root] module.gke_base.data.google_client_config.google_current" -> "[root] module.gke_base.provider.google"
		"[root] module.gke_base.data.google_dns_managed_zone.avenuecode" -> "[root] module.gke_base.provider.google"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.module.gke.google_container_cluster.gke"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.module.istio.null_resource.get_istio"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.module.istio.null_resource.helm_rolebinding"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.module.tiller.kubernetes_service_account.tiller"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.module.tiller.null_resource.wait_nodepool"
		"[root] module.gke_base.module.gke-vpc.google_compute_network.vpc" -> "[root] module.gke_base.provider.google"
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" -> "[root] module.gke_base.module.gke.google_container_cluster.gke"
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" -> "[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes"
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" -> "[root] module.gke_base.module.istio.null_resource.get_istio"
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" -> "[root] module.gke_base.module.istio.null_resource.helm_rolebinding"
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" -> "[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller"
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" -> "[root] module.gke_base.module.tiller.kubernetes_service_account.tiller"
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" -> "[root] module.gke_base.module.tiller.null_resource.wait_nodepool"
		"[root] module.gke_base.module.gke-vpc.google_compute_subnetwork.vpc_subnetwork" -> "[root] module.gke_base.provider.google"
		"[root] module.gke_base.module.gke.google_container_cluster.gke" -> "[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes"
		"[root] module.gke_base.module.gke.google_container_cluster.gke" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.gke.google_container_cluster.gke" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.module.gke.google_container_cluster.gke" -> "[root] module.gke_base.module.istio.null_resource.get_istio"
		"[root] module.gke_base.module.gke.google_container_cluster.gke" -> "[root] module.gke_base.module.istio.null_resource.helm_rolebinding"
		"[root] module.gke_base.module.gke.google_container_cluster.gke" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.gke.google_container_cluster.gke" -> "[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller"
		"[root] module.gke_base.module.gke.google_container_cluster.gke" -> "[root] module.gke_base.module.tiller.kubernetes_service_account.tiller"
		"[root] module.gke_base.module.gke.google_container_cluster.gke" -> "[root] module.gke_base.module.tiller.null_resource.wait_nodepool"
		"[root] module.gke_base.module.gke.google_container_cluster.gke" -> "[root] module.gke_base.provider.google-beta"
		"[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes" -> "[root] module.gke_base.module.istio.null_resource.get_istio"
		"[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes" -> "[root] module.gke_base.module.istio.null_resource.helm_rolebinding"
		"[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes" -> "[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller"
		"[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes" -> "[root] module.gke_base.module.tiller.kubernetes_service_account.tiller"
		"[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes" -> "[root] module.gke_base.module.tiller.null_resource.wait_nodepool"
		"[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes" -> "[root] module.gke_base.provider.google-beta"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.module.gke.google_container_cluster.gke"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.module.gke.google_project_service.gke-api"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.module.istio.null_resource.get_istio"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.module.istio.null_resource.helm_rolebinding"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.module.tiller.kubernetes_service_account.tiller"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.module.tiller.null_resource.wait_nodepool"
		"[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api" -> "[root] module.gke_base.provider.google"
		"[root] module.gke_base.module.gke.google_project_service.gke-api" -> "[root] module.gke_base.module.gke.google_container_cluster.gke"
		"[root] module.gke_base.module.gke.google_project_service.gke-api" -> "[root] module.gke_base.module.gke.google_container_node_pool.gke_nodes"
		"[root] module.gke_base.module.gke.google_project_service.gke-api" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.gke.google_project_service.gke-api" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.module.gke.google_project_service.gke-api" -> "[root] module.gke_base.module.istio.null_resource.get_istio"
		"[root] module.gke_base.module.gke.google_project_service.gke-api" -> "[root] module.gke_base.module.istio.null_resource.helm_rolebinding"
		"[root] module.gke_base.module.gke.google_project_service.gke-api" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.gke.google_project_service.gke-api" -> "[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller"
		"[root] module.gke_base.module.gke.google_project_service.gke-api" -> "[root] module.gke_base.module.tiller.kubernetes_service_account.tiller"
		"[root] module.gke_base.module.gke.google_project_service.gke-api" -> "[root] module.gke_base.module.tiller.null_resource.wait_nodepool"
		"[root] module.gke_base.module.gke.google_project_service.gke-api" -> "[root] module.gke_base.provider.google"
		"[root] module.gke_base.module.istio.helm_release.istio" -> "[root] module.gke_base.provider.helm"
		"[root] module.gke_base.module.istio.helm_release.istio_crd" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.istio.helm_release.istio_crd" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.istio.helm_release.istio_crd" -> "[root] module.gke_base.provider.helm"
		"[root] module.gke_base.module.istio.null_resource.get_istio" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.istio.null_resource.get_istio" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.module.istio.null_resource.get_istio" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.istio.null_resource.get_istio" -> "[root] provider.null"
		"[root] module.gke_base.module.istio.null_resource.helm_rolebinding" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.istio.null_resource.helm_rolebinding" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.module.istio.null_resource.helm_rolebinding" -> "[root] module.gke_base.module.istio.null_resource.get_istio"
		"[root] module.gke_base.module.istio.null_resource.helm_rolebinding" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.istio.null_resource.helm_rolebinding" -> "[root] provider.null"
		"[root] module.gke_base.module.istio.null_resource.wait_istio_crd" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.istio.null_resource.wait_istio_crd" -> "[root] provider.null"
		"[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller" -> "[root] module.gke_base.module.istio.null_resource.get_istio"
		"[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller" -> "[root] module.gke_base.module.istio.null_resource.helm_rolebinding"
		"[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller" -> "[root] module.gke_base.provider.kubernetes"
		"[root] module.gke_base.module.tiller.kubernetes_service_account.tiller" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.tiller.kubernetes_service_account.tiller" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.module.tiller.kubernetes_service_account.tiller" -> "[root] module.gke_base.module.istio.null_resource.get_istio"
		"[root] module.gke_base.module.tiller.kubernetes_service_account.tiller" -> "[root] module.gke_base.module.istio.null_resource.helm_rolebinding"
		"[root] module.gke_base.module.tiller.kubernetes_service_account.tiller" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.tiller.kubernetes_service_account.tiller" -> "[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller"
		"[root] module.gke_base.module.tiller.kubernetes_service_account.tiller" -> "[root] module.gke_base.provider.kubernetes"
		"[root] module.gke_base.module.tiller.null_resource.wait_nodepool" -> "[root] module.gke_base.module.istio.helm_release.istio"
		"[root] module.gke_base.module.tiller.null_resource.wait_nodepool" -> "[root] module.gke_base.module.istio.helm_release.istio_crd"
		"[root] module.gke_base.module.tiller.null_resource.wait_nodepool" -> "[root] module.gke_base.module.istio.null_resource.get_istio"
		"[root] module.gke_base.module.tiller.null_resource.wait_nodepool" -> "[root] module.gke_base.module.istio.null_resource.helm_rolebinding"
		"[root] module.gke_base.module.tiller.null_resource.wait_nodepool" -> "[root] module.gke_base.module.istio.null_resource.wait_istio_crd"
		"[root] module.gke_base.module.tiller.null_resource.wait_nodepool" -> "[root] module.gke_base.module.tiller.kubernetes_cluster_role_binding.tiller"
		"[root] module.gke_base.module.tiller.null_resource.wait_nodepool" -> "[root] module.gke_base.module.tiller.kubernetes_service_account.tiller"
		"[root] module.gke_base.module.tiller.null_resource.wait_nodepool" -> "[root] provider.null"
		"[root] root" -> "[root] module.gke_base.data.google_client_config.google-beta_current"
		"[root] root" -> "[root] module.gke_base.data.google_client_config.google_current"
		"[root] root" -> "[root] module.gke_base.data.google_dns_managed_zone.avenuecode"
		"[root] root" -> "[root] module.gke_base.module.gke-vpc.google_compute_network.vpc"
		"[root] root" -> "[root] module.gke_base.module.gke.google_project_service.cloudresourcemanager-api"
	}
}

I'm kinda lost at the moment.


ryanm101 commented Oct 9, 2019

digraph {
	compound = "true"
	newrank = "true"
	subgraph "root" {
		"[root] data.aws_caller_identity.current" [label = "data.aws_caller_identity.current", shape = "box"]
		"[root] data.aws_region.region" [label = "data.aws_region.region", shape = "box"]
		"[root] module.batch_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda" [label = "module.batch_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda", shape = "box"]
		"[root] module.batch_cloudwatch_triggers.aws_cloudwatch_event_target.lambda" [label = "module.batch_cloudwatch_triggers.aws_cloudwatch_event_target.lambda", shape = "box"]
		"[root] module.batch_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger" [label = "module.batch_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger", shape = "box"]
		"[root] module.batch_cloudwatch_triggers.var.lambda_arns" [label = "module.batch_cloudwatch_triggers.var.lambda_arns", shape = "note"]
		"[root] module.batch_cloudwatch_triggers.var.lambda_names" [label = "module.batch_cloudwatch_triggers.var.lambda_names", shape = "note"]
		"[root] module.batch_cloudwatch_triggers.var.schedule_expression" [label = "module.batch_cloudwatch_triggers.var.schedule_expression", shape = "note"]
		"[root] module.batch_py_lambda_roles.aws_iam_role.lambda_role" [label = "module.batch_py_lambda_roles.aws_iam_role.lambda_role", shape = "box"]
		"[root] module.batch_py_lambda_roles.aws_iam_role_policy_attachment.logging_policy" [label = "module.batch_py_lambda_roles.aws_iam_role_policy_attachment.logging_policy", shape = "box"]
		"[root] module.batch_py_lambda_roles.output.role_arns" [label = "module.batch_py_lambda_roles.output.role_arns", shape = "note"]
		"[root] module.batch_py_lambda_roles.output.roles" [label = "module.batch_py_lambda_roles.output.roles", shape = "note"]
		"[root] module.batch_py_lambda_roles.var.postfix" [label = "module.batch_py_lambda_roles.var.postfix", shape = "note"]
		"[root] module.batch_py_lambda_roles.var.role_names" [label = "module.batch_py_lambda_roles.var.role_names", shape = "note"]
		"[root] module.batch_pylambda.aws_iam_policy.lambda_s3_policy" [label = "module.batch_pylambda.aws_iam_policy.lambda_s3_policy", shape = "box"]
		"[root] module.batch_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach" [label = "module.batch_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach", shape = "box"]
		"[root] module.batch_pylambda.aws_lambda_function.API_lambda_function" [label = "module.batch_pylambda.aws_lambda_function.API_lambda_function", shape = "box"]
		"[root] module.batch_pylambda.output.arn" [label = "module.batch_pylambda.output.arn", shape = "note"]
		"[root] module.batch_pylambda.output.function_name" [label = "module.batch_pylambda.output.function_name", shape = "note"]
		"[root] module.batch_pylambda.var.bucket" [label = "module.batch_pylambda.var.bucket", shape = "note"]
		"[root] module.batch_pylambda.var.lambdaHandler" [label = "module.batch_pylambda.var.lambdaHandler", shape = "note"]
		"[root] module.batch_pylambda.var.lambda_names" [label = "module.batch_pylambda.var.lambda_names", shape = "note"]
		"[root] module.batch_pylambda.var.lambda_role_arns" [label = "module.batch_pylambda.var.lambda_role_arns", shape = "note"]
		"[root] module.batch_pylambda.var.lambda_src_base" [label = "module.batch_pylambda.var.lambda_src_base", shape = "note"]
		"[root] module.batch_pylambda.var.output_data_base" [label = "module.batch_pylambda.var.output_data_base", shape = "note"]
		"[root] module.batch_pylambda.var.postfix" [label = "module.batch_pylambda.var.postfix", shape = "note"]
		"[root] module.batch_pylambda.var.raw_data_base" [label = "module.batch_pylambda.var.raw_data_base", shape = "note"]
		"[root] module.batch_pylambda.var.runtime" [label = "module.batch_pylambda.var.runtime", shape = "note"]
		"[root] module.batch_pylambda.var.src_file" [label = "module.batch_pylambda.var.src_file", shape = "note"]
		"[root] module.live_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda" [label = "module.live_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda", shape = "box"]
		"[root] module.live_cloudwatch_triggers.aws_cloudwatch_event_target.lambda" [label = "module.live_cloudwatch_triggers.aws_cloudwatch_event_target.lambda", shape = "box"]
		"[root] module.live_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger" [label = "module.live_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger", shape = "box"]
		"[root] module.live_cloudwatch_triggers.var.lambda_arns" [label = "module.live_cloudwatch_triggers.var.lambda_arns", shape = "note"]
		"[root] module.live_cloudwatch_triggers.var.lambda_names" [label = "module.live_cloudwatch_triggers.var.lambda_names", shape = "note"]
		"[root] module.live_cloudwatch_triggers.var.schedule_expression" [label = "module.live_cloudwatch_triggers.var.schedule_expression", shape = "note"]
		"[root] module.live_py_lambda_roles.aws_iam_role.lambda_role" [label = "module.live_py_lambda_roles.aws_iam_role.lambda_role", shape = "box"]
		"[root] module.live_py_lambda_roles.aws_iam_role_policy_attachment.logging_policy" [label = "module.live_py_lambda_roles.aws_iam_role_policy_attachment.logging_policy", shape = "box"]
		"[root] module.live_py_lambda_roles.output.role_arns" [label = "module.live_py_lambda_roles.output.role_arns", shape = "note"]
		"[root] module.live_py_lambda_roles.output.roles" [label = "module.live_py_lambda_roles.output.roles", shape = "note"]
		"[root] module.live_py_lambda_roles.var.postfix" [label = "module.live_py_lambda_roles.var.postfix", shape = "note"]
		"[root] module.live_py_lambda_roles.var.role_names" [label = "module.live_py_lambda_roles.var.role_names", shape = "note"]
		"[root] module.live_pylambda.aws_iam_policy.lambda_s3_policy" [label = "module.live_pylambda.aws_iam_policy.lambda_s3_policy", shape = "box"]
		"[root] module.live_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach" [label = "module.live_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach", shape = "box"]
		"[root] module.live_pylambda.aws_lambda_function.API_lambda_function" [label = "module.live_pylambda.aws_lambda_function.API_lambda_function", shape = "box"]
		"[root] module.live_pylambda.output.arn" [label = "module.live_pylambda.output.arn", shape = "note"]
		"[root] module.live_pylambda.output.function_name" [label = "module.live_pylambda.output.function_name", shape = "note"]
		"[root] module.live_pylambda.var.bucket" [label = "module.live_pylambda.var.bucket", shape = "note"]
		"[root] module.live_pylambda.var.lambdaHandler" [label = "module.live_pylambda.var.lambdaHandler", shape = "note"]
		"[root] module.live_pylambda.var.lambda_names" [label = "module.live_pylambda.var.lambda_names", shape = "note"]
		"[root] module.live_pylambda.var.lambda_role_arns" [label = "module.live_pylambda.var.lambda_role_arns", shape = "note"]
		"[root] module.live_pylambda.var.lambda_src_base" [label = "module.live_pylambda.var.lambda_src_base", shape = "note"]
		"[root] module.live_pylambda.var.output_data_base" [label = "module.live_pylambda.var.output_data_base", shape = "note"]
		"[root] module.live_pylambda.var.postfix" [label = "module.live_pylambda.var.postfix", shape = "note"]
		"[root] module.live_pylambda.var.raw_data_base" [label = "module.live_pylambda.var.raw_data_base", shape = "note"]
		"[root] module.live_pylambda.var.runtime" [label = "module.live_pylambda.var.runtime", shape = "note"]
		"[root] module.live_pylambda.var.src_file" [label = "module.live_pylambda.var.src_file", shape = "note"]
		"[root] module.s3.aws_s3_bucket.bucket" [label = "module.s3.aws_s3_bucket.bucket", shape = "box"]
		"[root] module.s3.aws_s3_bucket_object.lambda_py_src" [label = "module.s3.aws_s3_bucket_object.lambda_py_src", shape = "box"]
		"[root] module.s3.aws_s3_bucket_policy.bucket_policy" [label = "module.s3.aws_s3_bucket_policy.bucket_policy", shape = "box"]
		"[root] module.s3.output.arn" [label = "module.s3.output.arn", shape = "note"]
		"[root] module.s3.output.bucket_domain_name" [label = "module.s3.output.bucket_domain_name", shape = "note"]
		"[root] module.s3.output.id" [label = "module.s3.output.id", shape = "note"]
		"[root] module.s3.output.pydummyfile" [label = "module.s3.output.pydummyfile", shape = "note"]
		"[root] module.s3.var.bucket_name" [label = "module.s3.var.bucket_name", shape = "note"]
		"[root] module.s3.var.lambda_role_arns" [label = "module.s3.var.lambda_role_arns", shape = "note"]
		"[root] module.s3.var.postfix" [label = "module.s3.var.postfix", shape = "note"]
		"[root] module.s3.var.src_path" [label = "module.s3.var.src_path", shape = "note"]
		"[root] output.batch_py_lambda_roles" [label = "output.batch_py_lambda_roles", shape = "note"]
		"[root] output.batch_py_lambdas" [label = "output.batch_py_lambdas", shape = "note"]
		"[root] output.bucket" [label = "output.bucket", shape = "note"]
		"[root] output.live_py_lambda_roles" [label = "output.live_py_lambda_roles", shape = "note"]
		"[root] output.live_py_lambdas" [label = "output.live_py_lambdas", shape = "note"]
		"[root] output.pydummyfile" [label = "output.pydummyfile", shape = "note"]
		"[root] provider.aws" [label = "provider.aws", shape = "diamond"]
		"[root] provider.aws (close)" [label = "provider.aws (close)", shape = "diamond"]
		"[root] var.batch_expression" [label = "var.batch_expression", shape = "note"]
		"[root] var.batch_py_services" [label = "var.batch_py_services", shape = "note"]
		"[root] var.cost_centre" [label = "var.cost_centre", shape = "note"]
		"[root] var.environment" [label = "var.environment", shape = "note"]
		"[root] var.lambdaHandler" [label = "var.lambdaHandler", shape = "note"]
		"[root] var.lambda_src_base" [label = "var.lambda_src_base", shape = "note"]
		"[root] var.live_expression" [label = "var.live_expression", shape = "note"]
		"[root] var.live_java_services" [label = "var.live_java_services", shape = "note"]
		"[root] var.live_py_services" [label = "var.live_py_services", shape = "note"]
		"[root] var.output_data_base" [label = "var.output_data_base", shape = "note"]
		"[root] var.postfix" [label = "var.postfix", shape = "note"]
		"[root] var.raw_data_base" [label = "var.raw_data_base", shape = "note"]
		"[root] var.timeout" [label = "var.timeout", shape = "note"]
		"[root] var.vpc_security_groups" [label = "var.vpc_security_groups", shape = "note"]
		"[root] var.vpc_subnets" [label = "var.vpc_subnets", shape = "note"]
		"[root] data.aws_caller_identity.current" -> "[root] provider.aws"
		"[root] data.aws_region.region" -> "[root] provider.aws"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] data.aws_caller_identity.current"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] data.aws_region.region"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] module.batch_cloudwatch_triggers.aws_cloudwatch_event_target.lambda"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] module.batch_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] module.live_cloudwatch_triggers.aws_cloudwatch_event_target.lambda"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] module.live_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] output.batch_py_lambda_roles"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] output.batch_py_lambdas"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] output.bucket"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] output.live_py_lambda_roles"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] output.live_py_lambdas"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] output.pydummyfile"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] var.cost_centre"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] var.live_java_services"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] var.timeout"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] var.vpc_security_groups"
		"[root] meta.count-boundary (EachMode fixup)" -> "[root] var.vpc_subnets"
		"[root] module.batch_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda" -> "[root] module.batch_cloudwatch_triggers.var.lambda_names"
		"[root] module.batch_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda" -> "[root] module.batch_cloudwatch_triggers.var.schedule_expression"
		"[root] module.batch_cloudwatch_triggers.aws_cloudwatch_event_target.lambda" -> "[root] module.batch_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda"
		"[root] module.batch_cloudwatch_triggers.aws_cloudwatch_event_target.lambda" -> "[root] module.batch_cloudwatch_triggers.var.lambda_arns"
		"[root] module.batch_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger" -> "[root] module.batch_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda"
		"[root] module.batch_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger" -> "[root] module.batch_cloudwatch_triggers.var.lambda_arns"
		"[root] module.batch_cloudwatch_triggers.var.lambda_arns" -> "[root] module.batch_pylambda.output.arn"
		"[root] module.batch_cloudwatch_triggers.var.lambda_names" -> "[root] module.batch_pylambda.output.function_name"
		"[root] module.batch_cloudwatch_triggers.var.schedule_expression" -> "[root] var.batch_expression"
		"[root] module.batch_py_lambda_roles.aws_iam_role.lambda_role" -> "[root] module.batch_py_lambda_roles.var.postfix"
		"[root] module.batch_py_lambda_roles.aws_iam_role.lambda_role" -> "[root] module.batch_py_lambda_roles.var.role_names"
		"[root] module.batch_py_lambda_roles.aws_iam_role.lambda_role" -> "[root] provider.aws"
		"[root] module.batch_py_lambda_roles.aws_iam_role_policy_attachment.logging_policy" -> "[root] module.batch_py_lambda_roles.aws_iam_role.lambda_role"
		"[root] module.batch_py_lambda_roles.output.role_arns" -> "[root] module.batch_py_lambda_roles.aws_iam_role.lambda_role"
		"[root] module.batch_py_lambda_roles.output.roles" -> "[root] module.batch_py_lambda_roles.aws_iam_role.lambda_role"
		"[root] module.batch_py_lambda_roles.var.postfix" -> "[root] var.environment"
		"[root] module.batch_py_lambda_roles.var.postfix" -> "[root] var.postfix"
		"[root] module.batch_py_lambda_roles.var.role_names" -> "[root] var.batch_py_services"
		"[root] module.batch_pylambda.aws_iam_policy.lambda_s3_policy" -> "[root] module.batch_pylambda.var.bucket"
		"[root] module.batch_pylambda.aws_iam_policy.lambda_s3_policy" -> "[root] module.batch_pylambda.var.lambda_names"
		"[root] module.batch_pylambda.aws_iam_policy.lambda_s3_policy" -> "[root] module.batch_pylambda.var.lambda_src_base"
		"[root] module.batch_pylambda.aws_iam_policy.lambda_s3_policy" -> "[root] module.batch_pylambda.var.postfix"
		"[root] module.batch_pylambda.aws_iam_policy.lambda_s3_policy" -> "[root] module.batch_pylambda.var.raw_data_base"
		"[root] module.batch_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach" -> "[root] module.batch_pylambda.aws_iam_policy.lambda_s3_policy"
		"[root] module.batch_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.batch_pylambda.var.bucket"
		"[root] module.batch_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.batch_pylambda.var.lambdaHandler"
		"[root] module.batch_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.batch_pylambda.var.lambda_names"
		"[root] module.batch_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.batch_pylambda.var.lambda_role_arns"
		"[root] module.batch_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.batch_pylambda.var.postfix"
		"[root] module.batch_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.batch_pylambda.var.raw_data_base"
		"[root] module.batch_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.batch_pylambda.var.runtime"
		"[root] module.batch_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.batch_pylambda.var.src_file"
		"[root] module.batch_pylambda.output.arn" -> "[root] module.batch_pylambda.aws_lambda_function.API_lambda_function"
		"[root] module.batch_pylambda.output.function_name" -> "[root] module.batch_pylambda.aws_lambda_function.API_lambda_function"
		"[root] module.batch_pylambda.var.bucket" -> "[root] module.s3.aws_s3_bucket_policy.bucket_policy"
		"[root] module.batch_pylambda.var.bucket" -> "[root] module.s3.output.arn"
		"[root] module.batch_pylambda.var.bucket" -> "[root] module.s3.output.bucket_domain_name"
		"[root] module.batch_pylambda.var.bucket" -> "[root] module.s3.output.id"
		"[root] module.batch_pylambda.var.bucket" -> "[root] module.s3.output.pydummyfile"
		"[root] module.batch_pylambda.var.lambdaHandler" -> "[root] var.lambdaHandler"
		"[root] module.batch_pylambda.var.lambda_names" -> "[root] var.batch_py_services"
		"[root] module.batch_pylambda.var.lambda_role_arns" -> "[root] module.batch_py_lambda_roles.output.role_arns"
		"[root] module.batch_pylambda.var.lambda_src_base" -> "[root] var.lambda_src_base"
		"[root] module.batch_pylambda.var.output_data_base" -> "[root] var.output_data_base"
		"[root] module.batch_pylambda.var.postfix" -> "[root] var.environment"
		"[root] module.batch_pylambda.var.postfix" -> "[root] var.postfix"
		"[root] module.batch_pylambda.var.raw_data_base" -> "[root] var.raw_data_base"
		"[root] module.batch_pylambda.var.src_file" -> "[root] module.s3.output.pydummyfile"
		"[root] module.live_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda" -> "[root] module.live_cloudwatch_triggers.var.lambda_names"
		"[root] module.live_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda" -> "[root] module.live_cloudwatch_triggers.var.schedule_expression"
		"[root] module.live_cloudwatch_triggers.aws_cloudwatch_event_target.lambda" -> "[root] module.live_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda"
		"[root] module.live_cloudwatch_triggers.aws_cloudwatch_event_target.lambda" -> "[root] module.live_cloudwatch_triggers.var.lambda_arns"
		"[root] module.live_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger" -> "[root] module.live_cloudwatch_triggers.aws_cloudwatch_event_rule.lambda"
		"[root] module.live_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger" -> "[root] module.live_cloudwatch_triggers.var.lambda_arns"
		"[root] module.live_cloudwatch_triggers.var.lambda_arns" -> "[root] module.live_pylambda.output.arn"
		"[root] module.live_cloudwatch_triggers.var.lambda_names" -> "[root] module.live_pylambda.output.function_name"
		"[root] module.live_cloudwatch_triggers.var.schedule_expression" -> "[root] var.live_expression"
		"[root] module.live_py_lambda_roles.aws_iam_role.lambda_role" -> "[root] module.live_py_lambda_roles.var.postfix"
		"[root] module.live_py_lambda_roles.aws_iam_role.lambda_role" -> "[root] module.live_py_lambda_roles.var.role_names"
		"[root] module.live_py_lambda_roles.aws_iam_role.lambda_role" -> "[root] provider.aws"
		"[root] module.live_py_lambda_roles.aws_iam_role_policy_attachment.logging_policy" -> "[root] module.live_py_lambda_roles.aws_iam_role.lambda_role"
		"[root] module.live_py_lambda_roles.output.role_arns" -> "[root] module.live_py_lambda_roles.aws_iam_role.lambda_role"
		"[root] module.live_py_lambda_roles.output.roles" -> "[root] module.live_py_lambda_roles.aws_iam_role.lambda_role"
		"[root] module.live_py_lambda_roles.var.postfix" -> "[root] var.environment"
		"[root] module.live_py_lambda_roles.var.postfix" -> "[root] var.postfix"
		"[root] module.live_py_lambda_roles.var.role_names" -> "[root] var.live_py_services"
		"[root] module.live_pylambda.aws_iam_policy.lambda_s3_policy" -> "[root] module.live_pylambda.var.bucket"
		"[root] module.live_pylambda.aws_iam_policy.lambda_s3_policy" -> "[root] module.live_pylambda.var.lambda_names"
		"[root] module.live_pylambda.aws_iam_policy.lambda_s3_policy" -> "[root] module.live_pylambda.var.lambda_src_base"
		"[root] module.live_pylambda.aws_iam_policy.lambda_s3_policy" -> "[root] module.live_pylambda.var.postfix"
		"[root] module.live_pylambda.aws_iam_policy.lambda_s3_policy" -> "[root] module.live_pylambda.var.raw_data_base"
		"[root] module.live_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach" -> "[root] module.live_pylambda.aws_iam_policy.lambda_s3_policy"
		"[root] module.live_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.live_pylambda.var.bucket"
		"[root] module.live_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.live_pylambda.var.lambdaHandler"
		"[root] module.live_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.live_pylambda.var.lambda_names"
		"[root] module.live_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.live_pylambda.var.lambda_role_arns"
		"[root] module.live_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.live_pylambda.var.postfix"
		"[root] module.live_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.live_pylambda.var.raw_data_base"
		"[root] module.live_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.live_pylambda.var.runtime"
		"[root] module.live_pylambda.aws_lambda_function.API_lambda_function" -> "[root] module.live_pylambda.var.src_file"
		"[root] module.live_pylambda.output.arn" -> "[root] module.live_pylambda.aws_lambda_function.API_lambda_function"
		"[root] module.live_pylambda.output.function_name" -> "[root] module.live_pylambda.aws_lambda_function.API_lambda_function"
		"[root] module.live_pylambda.var.bucket" -> "[root] module.s3.aws_s3_bucket_policy.bucket_policy"
		"[root] module.live_pylambda.var.bucket" -> "[root] module.s3.output.arn"
		"[root] module.live_pylambda.var.bucket" -> "[root] module.s3.output.bucket_domain_name"
		"[root] module.live_pylambda.var.bucket" -> "[root] module.s3.output.id"
		"[root] module.live_pylambda.var.bucket" -> "[root] module.s3.output.pydummyfile"
		"[root] module.live_pylambda.var.lambdaHandler" -> "[root] var.lambdaHandler"
		"[root] module.live_pylambda.var.lambda_names" -> "[root] var.live_py_services"
		"[root] module.live_pylambda.var.lambda_role_arns" -> "[root] module.live_py_lambda_roles.output.role_arns"
		"[root] module.live_pylambda.var.lambda_src_base" -> "[root] var.lambda_src_base"
		"[root] module.live_pylambda.var.output_data_base" -> "[root] var.output_data_base"
		"[root] module.live_pylambda.var.postfix" -> "[root] var.environment"
		"[root] module.live_pylambda.var.postfix" -> "[root] var.postfix"
		"[root] module.live_pylambda.var.raw_data_base" -> "[root] var.raw_data_base"
		"[root] module.live_pylambda.var.src_file" -> "[root] module.s3.output.pydummyfile"
		"[root] module.s3.aws_s3_bucket.bucket" -> "[root] module.s3.var.bucket_name"
		"[root] module.s3.aws_s3_bucket.bucket" -> "[root] module.s3.var.postfix"
		"[root] module.s3.aws_s3_bucket.bucket" -> "[root] provider.aws"
		"[root] module.s3.aws_s3_bucket_object.lambda_py_src" -> "[root] module.s3.aws_s3_bucket.bucket"
		"[root] module.s3.aws_s3_bucket_object.lambda_py_src" -> "[root] module.s3.var.src_path"
		"[root] module.s3.aws_s3_bucket_policy.bucket_policy" -> "[root] module.s3.aws_s3_bucket.bucket"
		"[root] module.s3.aws_s3_bucket_policy.bucket_policy" -> "[root] module.s3.var.lambda_role_arns"
		"[root] module.s3.output.arn" -> "[root] module.s3.aws_s3_bucket.bucket"
		"[root] module.s3.output.bucket_domain_name" -> "[root] module.s3.aws_s3_bucket.bucket"
		"[root] module.s3.output.id" -> "[root] module.s3.aws_s3_bucket.bucket"
		"[root] module.s3.output.pydummyfile" -> "[root] module.s3.aws_s3_bucket_object.lambda_py_src"
		"[root] module.s3.var.lambda_role_arns" -> "[root] module.batch_py_lambda_roles.output.role_arns"
		"[root] module.s3.var.lambda_role_arns" -> "[root] module.live_py_lambda_roles.output.role_arns"
		"[root] module.s3.var.postfix" -> "[root] var.environment"
		"[root] module.s3.var.postfix" -> "[root] var.postfix"
		"[root] module.s3.var.src_path" -> "[root] var.lambda_src_base"
		"[root] output.batch_py_lambda_roles" -> "[root] module.batch_py_lambda_roles.aws_iam_role_policy_attachment.logging_policy"
		"[root] output.batch_py_lambda_roles" -> "[root] module.batch_py_lambda_roles.output.role_arns"
		"[root] output.batch_py_lambda_roles" -> "[root] module.batch_py_lambda_roles.output.roles"
		"[root] output.batch_py_lambdas" -> "[root] module.batch_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach"
		"[root] output.batch_py_lambdas" -> "[root] module.batch_pylambda.output.arn"
		"[root] output.batch_py_lambdas" -> "[root] module.batch_pylambda.output.function_name"
		"[root] output.batch_py_lambdas" -> "[root] module.batch_pylambda.var.output_data_base"
		"[root] output.bucket" -> "[root] module.s3.aws_s3_bucket_policy.bucket_policy"
		"[root] output.bucket" -> "[root] module.s3.output.arn"
		"[root] output.bucket" -> "[root] module.s3.output.bucket_domain_name"
		"[root] output.bucket" -> "[root] module.s3.output.id"
		"[root] output.bucket" -> "[root] module.s3.output.pydummyfile"
		"[root] output.live_py_lambda_roles" -> "[root] module.live_py_lambda_roles.aws_iam_role_policy_attachment.logging_policy"
		"[root] output.live_py_lambda_roles" -> "[root] module.live_py_lambda_roles.output.role_arns"
		"[root] output.live_py_lambda_roles" -> "[root] module.live_py_lambda_roles.output.roles"
		"[root] output.live_py_lambdas" -> "[root] module.live_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach"
		"[root] output.live_py_lambdas" -> "[root] module.live_pylambda.output.arn"
		"[root] output.live_py_lambdas" -> "[root] module.live_pylambda.output.function_name"
		"[root] output.live_py_lambdas" -> "[root] module.live_pylambda.var.output_data_base"
		"[root] output.pydummyfile" -> "[root] module.s3.output.pydummyfile"
		"[root] provider.aws (close)" -> "[root] data.aws_caller_identity.current"
		"[root] provider.aws (close)" -> "[root] data.aws_region.region"
		"[root] provider.aws (close)" -> "[root] module.batch_cloudwatch_triggers.aws_cloudwatch_event_target.lambda"
		"[root] provider.aws (close)" -> "[root] module.batch_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger"
		"[root] provider.aws (close)" -> "[root] module.batch_py_lambda_roles.aws_iam_role_policy_attachment.logging_policy"
		"[root] provider.aws (close)" -> "[root] module.batch_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach"
		"[root] provider.aws (close)" -> "[root] module.live_cloudwatch_triggers.aws_cloudwatch_event_target.lambda"
		"[root] provider.aws (close)" -> "[root] module.live_cloudwatch_triggers.aws_lambda_permission.cloudwatch_trigger"
		"[root] provider.aws (close)" -> "[root] module.live_py_lambda_roles.aws_iam_role_policy_attachment.logging_policy"
		"[root] provider.aws (close)" -> "[root] module.live_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach"
		"[root] root" -> "[root] meta.count-boundary (EachMode fixup)"
		"[root] root" -> "[root] provider.aws (close)"
	}
}

Same issue here; neither a one-step nor a two-step destroy works, on Terraform 0.12.9.

Error: Cycle: module.s3.aws_s3_bucket_object.lambda_py_src (destroy), module.batch_pylambda.aws_lambda_function.API_lambda_function[0] (destroy), module.live_pylambda.aws_lambda_function.API_lambda_function[0] (destroy), module.live_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach[0] (destroy), module.live_pylambda.aws_iam_policy.lambda_s3_policy[0] (destroy), module.s3.aws_s3_bucket.bucket (destroy), module.batch_pylambda.var.bucket, output.batch_py_lambdas, module.batch_pylambda.aws_iam_role_policy_attachment.lambda_s3_policy_attach[0] (destroy), module.batch_pylambda.aws_iam_policy.lambda_s3_policy[0] (destroy), module.s3.aws_s3_bucket_policy.bucket_policy (destroy), module.live_pylambda.var.bucket, output.live_py_lambdas

I am using count inside modules, with the counts depending on lists being passed in.

The strange thing is that this was working fine for a while, during which I did a lot of testing and tearing down.
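
For context, this is roughly the shape in question: a minimal sketch (all names here are hypothetical, not the actual configuration) of a module whose resource count is driven by a list passed in from the caller, with the results re-exported as a module output.

variable "role_names" {
  type = list(string)
}

resource "aws_iam_role" "lambda_role" {
  # One role per entry in the list supplied by the calling module.
  count = length(var.role_names)
  name  = var.role_names[count.index]

  # Placeholder trust policy for illustration only.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

output "role_arns" {
  # Downstream modules consume this list (see the graph edges into
  # module.s3.var.lambda_role_arns above).
  value = aws_iam_role.lambda_role[*].arn
}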

@ryanm101

ryanm101 commented Oct 9, 2019

I got it to destroy by targeting the dependent modules first and then running an untargeted destroy:

terraform destroy -var-file=dev.tfvars.json -target=module.batch_cloudwatch_triggers
terraform destroy -var-file=dev.tfvars.json -target=module.batch_pylambda
terraform destroy -var-file=dev.tfvars.json -target=module.live_cloudwatch_triggers
terraform destroy -var-file=dev.tfvars.json -target=module.live_pylambda
terraform destroy -var-file=dev.tfvars.json

@aceferreira

Same error even after upgrading to v0.12.13.

@tbondarchuk

Seems to be fixed in 0.12.15.

Tested with @meyertime's example:

locals {
  # Local value derived from a data source attribute, used below in count.
  l = data.template_file.d.rendered
}

data "template_file" "d" {
  template = "true"
}

resource "null_resource" "a" {
  # count driven by the data-source-derived local is the shape that
  # previously produced the cycle on destroy.
  count = local.l ? 1 : 0
}

On 0.12.13 the two-step destroy fails with the cycle error; on 0.12.15 the two-step destroy works without any issues.

More complicated code works as well.

@edli2

edli2 commented Nov 15, 2019

That's great news! I just tested with versions 0.12.14 and 0.12.15, and both of them now work with a two-step destroy for me.

@teamterraform

Thanks for confirming that this behavior has improved in 0.12.14, @aliusmiles and @edli2. It's likely that either #22937 or #22976 was responsible for the changed behavior.

We know that there are still some remaining cases that can lead to cycles, so if you find yourself with a similar error message or situation after upgrading to Terraform 0.12.13 or later please open a new issue and complete the issue template so that we can gather a fresh set of reproduction steps against the improved graph construction behavior. The changes linked above have invalidated the debugging work that everyone did above in this issue by changing the graph shape, so we're going to lock this issue just to reinforce that any situations with similar symptoms will need to be reproduced and debugged again in a new issue against the latest versions of Terraform.

Thanks for all the help in digging into this issue, everyone!

hashicorp locked as resolved and limited conversation to collaborators Nov 16, 2019