Running an empty destroy with an interpolated output in module fails #17862

Closed
dvishniakov opened this issue Apr 13, 2018 · 16 comments

@dvishniakov

dvishniakov commented Apr 13, 2018

Seems like another scenario where Terraform fails to run destroy. It is reproducible on an empty setup, even without creating any resources.
Since it's reproducible without provider credentials, is it possible to add tests?
#17768 fixed the code but didn't add any tests.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request

Note: this block was copy-pasted from terraform-provider-aws

Terraform Version

Affected: 
0.11.4, 0.11.5
0.11.6, 0.11.7 (output differs a little bit from 0.11.4, 0.11.5)

Terraform Configuration Files

main.tf

provider "aws" {
  region  = "us-east-1"
  version = "~> 1.14"
}
terraform {
  required_version = ">= 0.11.0"
}
module "cluster" {
  source = "./cluster"
}
output "main_cluster_output" {
  value = "${module.cluster.cluster_output}" # produces an error
}
output "main_app_output" {
  value = "${module.cluster.app_output}"
}

# The same resource and output, defined below at the root level, work

resource "aws_ecs_cluster" "default_cluster" {
  name = "cluster_name"
}
output "cluster_output" {
  value = {
    "cluster_id"   = "${aws_ecs_cluster.default_cluster.id}"
    "cluster_name" = "${aws_ecs_cluster.default_cluster.name}"
  }
}

cluster/cluster-main.tf

resource "aws_ecs_cluster" "default_cluster" {
  name = "cluster_name"
}
resource "aws_ecr_repository" "default_ecr" {
  name = "test-bug"
}
output "cluster_output" {
  value = {
    "cluster_id" = "${aws_ecs_cluster.default_cluster.id}"

    # "repository_url" = "${aws_ecr_repository.default_ecr.repository_url}" # if you uncomment this line, error from it hides default_cluster errors
    "cluster_name" = "${aws_ecs_cluster.default_cluster.name}"
  }
}
output "app_output" {
  value = {
    "repository_url" = "${aws_ecr_repository.default_ecr.repository_url}"
  }
}

Debug Output

https://gist.github.com/dvishniakov/58ab1fef3126665d85c63f13803b3b05

Expected Behavior

No errors, successful output, and exit code 0.

Actual Behavior

Error: Error applying plan:

TF 0.11.4, 0.11.5:

3 error(s) occurred:

  • module.cluster.output.cluster_output: variable "default_cluster" is nil, but no error was reported
  • output.cluster_output: variable "default_cluster" is nil, but no error was reported
  • module.cluster.output.app_output: variable "default_ecr" is nil, but no error was reported

TF 0.11.6, 0.11.7:

2 error(s) occurred:

  • module.cluster.output.cluster_output: variable "default_cluster" is nil, but no error was reported
  • module.cluster.output.app_output: variable "default_ecr" is nil, but no error was reported

Steps to Reproduce

  1. terraform init
  2. terraform plan -input=false --destroy -out=terraform.plan
  3. terraform apply -input=false terraform.plan

References

#17691, probably partially fixed by @jbardin in #17768

@dvishniakov
Author

@apparentlymart FYI

@holyketzer

I have the same issue on v0.11.5:

resource "aws_subnet" "mysubnet" {
  vpc_id     = "${var.vpc_id}"
  cidr_block = "10.43.${var.cidr_range}.0/24"

  tags = {
    Name = "${var.tag}"
  }
}

output "subnet_id" {
  value = "${aws_subnet.mysubnet.id}"
}

The error occurs during destroy if the apply failed.
Does anyone have a workaround? I tried this:

output "subnet_id" {
  value = "${aws_subnet.mysubnet ? aws_subnet.mysubnet.id : ""}"
}

but it doesn't work; it fails with: Error reading config for output subnet_id: aws_subnet.mysubnet: resource variables must be three parts: TYPE.NAME.ATTR in

@holyketzer

holyketzer commented May 28, 2018

Cool, after a couple of hours I found a workaround:

Inspired by #16681 (comment)

We don't need several aws_subnet objects, but we can treat the single resource as an array with one element!

So this works:

resource "aws_subnet" "mysubnet" {
  count      = 1
  vpc_id     = "${var.vpc_id}"
  cidr_block = "10.43.${var.cidr_range}.0/24"

  tags = {
    Name = "${var.tag}"
  }
}

output "subnet_id" {
  value = "${element(concat(aws_subnet.mysubnet.*.id, list("")), 0)}"
}

Don't forget to fix references to this resource in other places; in my case, from "${aws_subnet.mysubnet.id}" to "${aws_subnet.mysubnet.0.id}", as sketched below.
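
For example, a dependent resource would then reference the single element by index. The aws_route_table_association below is only a hypothetical consumer, not part of the original config:

resource "aws_route_table_association" "mysubnet" {
  # .0 index is required because the subnet is now declared with count = 1
  subnet_id      = "${aws_subnet.mysubnet.0.id}"
  route_table_id = "${var.route_table_id}" # assumed to exist as a variable
}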

@jantman

jantman commented Jul 25, 2018

I'm seeing this as well on 0.11.7 with:

* provider.archive: version = "~> 1.0"
* provider.aws: version = "~> 1.28"
* provider.consul: version = "~> 2.1"
* provider.datadog: version = "~> 1.0"
* provider.null: version = "~> 1.0"
* provider.template: version = "~> 1.0"

In my case the initial destroy failed after actually deleting everything; I'm left with an empty state file but every destroy fails with these errors for interpolated outputs:

* module.bento-githook-proxy.output.hook_url: Resource 'aws_api_gateway_rest_api.rest_api' does not have attribute 'id' for variable 'aws_api_gateway_rest_api.rest_api.id'
* module.bento_iam_role.output.ip_arn: variable "bento" is nil, but no error was reported
* module.bento_iam_role.output.arn: variable "bento" is nil, but no error was reported

@dvishniakov
Author

@jantman have you tried TF_WARN_OUTPUT_ERRORS=1 terraform destroy?

@durkode

durkode commented Aug 4, 2018

+1 for fixing this issue. If a single error occurs during our delete process, our environment is left in a half-destroyed yet undestroyable state. The TF_WARN_OUTPUT_ERRORS=1 fix works, but I feel it is a hack masking the underlying problem.

@sulabhk

sulabhk commented Aug 30, 2018

The TF_WARN_OUTPUT_ERRORS variable does not seem to work with 0.11.8 on Linux 64-bit. Please refer to this link as well: https://github.com/hashicorp/terraform/pull/17768#issuecomment-417281234

@dee-kryvenko

dee-kryvenko commented Sep 11, 2018

I just found another use case related to that:

provider "random" {
  version = "= 2.0.0"
}

resource "random_id" "test_suffix" {
  byte_length = 2
}

module "some_awesome_module" {
  source = "../.."

  foo = "bar-${random_id.test_suffix.hex}"
}

I'm getting:

       * module.some_awesome_module.var.foo: variable "test_suffix" is nil, but no error was reported

TF_WARN_OUTPUT_ERRORS is obviously not going to make any difference, as in this case the problem has nothing to do with outputs. The same applies to random_string. It is reproducible on both 0.11.8 and 0.11.3.

The issue has been there for a while, preventing us from implementing integration testing with kitchen-terraform and InSpec, even for basic cases such as a simple setup using the random provider. cc @ncs-alane - maybe some workaround can be done on the Kitchen side?
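
For reference, the count/splat workaround described earlier in this thread might be adaptable to this case as well; this is only a sketch and has not been verified against this setup:

resource "random_id" "test_suffix" {
  count       = 1
  byte_length = 2
}

module "some_awesome_module" {
  source = "../.."

  # element(concat(..., list(""))) falls back to an empty string when
  # random_id.test_suffix is absent from state (e.g. during an empty destroy).
  foo = "bar-${element(concat(random_id.test_suffix.*.hex, list("")), 0)}"
}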

@ncs-alane

Hey @llibicpep,

If you would like to open an issue against Kitchen-Terraform, we may be able to identify a workflow or configuration to avoid your particular problem.

@dee-kryvenko

Created newcontext-oss/kitchen-terraform#271

@hpuc

hpuc commented Oct 11, 2018

We have the same problem with Terraform v0.11.8 on Linux 64-bit. The output looks a little different:

  • module.compute.output.service_endpoint: Resource 'aws_lb.ext-alb' does not have attribute 'dns_name' for variable 'aws_lb.ext-alb.dns_name'
  • module.compute.output.bastion_host_public_ip: Resource 'aws_instance.bastion_host' does not have attribute 'public_ip' for variable 'aws_instance.bastion_host.public_ip'

With TF_WARN_OUTPUT_ERRORS=1, the problem disappears, but it should not happen in the first place IMHO.

@vlad2

vlad2 commented Oct 15, 2018

Hello,

Any updates on this bug? It's quite a problem to not be able to destroy resources.

dahlke pushed a commit to hashicorp-modules/hashistack-azure that referenced this issue Dec 20, 2018
@bernadinm

This is still a problem with variables:

$ TF_WARN_OUTPUT_ERRORS=1 terraform destroy --auto-approve

Error: Error applying plan:

1 error(s) occurred:

* module.dcos.local.cluster_name: local.cluster_name: variable "id" is nil, but no error was reported

@displague

displague commented Mar 7, 2019

@bernadinm I overcame a very similar-sounding problem in 0.11 using this approach:

locals {
  result = {
    command = ""
  }
  kubeadm_join_results = "${concat(data.external.kubeadm_join.*.result, list(local.result))}"
  kubeadm_join_command = "${lookup(local.kubeadm_join_results["0"], "command", "")}"
}

output "kubeadm_join_command" {
  depends_on = ["null_resource.masters_provisioner"]
  value      = "${local.kubeadm_join_command}"
}

Full description here: https://groups.google.com/d/msg/terraform-tool/y9H4rAVOcLA/G5iftvErBwAJ

@hashibot
Contributor

Hello! 🤖

This issue relates to an older version of Terraform that is no longer in active development, and because the area of Terraform it relates to has changed significantly since the issue was opened we suspect that the issue is either fixed or that the circumstances around it have changed enough that we'd need an updated issue report in order to reproduce and address it.

If you're still seeing this or a similar issue in the latest version of Terraform, please do feel free to open a new bug report! Please be sure to include all of the information requested in the template, even if it might seem redundant with the information already shared in this issue, because the internal details relating to this problem are likely to be different in the current version of Terraform.

Thanks!

@ghost

ghost commented Sep 27, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Sep 27, 2019