
file(path) - flag for lazy evaluation of file's existence #10878

Closed
stephanlindauer opened this issue Dec 21, 2016 · 24 comments

Comments

@stephanlindauer

Affected Resource(s)

  • file(path)

Expected Behavior

I'd like a flag I can set so that Terraform doesn't abort terraform apply when a referenced file doesn't exist yet, as long as it will exist by the time it is actually needed (because I run a local_exec before that). In my current setup ( https://github.com/stephanlindauer/terra-aws-core-kube ), I have to commit dummy files to satisfy this requirement. It would be nice to have an optional parameter that defers the check of the file's existence to the point where the file is actually read:
file(path, false) or something like this.

Actual Behavior

terraform apply fails

Steps to Reproduce

Reference a file that doesn't exist with file(path), then run terraform apply.
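A minimal sketch of the reproduction (resource names and the generated file path are hypothetical): file() is evaluated at plan time, before the provisioner below has had a chance to create the file, so the run aborts.

```hcl
# Hypothetical minimal reproduction.
resource "null_resource" "generate" {
  provisioner "local-exec" {
    command = "echo hello > ${path.module}/generated.txt"
  }
}

resource "null_resource" "consume" {
  triggers = {
    # Fails at plan time with "no such file or directory",
    # even though the provisioner above would create the file.
    content = "${file("${path.module}/generated.txt")}"
  }
}
```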

@apparentlymart
Contributor

Hi @stephanlindauer. Thanks for creating this feature request!

The idiom of using a local-exec to write a file to disk to later read with file is generally a workaround for some other missing functionality in Terraform, so I'd be interested to delve a little more into what you're trying to do here and whether there's a higher-level Terraform feature that would serve the need better.

Looking at your repo, one example I see is hitting the discovery endpoint on etcd. It would seem reasonable to me for Terraform to have a resource to interact with the etcd discovery service directly, removing the need for you to run curl and redirect to a file.

The other thing I see in your repo is a script that generates TLS certificates. We added the TLS provider to Terraform to allow certificates to be generated inline without the need to shell out to other programs, though I do see that you have some extra steps to interact with kubectl to configure all of those certs; would this use-case be served by Terraform having its own Kubernetes provider, I wonder?

In 0.8 we added an "escape hatch" to allow running external programs to gather data without the need to temporarily stash the data into an on-disk file, in the form of the external data source. Unfortunately it seems like a poor fit for both of your use-cases as-is because they are both stateful things: do some work once and retain the result for future runs. It was my intent to in future create an external resource too, which would then allow stateful things to be represented; I just wanted to see how/whether the data source would be used first before getting into that more complex scenario.
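For reference, the external data source mentioned above runs a program that prints a JSON object of string values to stdout; a minimal sketch (the script is a placeholder):

```hcl
# Minimal sketch of the external data source. The program must
# emit a flat JSON object whose values are all strings.
data "external" "example" {
  program = ["sh", "-c", "echo '{\"greeting\": \"hello\"}'"]
}

output "greeting" {
  value = "${data.external.example.result["greeting"]}"
}
```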

In general I'd prefer to see Terraform add features that allow us to avoid the need to write temporary files to disk rather than create features to make temporary files smoother to use. Perhaps the above features are not the right ones, but I think we can find a cleaner way to model your use-cases here than local-exec+file(..).

@stephanlindauer
Author

Hi Martin, thanks for your extensive feedback.

I think there are probably a lot of very similar use-cases that have nothing to do with certs or Kubernetes. For me, having more flexible options for local-exec+file(..) makes Terraform better equipped for situations where it doesn't (yet, or ever) supply an out-of-the-box feature for that exact use-case.
Using built-in features is great when one exactly fits the requirements; when none does, it's nice to have a richer set of options to "freestyle".

So thanks for looking into this use-case and potentially taking it into consideration for future features.

@munnerz

munnerz commented May 22, 2017

I'd like to add my support for this.

I have a template_dir resource, and then a null_resource with a local_exec provisioner and a count equal to the number of items I want to do something with (each of which exists in the template_dir). This means the null_resource runs once per item I care about within the template_dir (some of those files may be ignored, depending on the user's tfvars).

I would like to run the null_resource stage only if the particular file within the template_dir has changed. Currently I rely on template_dir.name.id to detect changes to files, which unfortunately also takes file metadata into account, since it is the SHA sum of a tar file of each directory contained within it (see: https://github.com/hashicorp/terraform/blob/master/builtin/providers/template/resource_template_dir.go#L179).

An alternative, given that I know the name of the file whose hash I want to check, is to prepend the destination_dir of the template_dir to the file's name and use the sha256 of that file as a trigger for my null_resource. This works great, except when the file doesn't exist in the first place, in which case Terraform fails.

If we were able to set a flag as this issue proposed, I'd be able to deal with this case easily.

Alternatively, for my single use-case (I'm not 100% sure how evaluation in Terraform works), something like the following may work if there were a file_exists function:

triggers {
  something = "${file_exists("${template_dir.dir.destination_dir}/${element(var.active, count.index)}") ? file("${template_dir.dir.destination_dir}/${element(var.active, count.index)}") : ""}"
}

Although that does depend on Terraform not evaluating file("${template_dir.dir.destination_dir}/${element(var.active, count.index)}") if file_exists returns false.

@apparentlymart
Contributor

In the meantime Terraform has gained the local provider, which currently has a resource for creating a file on local disk.

Creating a companion local_file data source would be the most robust way to solve this problem, I think. That way it can participate in the dependency graph so that it can be scheduled properly with respect to the resource that contains the provisioner that creates the file in question.

With the benefit of hindsight, the file interpolation function may have been a mistake since functions don't participate in the graph, but it at least remains a convenient way to get files that are statically available alongside the configuration, such as templates.

@so0k

so0k commented Aug 31, 2017

@apparentlymart - we are seeing some very strange behaviour with the local_file resource and template_file data source.

We are trying to generate the same template count times and concatenate the rendered output into a local_file as follows:

data "template_file" "namespaces" {
  count = "${length(var.namespaces)}"
  template = "${file("${path.module}/resources/namespace.tpl")}"
  vars {
    namespace = "${element(var.namespaces, count.index)}"
  }
}
resource "local_file" "bootstrap_namespaces" {
  content  = "${join("---\n", data.template_file.namespaces.*.rendered)}"
  filename = "${var.output_dir}/namespaces.yaml"
}
terraform version 0.10.2

on the first terraform apply, the file would be created correctly - however, on the 2nd terraform apply - the file would be deleted. Every odd run the file would be created and every even run it would be deleted. Are there existing issues to track that observe this behaviour?

@so0k

so0k commented Aug 31, 2017

Oops, I wasn't able to reproduce the behaviour in a test project outside of our current project. This is my sample project, so maybe it's something else; I'll clarify when I find out the cause.

@so0k

so0k commented Sep 1, 2017

Sorry for hijacking this thread (although it seems related to lazy eval). We were able to reproduce the behaviour I described above... the cause was an interaction between template_dir and local_file:

  1. template_dir resource generated rendered templates in target directory, but one of the files needed to be the concatenated result of source templates based on a count, so we handled that file separately
  2. local_file generated that additional file into the template_dir target directory

The result was that on the first run, template_dir generated the target directory and local_file added the additional file (there's a race condition here actually, but we can force the dependency)

on the second run, refresh tainted the template_dir resource, but the local_file was untainted, so it re-rendered the template_dir without the local_file in it - clearing the local file (forcing dependencies didn't prevent this, how are resources tainted down the dependency chain?)

on the third run, refresh tainted the local_file, but the template_dir was fine, so the local_file was created again - which would cause the next run to act like the second run described above.

rinse, repeat.

So, I'm not sure if a lazy eval flag would help with this scenario?

To fix it, we had to live with handling specially rendered files differently and separate the target directories. After some restructuring, that works for us (but may not work for everyone).

Edit: We found another fix, we use a template_file as input for one of the variables to the template_dir and put the value inside the expected file in the template_dir - no need for a special case local_file causing mayhem

@apparentlymart
Contributor

apparentlymart commented Sep 1, 2017

Hi @so0k,

What you described here sounds like a different problem. Sorry for the weirdness.

The template_dir data source is not strictly compliant with the expectations for a data source since, for pragmatic reasons, it creates persistent data on disk rather than just loading and returning data. Terraform is assuming it is a well-behaved data source while in practice it's a bit of a cheat.

The different approach you described of keeping the two resources separate is the approach I would recommend here. In an ideal world this data source would not create anything on disk at all, but that's not really possible within current constraints, so it's best to act as if it is a pure data source, not expecting its result to persist between runs.

If you'd like to dig into this some more I think it would be best to make a new issue in the template provider's repository so we can keep this issue focused.

@so0k

so0k commented Sep 1, 2017

Thank you @apparentlymart - again, sorry for polluting this thread with a different problem, somehow my keywords for my issue turned up this thread, so hopefully others with the same issue who land here can resolve their issue through my explanation. Also note we found an alternative fix (which I edited in to the comment above). I will now stop posting in this thread :)

@piotr-napadlek

Hi, any progress on this one? In our project we rely on the existence of some jars and their checksums to push them to ECR (with the help of null_resource local_exec).
Everything is fine when we build and deploy in a pipeline, but the problem starts when we want to clean up and destroy the infrastructure: Terraform complains that the file does not exist. It feels strange that Terraform tries to destroy a resource that never existed (null_resource). I would need to check whether the file exists, something like mentioned above:

triggers {
  something = "${file_exists("${template_dir.dir.destination_dir}/${element(var.active, count.index)}") ? file("${template_dir.dir.destination_dir}/${element(var.active, count.index)}") : ""}"
}

Any ideas for a workaround?

@digitalkaoz

triggers {
   something = "${! file_exists("foo.zip")}"
}

would be awesome!

@apparentlymart apparentlymart added config and removed core labels Feb 6, 2018
@apparentlymart
Contributor

Hi all,

Sorry I didn't spot this issue when we were releasing it, but the local_file data source I mentioned above has since been implemented, in local provider v1.1.0.

This data source gives a new way to read files that is a node in the dependency graph and thus it can have a dependency on some other resource that creates the file.

Unfortunately we're not totally out of the woods yet due to the need to implement something like the proposal in #17034 before this would be fully usable. In the meantime it's possible to use depends_on within the data "local_file" block, but it will create a "perma-diff" for the reasons discussed in #17034:

resource "null_resource" "example" {
  provisioner "local-exec" {
    command = "echo hello >${path.module}/hello.txt"
  }
}

data "local_file" "example" {
  filename = "${path.module}/hello.txt"
  depends_on = ["null_resource.example"]
}

output "example" {
  value = "${data.local_file.example.content}"
}

The above may work for you if you are willing to tolerate the "perma-diff" it will create for data.local_file.example. That's generally annoying in any sort of automated workflow though, and so we are planning to do something like #17034 once we get done with our current focus on improving the configuration language syntax and move on to addressing some issues with the CLI workflow.

@matti

matti commented Apr 16, 2018

@apparentlymart Doesn't that example explode if "${path.module}/hello.txt" is removed?

I finally managed to do this without the external data source by first creating an empty file with a null_file resource, then running the provisioner, and then reading the data with the local_file data source and storing the output in resource triggers (and then lifecycle-ignoring them). The "perma-diff" does happen on subsequent applies if the file is missing, but at least now I get provisioner output as requested here #6830
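A hedged sketch of the workaround described above, in 0.11-era syntax; it uses the local provider's local_file resource for the initial empty file, and all names and paths are hypothetical:

```hcl
# 1. Ensure the file exists before the first read.
resource "local_file" "placeholder" {
  content  = ""
  filename = "${path.module}/output.txt"
}

# 2. Run the provisioner that actually fills the file.
resource "null_resource" "generate" {
  depends_on = ["local_file.placeholder"]

  provisioner "local-exec" {
    command = "echo real-content > ${path.module}/output.txt"
  }
}

# 3. Read it back and pin the result in triggers, ignoring the
#    "perma-diff" this creates on subsequent applies.
data "local_file" "result" {
  filename   = "${path.module}/output.txt"
  depends_on = ["null_resource.generate"]
}

resource "null_resource" "capture" {
  triggers = {
    output = "${data.local_file.result.content}"
  }

  lifecycle {
    ignore_changes = ["triggers"]
  }
}
```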

@nhi-vanye

I have another specific use case regarding TLS.

Using TLS provider to create the certificates used to deploy a CoreOS environment with embedded consul & nomad.

After nomad is functional, it then registers some jobs with nomad.

This all worked - but then we wanted to run the consul and nomad clusters under TLS.

The TLS certificates are created and deployed to the hosts as part of the provisioning process, but the nomad provider needs a path to the CA file. At that point the nomad provider fails because the CA file is not yet present:

Error: Error running plan: 1 error(s) occurred:

* provider.nomad: failed to configure Nomad API: tls: failed to find any PEM data in certificate input

I don't have a workaround yet (except disabling TLS)

@joestump

Running into this with provisioning Lambda ZIP files as well. I need to use file() with source_code_hash to trigger updates.
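A sketch of that Lambda use-case (0.11-era syntax; the zip path, role, and handler are assumptions, with the zip built by an external step before Terraform runs):

```hcl
# Hypothetical Lambda deployment whose package is built outside Terraform.
resource "aws_lambda_function" "example" {
  function_name = "example"
  runtime       = "python3.6"
  handler       = "main.handler"
  role          = "${aws_iam_role.example.arn}"
  filename      = "${path.module}/lambda.zip"

  # file() here is evaluated at plan time, so this fails if
  # lambda.zip has not been built yet.
  source_code_hash = "${base64sha256(file("${path.module}/lambda.zip"))}"
}
```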

@af6140

af6140 commented Jun 6, 2018

This is a really desired feature. But stepping back, maybe we need null_resource to be able to run even in plan mode in some cases, to prepare all the needed local files.

Or we need global before/after hooks that run in both plan and apply mode.

@af6140

af6140 commented Jun 6, 2018

My use case is that I need to download a file from a remote source and provide the local file to an aws_lambda_function resource. Ideally the file could be downloaded (and verified) in plan mode, since it's an external resource.

@cablespaghetti

Same use case as above. Currently I'm wrapping terraform in a shell script which downloads all the Lambda .jar files using Maven. I'd really like to run the Maven command inside Terraform, but I haven't found a way that works. :(

@gaui

gaui commented Oct 31, 2018

Apparently this is scheduled for 0.12 - but is there any workaround for 0.11 train?

@msmans

msmans commented Dec 19, 2018

Not sure if something similar is mentioned anywhere, but my workaround for now is to make file() depend on the null_resource through interpolation. Everything works fine after this. Here's an example:

resource "null_resource" "ssh_key_pair" {
  provisioner "local-exec" {
    command = <<EOF
set -ex
ssh-keygen ... -f "${path.module}/id_rsa"
gcloud kms encrypt ... --ciphertext-file="${path.module}/id_rsa.enc"
rm -f "${path.module}/id_rsa"
EOF       
  }
}

resource "kubernetes_secret" "ssh_key" {
  data {
    id_rsa.enc = "${file(replace("${path.module}/id_rsa.enc*${null_resource.ssh_key_pair.id}", "/[*].*/", ""))}"
  }
}

Basically, I'm appending the ID of the null resource to force the file() argument to be computed, and then I'm stripping it off with replace().

@m4h3

m4h3 commented Jan 4, 2019

Thank you @msmans. This is working for me, and it's quite readable as TF workarounds go!

@agrzegorczyk-leonsoftware

agrzegorczyk-leonsoftware commented Jan 10, 2019

Thank you too, @msmans
Based on your solution I managed to develop a bit cleaner one, using null_data_source.
This is for lambda deployment with triggered, dynamic lambda package build:

locals {
  lambda-2fa-package-file = "${path.module}/lambda-2fa.zip"
}

data "external" "lambda-2fa-src-hash" {
  program = ["bash", "${path.module}/2fa/get-src-sha256.sh"]
}

resource "null_resource" "lambda-2fa-builder" {
  triggers = {
   src_hash = "${data.external.lambda-2fa-src-hash.result["sha256"]}"
  }

  provisioner "local-exec" {
    working_dir = "${path.module}/2fa/"
    command = "bash ./make-lambda-package.sh"
  }
}

# workaround to sync file creation
data "null_data_source" "lambda-2fa-builder-sync" {
  inputs {
    file = "${local.lambda-2fa-package-file}"
    trigger = "${null_resource.lambda-2fa-builder.id}" # this is for sync only
  }
}

resource "aws_lambda_function" "2fa" {
  runtime          = "python3.6"
  function_name    = "${local.lambda_2fa_name}"
  filename         = "${local.lambda-2fa-package-file}"
  role             = "${aws_iam_role.lambda-2fa.arn}"
  handler          = "main.lambda_handler"
  # there is lazy/late evaluated 'file'
  # this pattern is forced by another issue: https://github.com/hashicorp/terraform/issues/17173#issuecomment-360119040
  source_code_hash = "${base64sha256(file(data.null_data_source.lambda-2fa-builder-sync.outputs["file"]))}"
}

@teamterraform
Contributor

Hi all!

Terraform 0.12 now includes fileexists, as was discussed earlier in this issue.

As mentioned on the documentation page, this can now be used to conditionally access a file:

fileexists("custom-section.sh") ? file("custom-section.sh") : local.default_content

Over the years this issue seems to have grown to include a number of other things, but it seems like the remaining gap is represented by #17034, so we're going to close this one out to consolidate. If you are working with files that are generated as a side-effect of terraform apply then the local_file data source is still the best way to deal with that, since Terraform can then use the normal dependency graph to discover when the file is safe to read. The lack of #17034 still makes it awkward to represent that dependency, but we'll track that over in that issue.
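A slightly fuller 0.12-style sketch of the fileexists pattern above (the file path and default content are hypothetical):

```hcl
locals {
  default_content = "#!/bin/sh\necho default\n"
  custom_path     = "${path.module}/custom-section.sh"

  # fileexists() is checked first, so file() is only called
  # when the file is actually present on disk.
  section_content = fileexists(local.custom_path) ? file(local.custom_path) : local.default_content
}
```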

Thanks for the great discussion here, and sorry for the long delay since the last response to it.

@ghost

ghost commented Aug 19, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Aug 19, 2019