Module does not refresh data sources unless applied with a target #26109
I think I face the same issue. When changing the instance count from 2 to 4, the following data source is not refreshed:

```hcl
data "null_data_source" "nodes" {
  count = var.nodes_count

  inputs = {
    id          = openstack_compute_instance_v2.instance[count.index].id
    internal_ip = openstack_compute_instance_v2.instance[count.index].access_ip_v4
    floating_ip = var.assign_floating_ip ? openstack_networking_floatingip_v2.floating_ip[count.index].address : ""
  }
}

output "nodes" {
  value = zipmap(openstack_compute_instance_v2.instance[*].name, data.null_data_source.nodes[*].outputs)
}
```

Note that when using an interpolated value in the data source it triggers a read (but needs confirmation to continue).
Same thing here. I have an existing configuration:

```hcl
data "aws_iam_policy_document" "deploy_to_sub_account" {
  for_each  = aws_organizations_account.sub_accounts
  policy_id = "DeployToSubAccount-${each.value.id}"

  statement {
    sid       = "AssumeRoleDeployment"
    effect    = "Allow"
    actions   = ["sts:AssumeRole"]
    resources = ["arn:aws:iam::${each.value.id}:role/role_deployment"]
  }
}

resource "aws_iam_policy" "deploy_to_sub_account" {
  for_each    = aws_organizations_account.sub_accounts
  name        = "deploy_to_sub_account-${each.value.name}"
  path        = "/"
  description = "Allow to take the role_deployment on sub-account ${each.value.id}-${each.value.name}"
  policy      = data.aws_iam_policy_document.deploy_to_sub_account[each.key].json
}
```

As you can see, the data source is referenced by the resource. The three attributes mentioned are the ones that are already in the state. I'm actually trying to add one (I have added one element to the collection used in `for_each`).
I think I found an ugly workaround. The problem is in the dependency resolution (resources that need the new data element), so the solution is to do a targeted apply to create just the new data element (and its upstream dependencies) first. After that, I can apply the rest.
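The workaround described above can be sketched as two CLI invocations; the resource address here is hypothetical and depends on your configuration:

```shell
# First, create only the new data source element (plus its upstream
# dependencies) with a targeted apply; the address below is invented
# for illustration.
terraform apply -target='data.null_data_source.nodes[3]'

# Then apply the remaining changes normally.
terraform apply
```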
@gbataille Yeah, that's precisely what I stated in the subject of the issue: if you apply it with a target, it will work after that. Although I do not think this is the right way to go...
I had not understood it like that. Agreed that's not the way to go, but it unblocked me.
@favoretti Thanks for reporting this issue! We are working on changes related to this, and while I'm not sure that this use case will be fixed, I'd like to be able to understand this issue. I've tried to reproduce it with local-only providers, and I haven't been able to do so. Can you help? Here's my configuration:

```hcl
variable "get_secrets" {
  type    = set(string)
  default = ["foo"]
}

data "null_data_source" "secrets" {
  for_each = var.get_secrets

  inputs = {
    secret = sha256(each.value)
  }
}

output "foo" {
  value = data.null_data_source.secrets["foo"].outputs.secret
}
```

This works as expected:

```
$ terraform-0.13.2 apply -auto-approve
data.null_data_source.secrets["foo"]: Refreshing state...

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

foo = 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae
```

If I then add a new output to the configuration like so:

```hcl
output "bar" {
  value = data.null_data_source.secrets["bar"].outputs.secret
}
```

and run an apply including the new value:

```
$ terraform-0.13.2 apply -auto-approve -var 'get_secrets=["foo", "bar"]'
data.null_data_source.secrets["bar"]: Refreshing state...
data.null_data_source.secrets["foo"]: Refreshing state... [id=static]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

bar = fcde2b2edba56bf408601fb721fe9b5c338d10ee429ea04fae5511b68fbf8fb9
foo = 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae
```

Can you change this simple example to be more like yours, so that it reproduces the issue you're seeing? Failing that, can you provide a more minimal example of the configuration you're using which exhibits the bug?
I'll try to create a dummy scenario to reproduce it in the coming day or two unless someone beats me to it.
Hi @favoretti, I have a feeling that this is being caused by the fact that (up until 0.14) data sources were read during refresh whenever possible, so they would only reflect the existing state before any resources were applied. Using `-target` in your case was causing something in the configuration to be shown as unknown, forcing the data source to be read again during apply.

The upcoming 0.14 release should greatly improve the ability to more accurately evaluate data sources during plan, possibly allowing this to work if a reference were added from the data source to the managed resource. Thanks!
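A minimal, hypothetical sketch of such a reference (resource names invented for illustration): because the data source below refers to an attribute of the managed resource, Terraform must defer reading it until the resource is up to date, instead of reading it during refresh against stale state.

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"
}

# The reference to aws_s3_bucket.example.bucket creates a dependency, so
# this data source read is deferred until the bucket has been created or
# updated, rather than being resolved from the prior state.
data "aws_s3_bucket" "example" {
  bucket = aws_s3_bucket.example.bucket
}
```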
@jbardin I am running into the same problem @gbataille had: #26109 (comment). Are you sure you want to write this off as "works as designed"? Or do you want to open a new ticket about it, since it is different from the OP's issue? @gbataille and I are having a problem where the data source is a pure data source, not referencing an existing resource, and it is being referenced from the resource that needs it, so it should trigger an update.
It could be debated that the concept of data sources being read during refresh was a bug in the initial design, but that is one of the reasons we have removed the refresh phase altogether in 0.14. Having an independent data source and managed resource that represent the same actual resource has never been fully supported, because the data source always needed to be read, and hence the configuration fully evaluated, before anything is planned or applied.

This is mostly speculation, because the issue was left without a clear reproduction and no debugging information. If you have a way to reproduce your issue, please feel free to open a new one. It would be great if you could also try this on the latest 0.14 release, since that will help eliminate many common known issues. Thanks!
@jbardin I experienced this in a full-blown complicated project, but have not been able to reproduce it in a small test case yet. If you have an idea why that might be, please let me know. My case was exactly like @gbataille reported in #26109 (comment), except I was going from 2 to 4 instances instead of from 3 to 4. Going from 1 to 2 in the successful test case, I see the data source reads in the plan output. In the complicated real case, the corresponding lines are missing from the output.
@Nuru, I don't doubt you are experiencing an issue, and I can definitely sympathize with the complexity of troubleshooting these. Unfortunately, the information provided here doesn't narrow down the possibilities in any significant way. The vast majority of questions with this type of behavior are configuration issues, often combined with a misunderstanding of details in how data sources work. If a data source is not being read when expected, there is a dependency of some sort deferring that read. The logs may provide more info, but we usually need that paired with the full configuration to see what is actually being fed into the data source configuration to know for sure.

Hopefully the improvements in 0.14 will alleviate a lot of this confusion, because data sources will always be evaluated with the planning information available, which was not previously possible.
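For reference, logs like the ones mentioned above can be captured with Terraform's standard debugging environment variables:

```shell
# TF_LOG controls log verbosity (TRACE is the most verbose);
# TF_LOG_PATH writes the log to a file instead of stderr.
TF_LOG=TRACE TF_LOG_PATH=./plan.log terraform plan
```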
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform version

Given the following module piece:

Used in a plan like:

The data source in the `get_secrets` list will just be refreshed once. If I add more values to it, let's say `clientsecret-portal-dev`, and try to use it later as:

This will fail, saying that `kv_fdx_external` just contains the initial number of elements.

Use case: allow use of manually added secrets in Terraform code.

What am I doing wrong?