terraform: Error during apply - diffs didn't match during apply #21012
Hi @Aussie007! Sorry for this error, and thanks for reporting it. Could you please share the relevant parts of your configuration? This error suggests a bug in whichever other resource type changed its outputs between the plan and apply phases, since a provider is required to produce an accurate plan in order for Terraform to complete successfully. By looking at your configuration I'm hoping to learn which provider contains the resource type in question (I expect it's some other resource type in the AWS provider, but I want to confirm that first) so that we can file this issue against whichever provider has that bug. If there are no references to other resource types in your configuration, please let me know that too. Thanks!
No problem. I use a "data" resource to get the available AWS AZs in the current region:

```hcl
data "aws_availability_zones" "available" {}
```

This is then referenced by the ASG:

```hcl
resource "aws_autoscaling_group" "as_node_group" {
  name_prefix               = "${var.region_prefix}-${var.icao_code}-worker-nodes-node-group-"
  availability_zones        = ["${data.aws_availability_zones.available.names[0]}", "${data.aws_availability_zones.available.names[1]}", "${data.aws_availability_zones.available.names[2]}"]
  desired_capacity          = 3
  max_size                  = 5
  min_size                  = 2
  health_check_type         = "EC2"
  health_check_grace_period = 0
  vpc_zone_identifier       = ["${aws_subnet.subnet_private_01.id}", "${aws_subnet.subnet_private_02.id}", "${aws_subnet.subnet_private_03.id}"]
  launch_configuration      = "${aws_launch_configuration.as_lc_node_group.id}"

  tags = [
    {
      key                 = "Name"
      value               = "${var.region_prefix}-${var.icao_code}-worker-nodes"
      propagate_at_launch = true
    },
    {
      key                 = "kubernetes.io/cluster/${var.region_prefix}-${var.icao_code}"
      value               = "owned"
      propagate_at_launch = true
    },
  ]

  lifecycle {
    create_before_destroy = true
  }

  depends_on = ["aws_eks_cluster.eks"]
}
```

Let me know if you need any further information. Cheers!
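As an aside, the three indexed references to the AZ names can be written more compactly after upgrading to Terraform 0.12. This is only a sketch, assuming the region reports at least three available zones; the `state` filter is an optional argument of this data source:

```hcl
data "aws_availability_zones" "available" {
  # Only consider zones that are currently usable.
  state = "available"
}

resource "aws_autoscaling_group" "as_node_group" {
  # ... other arguments as in the configuration above ...

  # Take the first three zone names in one expression (0.12 syntax).
  availability_zones = slice(data.aws_availability_zones.available.names, 0, 3)
}
```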
Just one more detail: when I run `terraform apply` a second time, there are no errors and all changes are applied fine. Cheers!
Hi @Aussie007! Thanks for the extra context. We've not seen this issue arise with data resources before, so this appears to be a new problem. I'm going to keep this in the Terraform Core repository (this one) for now until we can learn more about what is going on here, since based only on the error message it seems like somehow the data result changed between plan and apply, and yet that isn't supposed to be possible.
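One way to rule the data source in or out as the moving part is to temporarily pin the zone names in a variable, so the value cannot change between plan and apply. This is a diagnostic sketch only, with hypothetical zone names that would need to match the region in use:

```hcl
# Hypothetical zone names; replace with the zones of your region.
variable "availability_zones" {
  type    = "list"
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

resource "aws_autoscaling_group" "as_node_group" {
  # ... other arguments unchanged ...
  availability_zones = "${var.availability_zones}"
}
```

If the error disappears with the pinned list, that points at the data source result changing between phases.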
I hit a similar error with a VM on Azure:
Should I submit all of the additional gory details here, or in another (new?) issue?
Hi @jlucktay! If your configuration for that VM includes a data resource whose result could change between the plan and apply steps, this may be the same issue; if not, it's more likely a bug in the provider itself. If the latter, I'd suggest opening an issue about it in that provider's repository. (Note that it's the provider that produces the plan for each resource, so inaccurate diffs of this sort generally originate there.)
Hi @apparentlymart, I'm hitting a similar issue with terraform-provider-nsxt, when removing a nested object on a resource attribute that forces replacement (ForceNew). As @Aussie007 mentioned above, the second apply succeeds. Data sources are not involved in my case, though.
Hi @annakhm, since in your case data sources are not involved, that seems like a different problem. I'd suggest opening an issue in the terraform-provider-nsxt repository.
The test is failing due to what seems to be an issue in core Terraform: hashicorp/terraform#21012. This patch is a workaround to avoid test failure while the issue is investigated.
Hi @apparentlymart, thanks for the quick reply. Does it make sense that the second apply succeeds if the issue is in the provider?
Yes, this issue will generally tend to resolve itself on a second apply regardless of the root cause, because it results from something unexpected happening during the apply phase; on the second run that unexpected thing has already happened, and so the plan is correct on the first try.
Just got this:
Hi all, I do appreciate your taking the time to look for similar issues when reporting further occurrences of "diffs didn't match during apply". Unfortunately, in this particular case there are at least two different classes of bug that produce this same error, so adding more reports here without all of the context requested in our issue template doesn't give us enough information to determine where the problem lies, and thus to triage it into the appropriate codebase.

The best thing to do when you get an error of this type is to open an issue in the repository for the provider involved. If you see a resource type named in the error output, that usually identifies which provider is responsible.

If it's not clear from reading your configuration which resource is being used to resolve that value, then the second best thing to do is to open a new bug report issue and include all of the information requested in the template. We ask for that information because it gives us what we need in order to triage the problem and see whether it's a bug in a specific provider or a general bug in Terraform. The error message alone unfortunately does not give enough information for full triage.

(The situation will change here once Terraform 0.12 is released. We've moved the check that displays this error into a different codepath so that it has more information available and can report the specific resource type and provider the problem originated from. Hopefully it will then be much easier to triage this class of problem into the appropriate codebase.)
Just providing an update on this. I have updated Terraform and my AWS provider:

I still have the issue, but it appears to be reported slightly differently (possibly due to the new error handling in Terraform v0.12.x).

This now suggests the issue is within the provider. Do you want me to close this issue and open a new one in the AWS provider's GitHub repository? Regards
Hi @Aussie007, thanks for the update!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
Terraform Version
Expected Behavior
The `terraform apply` should have completed with no errors.
Actual Behavior
Received an error during apply:
Steps to Reproduce
terraform apply
Additional Context
Using AWS Provider v2.6.0
I have an AWS Auto Scaling Group with a `depends_on` on an AWS EKS Cluster.
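The dependency described above can be reduced to a minimal sketch (placeholder arguments elided; names match the configuration shared earlier in the thread):

```hcl
resource "aws_eks_cluster" "eks" {
  # ... cluster configuration ...
}

resource "aws_autoscaling_group" "as_node_group" {
  # ... group configuration ...

  # Explicit ordering: create the worker-node group only after
  # the EKS cluster exists.
  depends_on = ["aws_eks_cluster.eks"]
}
```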