
init with 1.10 should work without -reconfigure after upgrade from 1.8 or 1.9 #36174

Open
schollii opened this issue Dec 7, 2024 · 6 comments
Labels: backend/s3, bug, new issue not yet triaged

Comments

schollii commented Dec 7, 2024

Terraform Version

1.10.1

Terraform Configuration Files

terraform {
  backend "s3" {
    bucket  = "your-bucket"
    region  = "us-east-1"
    encrypt = true

    dynamodb_table = "your-stack-locks"
    key            = "your-stack/terraform.tfstate"
  }
}

Debug Output

$ TF_LOG=trace terraform init
2024-12-07T21:49:39.474Z [INFO]  Terraform version: 1.10.1
2024-12-07T21:49:39.474Z [DEBUG] using github.com/hashicorp/go-tfe v1.70.0
2024-12-07T21:49:39.474Z [DEBUG] using github.com/hashicorp/hcl/v2 v2.23.0
2024-12-07T21:49:39.474Z [DEBUG] using github.com/hashicorp/terraform-svchost v0.1.1
2024-12-07T21:49:39.474Z [DEBUG] using github.com/zclconf/go-cty v1.15.1-0.20241111215639-63279be090d7
2024-12-07T21:49:39.474Z [INFO]  Go runtime version: go1.23.3
2024-12-07T21:49:39.475Z [INFO]  CLI args: []string{"terraform", "init"}
2024-12-07T21:49:39.475Z [TRACE] Stdout is a terminal of width 291
2024-12-07T21:49:39.475Z [TRACE] Stderr is a terminal of width 291
2024-12-07T21:49:39.475Z [TRACE] Stdin is a terminal
2024-12-07T21:49:39.475Z [DEBUG] Attempting to open CLI config file: /home/devops/.terraformrc
2024-12-07T21:49:39.475Z [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2024-12-07T21:49:39.476Z [DEBUG] ignoring non-existing provider search directory terraform.d/plugins
2024-12-07T21:49:39.476Z [DEBUG] ignoring non-existing provider search directory /home/devops/.terraform.d/plugins
2024-12-07T21:49:39.476Z [DEBUG] ignoring non-existing provider search directory /home/devops/.local/share/terraform/plugins
2024-12-07T21:49:39.476Z [DEBUG] ignoring non-existing provider search directory /usr/local/share/terraform/plugins
2024-12-07T21:49:39.477Z [DEBUG] ignoring non-existing provider search directory /usr/share/terraform/plugins
2024-12-07T21:49:39.478Z [INFO]  CLI command args: []string{"init"}
Initializing the backend...
2024-12-07T21:49:39.484Z [TRACE] Meta.Backend: built configuration for "s3" backend with hash value 3549647693
2024-12-07T21:49:39.484Z [TRACE] Meta.Backend: working directory was previously initialized for "s3" backend
2024-12-07T21:49:39.484Z [TRACE] backendConfigNeedsMigration: failed to decode cached config; migration codepath must handle problem: unsupported attribute "assume_role_duration_seconds"
2024-12-07T21:49:39.484Z [TRACE] Meta.Backend: backend configuration has changed (from type "s3" to type "s3")
Initializing modules...
2024-12-07T21:49:39.484Z [TRACE] ModuleInstaller: installing child modules for . into .terraform/modules
2024-12-07T21:49:39.487Z [DEBUG] Module installer: begin eks_main
2024-12-07T21:49:39.503Z [TRACE] ModuleInstaller: Module installer: eks_main <nil> already installed in .terraform/modules/eks_main
2024-12-07T21:49:39.503Z [DEBUG] Module installer: begin eks_main.cicd_bastion
2024-12-07T21:49:39.505Z [TRACE] ModuleInstaller: Module installer: eks_main.cicd_bastion <nil> already installed in /home/devops/kimera-cli/terraform/eks-main/cicd-bastion
2024-12-07T21:49:39.505Z [DEBUG] Module installer: begin eks_main.cloudwatch_agent_irsa
2024-12-07T21:49:39.507Z [TRACE] ModuleInstaller: Module installer: eks_main.cloudwatch_agent_irsa <nil> already installed in /home/devops/kimera-cli/terraform/eks-main/irsa
2024-12-07T21:49:39.507Z [DEBUG] Module installer: begin eks_main.eks_cluster
2024-12-07T21:49:39.552Z [TRACE] ModuleInstaller: Module installer: eks_main.eks_cluster 20.8.5 already installed in .terraform/modules/eks_main.eks_cluster
2024-12-07T21:49:39.553Z [DEBUG] Module installer: begin eks_main.eks_cluster.eks_managed_node_group
2024-12-07T21:49:39.575Z [TRACE] ModuleInstaller: Module installer: eks_main.eks_cluster.eks_managed_node_group <nil> already installed in .terraform/modules/eks_main.eks_cluster/modules/eks-managed-node-group
2024-12-07T21:49:39.575Z [DEBUG] Module installer: begin eks_main.eks_cluster.eks_managed_node_group.user_data
2024-12-07T21:49:39.579Z [TRACE] ModuleInstaller: Module installer: eks_main.eks_cluster.eks_managed_node_group.user_data <nil> already installed in .terraform/modules/eks_main.eks_cluster/modules/_user_data
2024-12-07T21:49:39.579Z [DEBUG] Module installer: begin eks_main.eks_cluster.fargate_profile
2024-12-07T21:49:39.583Z [TRACE] ModuleInstaller: Module installer: eks_main.eks_cluster.fargate_profile <nil> already installed in .terraform/modules/eks_main.eks_cluster/modules/fargate-profile
2024-12-07T21:49:39.583Z [DEBUG] Module installer: begin eks_main.eks_cluster.kms
2024-12-07T21:49:39.591Z [TRACE] ModuleInstaller: Module installer: eks_main.eks_cluster.kms 2.1.0 already installed in .terraform/modules/eks_main.eks_cluster.kms
2024-12-07T21:49:39.591Z [DEBUG] Module installer: begin eks_main.eks_cluster.self_managed_node_group
2024-12-07T21:49:39.623Z [TRACE] ModuleInstaller: Module installer: eks_main.eks_cluster.self_managed_node_group <nil> already installed in .terraform/modules/eks_main.eks_cluster/modules/self-managed-node-group
2024-12-07T21:49:39.623Z [DEBUG] Module installer: begin eks_main.eks_cluster.self_managed_node_group.user_data
2024-12-07T21:49:39.624Z [TRACE] ModuleInstaller: Module installer: eks_main.eks_cluster.self_managed_node_group.user_data <nil> already installed in .terraform/modules/eks_main.eks_cluster/modules/_user_data
2024-12-07T21:49:39.624Z [DEBUG] Module installer: begin eks_main.eks_cluster_auth
2024-12-07T21:49:39.625Z [TRACE] ModuleInstaller: Module installer: eks_main.eks_cluster_auth 20.8.5 already installed in .terraform/modules/eks_main.eks_cluster_auth/modules/aws-auth
2024-12-07T21:49:39.625Z [DEBUG] Module installer: begin eks_main.eks_cluster_autoscaler
2024-12-07T21:49:39.636Z [TRACE] ModuleInstaller: Module installer: eks_main.eks_cluster_autoscaler 2.2.0 already installed in .terraform/modules/eks_main.eks_cluster_autoscaler
2024-12-07T21:49:39.636Z [DEBUG] Module installer: begin eks_main.gen_deployment_config_files
2024-12-07T21:49:39.639Z [TRACE] ModuleInstaller: Module installer: eks_main.gen_deployment_config_files 0.5.1 already installed in .terraform/modules/eks_main.gen_deployment_config_files
2024-12-07T21:49:39.639Z [DEBUG] Module installer: begin eks_main.gen_testing_config_files
2024-12-07T21:49:39.642Z [TRACE] ModuleInstaller: Module installer: eks_main.gen_testing_config_files 0.5.1 already installed in .terraform/modules/eks_main.gen_testing_config_files
2024-12-07T21:49:39.642Z [DEBUG] Module installer: begin eks_main.get_cluster_helm_configs
2024-12-07T21:49:39.643Z [TRACE] ModuleInstaller: Module installer: eks_main.get_cluster_helm_configs <nil> already installed in .terraform/modules/eks_main.get_cluster_helm_configs
2024-12-07T21:49:39.643Z [DEBUG] Module installer: begin eks_main.vpc
2024-12-07T21:49:39.724Z [TRACE] ModuleInstaller: Module installer: eks_main.vpc 5.8.1 already installed in .terraform/modules/eks_main.vpc
2024-12-07T21:49:39.724Z [DEBUG] Module installer: begin eks_main.vpc_endpoints
2024-12-07T21:49:39.731Z [TRACE] ModuleInstaller: Module installer: eks_main.vpc_endpoints 5.8.1 already installed in .terraform/modules/eks_main.vpc_endpoints/modules/vpc-endpoints
2024-12-07T21:49:39.731Z [TRACE] modsdir: writing modules manifest to .terraform/modules/modules.json
╷
│ Error: Backend configuration changed
│ 
│ A change in the backend configuration has been detected, which may require migrating existing state.
│ 
│ If you wish to attempt automatic migration of the state, use "terraform init -migrate-state".
│ If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".

Expected Behavior

terraform init should just work with 1.10 after upgrading from 1.8 or 1.9. I'm guessing that upgrading to 1.10 from any version earlier than 1.8 will have the same issue, but I did not check.

Actual Behavior

It aborts with a confusing and unsettling error message:

│ Error: Backend configuration changed
│ 
│ A change in the backend configuration has been detected, which may require migrating existing state.
│ 
│ If you wish to attempt automatic migration of the state, use "terraform init -migrate-state".
│ If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".

When I first saw that, I wasn't sure which option to pick, since I had not changed anything in the backend config that could guide me toward the correct action.

After a web search I was lucky to find #36150. It turns out -reconfigure is required, in both directions: upgrading from pre-1.10 to 1.10, and going back from 1.10 to a prior version.
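For anyone else who lands here, this is the one-time re-initialization that resolved it for me. It's a minimal sketch and assumes the backend settings themselves have not changed, so (per the error text) no state migration is needed:

# Re-save the backend configuration under 1.10's internal format.
# Per the init error message, this stores the current configuration
# with no changes to the state itself.
terraform init -reconfigure

# Subsequent runs on 1.10 then work without extra flags.
terraform init
terraform plan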

Since 1.10 is a minor release, I'm reporting the requirement for -reconfigure as a bug:

  • Our CI/CD is broken: everyone and every system that uses terraform has to be upgraded to 1.10. This never used to be the case for minor releases; e.g., I could be running 1.8 on my system and 1.9 in CI/CD, or vice versa.
  • So we have to manually run init with -reconfigure from one machine so that the state in S3 gets updated, and then the terraform that runs in CI/CD also has to be upgraded to 1.10, otherwise it will give the same init error (and we can't skip the init -- the plan will fail saying init is required).
  • If there is any issue with the upgrade to 1.10 and we have to revert to 1.9 or 1.8, we will again need a manual -reconfigure.

From that other issue #36150, this seems to happen because terraform is doing some cleanup of obsolete backend arguments in its stored configuration. I am all for cleanups, but terraform 1.10 should not do this automatically as part of a minor release; it should be optional so we can do it as a separate action. If this were a major release, sure. But on a minor release, this SHOULD BE CONSIDERED A BUG. Add a new command-line switch instead, for example, so that we can do this cleanup when we are ready; we should be able to wait for the next major release even if that happens in 2 years. Make the cleanup mandatory only for a major release (tf 2.0).
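For context on what those obsolete args are: based on the unsupported attribute "assume_role_duration_seconds" in the trace above, the S3 backend's old flat assume-role arguments were, as I understand it, replaced by a nested assume_role block. The sketch below is only illustrative (the role ARN is made up and the exact set of removed attributes may differ); note that my config above never used these arguments, yet the error still appears, because the cached backend config was saved under the old schema.

# Old flat arguments (deprecated, and as I understand it rejected by the 1.10 S3 backend):
terraform {
  backend "s3" {
    bucket                       = "your-bucket"
    key                          = "your-stack/terraform.tfstate"
    region                       = "us-east-1"
    role_arn                     = "arn:aws:iam::123456789012:role/terraform-deploy"
    assume_role_duration_seconds = 3600
  }
}

# New nested block form:
terraform {
  backend "s3" {
    bucket = "your-bucket"
    key    = "your-stack/terraform.tfstate"
    region = "us-east-1"

    assume_role {
      role_arn = "arn:aws:iam::123456789012:role/terraform-deploy"
      duration = "1h"
    }
  }
}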

Steps to Reproduce

This is already documented in the other ticket, but I'll repeat it here (a shell sketch follows the list):

  1. create a folder with a main.tf, put anything you want in it, plus the backend pointing to S3 per above
  2. install tf 1.8 or 1.9
  3. tf init
  4. now upgrade to tf 1.10 (e.g., tfswitch --latest-stable 1.10)
  5. tf init: see the confusing error
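A rough shell version of the same steps (version numbers and tfswitch invocations are just examples of what I used; adjust to your setup):

# steps 1-3: initialize with a pre-1.10 release
tfswitch 1.9.8              # any 1.8.x or 1.9.x release
terraform init              # succeeds and caches the backend config

# steps 4-5: switch to 1.10 and re-run init
tfswitch --latest-stable 1.10
terraform init              # fails with "Backend configuration changed"

# workaround (see above):
terraform init -reconfigure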

Additional Context

No response

References

schollii added the bug and new issue not yet triaged labels Dec 7, 2024
jbardin (Member) commented Dec 9, 2024

Hi @schollii,

Thanks for filing the issue. Yes, that error message isn't great, because it's trying to cover a generic situation for any backend implementation and doesn't know why a particular backend may have changed its stored data structures.

The storage backends are developed independently from the core codebase, although they still need to be bundled into the CLI binary. This poses a bit of a problem with coordinating upgrade cycles, and is something we want to be able to decouple. This is why the state storage backends are not included in v1 compatibility guarantees, allowing the backend developers to continue development without being tied forever to the CLI release cycle.

The S3 backend is also called out in the upgrade guide as requiring -reconfigure during the upgrade process for this version. I don't know if it would be possible for the AWS team to complete the update process to the new libraries and configuration without eventually needing -reconfigure, but it's not unreasonable to assume that some backend at some point would require that, which is why we don't limit the process.

I'm not sure what a resolution here would be at this point: v1.10 is already released with the change in data structures, and going back would take yet another release and multiple more rounds of -reconfigure.

Gladskih commented Dec 9, 2024

I have faced a similar issue in Amazon China. It works well with 1.10.0, but I get "Backend configuration changed" with 1.10.1.
I have no time to investigate deeper, so I am writing it here just for your information as a possibly important detail.

crw (Contributor) commented Dec 9, 2024

Thanks for the report @schollii. I do appreciate the feedback and will deliver it to the AWS provider team at HashiCorp, as well as our product management team. Per @jbardin's comment above, there is nothing to be done about this specific reported instance other than relaying it as process feedback for the future, so I am going to close this issue. Please feel free to let me know if you think there is some other resolution that is possible.

crw closed this as not planned Dec 9, 2024
schollii (Author) commented Dec 9, 2024

Yes, I was thinking that 1.10.2 would not automatically clean up the backend state, and would add an extra option for when this is desired. Then people would know to skip 1.10.0 and 1.10.1 if the -reconfigure required in 1.10.1 is a problem.

To be honest, I'm guessing that this kind of change is likely to happen again, so a way for you to deprecate something in the backend and give users time to switch is likely to be useful more than just this once.

crw reopened this Dec 10, 2024
jbardin (Member) commented Dec 10, 2024

At this point the config structure can't be changed back immediately, because that would force all users who have already upgraded to reconfigure during a patch release, then again during the next point release. I think all we can do right now is try to make sure the backend developers are more vocal about pending deprecations. Better deprecation processes alone won't provide a solution here either though, because the change in internal backend structure is what forced the reconfiguration, not whether you have already upgraded your configuration to remove deprecated attributes.

I think it really comes down to having backends extracted from core entirely so that their development and upgrade cycles are not tied together. Aside from the other technical reasons we want that, it would better match users' expectations around version upgrades, similar to those for providers.

crw (Contributor) commented Dec 10, 2024

Linking to #5877
