Terraform plan fails with "Invalid provider configuration was supplied" warning when changing cluster subnets #1707

Closed
gpothier opened this issue Nov 29, 2021 · 6 comments · Fixed by #1680

Comments

gpothier commented Nov 29, 2021

Description

When adding a new subnet, replacing one of the existing ones, or removing a subnet, Terraform fails to plan. It seems the provider configuration is not properly initialized: when refreshing the aws_auth configmap, it tries to access the k8s API server at localhost instead of the proper host. Reordering subnets does not cause the issue. The new subnet itself was created by Terraform in a previous apply, without any problem.

EDIT: I have now created a completely stripped-down test case to easily reproduce the issue: https://github.com/gpothier/terraform-aws-eks-issue-1707. In particular, it no longer uses a submodule; everything is in the root module. This is pretty much by-the-book usage of the EKS module.
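For context, the by-the-book wiring mentioned above typically points the kubernetes provider at the module's outputs, roughly as in this sketch (the data source and output names are assumptions based on the common v17-era pattern; the module instance name eks_cluster matches the error output below):

data "aws_eks_cluster" "cluster" {
  name = module.eks_cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks_cluster.cluster_id
}

provider "kubernetes" {
  # When these data sources cannot be resolved during plan (e.g. while the
  # cluster is pending replacement), the provider is left unconfigured and
  # falls back to localhost, which matches the warning in the trace below.
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}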

Versions

  • Terraform: v1.0.11
  • Provider(s):
    • registry.terraform.io/hashicorp/aws v3.67.0
    • registry.terraform.io/hashicorp/cloudinit v2.2.0
    • registry.terraform.io/hashicorp/helm v2.4.1
    • registry.terraform.io/hashicorp/kubernetes v2.6.1
    • registry.terraform.io/hashicorp/local v2.1.0
    • registry.terraform.io/hashicorp/null v3.1.0
    • registry.terraform.io/hashicorp/random v3.1.0
    • registry.terraform.io/hashicorp/template v2.2.0
    • registry.terraform.io/terraform-aws-modules/http v2.4.1
  • Module:

Reproduction

Steps to reproduce the behavior:

EDIT:
Clone this project: https://github.com/gpothier/terraform-aws-eks-issue-1707
Plan & apply a first time, then change the value of autoscaling_azs in terraform.tfvars from 2 to 3, and attempt to plan again. It will fail. It also fails if initially applying with 3 subnets and then changing to 2.

Expected behavior

Terraform should be able to produce a plan regardless of the subnets used in the cluster.

Actual behavior

Terraform fails to produce a plan

Terminal Output Screenshot(s)

Terraform plan final output:

╷
│ Error: the server is currently unable to handle the request (get configmaps aws-auth)
│ 
│   with module.couchdb_cluster.module.eks_cluster.kubernetes_config_map.aws_auth[0],
│   on .terraform/modules/couchdb_cluster.eks_cluster/aws_auth.tf line 63, in resource "kubernetes_config_map" "aws_auth":
│   63: resource "kubernetes_config_map" "aws_auth" {
│ 
╵

And this appears somewhere in the middle of the output with TF_LOG=TRACE:

-----------------------------------------------------: timestamp=2021-11-29T16:16:10.047-0300
2021-11-29T16:16:10.058-0300 [DEBUG] provider.terraform-provider-kubernetes_v2.6.1_x5: 2021-11-29T16:16:10.058-0300 [TRACE] [PrepareProviderConfig][Request]
2021-11-29T16:16:10.058-0300 [DEBUG] provider.terraform-provider-kubernetes_v2.6.1_x5: %s
2021-11-29T16:16:10.058-0300 [DEBUG] provider.terraform-provider-kubernetes_v2.6.1_x5: : EXTRA_VALUE_AT_END={0xc0007623f0}
2021-11-29T16:16:10.059-0300 [WARN]  ValidateProviderConfig from "provider[\"registry.terraform.io/hashicorp/kubernetes\"].couchdb" changed the config value, but that value is unused
2021-11-29T16:16:10.059-0300 [TRACE] GRPCProvider: ConfigureProvider
2021-11-29T16:16:10.059-0300 [DEBUG] provider.terraform-provider-kubernetes_v2.6.1_x5: 2021/11/29 16:16:10 [WARN] Invalid provider configuration was supplied. Provider operations likely to fail: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2021-11-29T16:16:10.059-0300 [DEBUG] provider.terraform-provider-kubernetes_v2.6.1_x5: 2021/11/29 16:16:10 [DEBUG] Enabling HTTP requests/responses tracing
2021-11-29T16:16:10.060-0300 [DEBUG] provider.terraform-provider-kubernetes_v2.6.1_x5: 2021-11-29T16:16:10.060-0300 [ERROR] [Configure]: Failed to load config:="&{0xc00192c820 0xc00091e240 <nil> 0xc0000ee480 {0 0} 0xc00192ba60}"
2021-11-29T16:16:10.060-0300 [TRACE] vertex "provider[\"registry.terraform.io/hashicorp/kubernetes\"].couchdb": visit complete
2021-11-29T16:16:10.060-0300 [TRACE] vertex "module.main_cluster.module.iam_assumable_role_admin.data.aws_iam_policy_document.assume_role_with_oidc": expanding dynamic subgraph
2021-11-29T16:16:10.060-0300 [TRACE] Executing graph transform *terraform.ResourceCountTransformer
2021-11-29T16:16:10.060-0300 [TRACE] ResourceCountTransformer: adding module.main_cluster.module.iam_assumable_role_admin.data.aws_iam_policy_document.assume_role_with_oidc[0] as *terraform.NodePlannableResourceInstance
2021-11-29T16:16:10.060-0300 [TRACE] Completed graph transform *terraform.ResourceCountTransformer with new graph:
  module.main_cluster.module.iam_assumable_role_admin.data.aws_iam_policy_document.assume_role_with_oidc[0] - *terraform.NodePlannableResourceInstance
  ------
@gpothier gpothier changed the title "Invalid provider configuration was supplied" warning when changing cluster subnets Terraform plan fails with "Invalid provider configuration was supplied" warning when changing cluster subnets Nov 29, 2021

github-actions bot commented Jan 1, 2022

This issue has been automatically marked as stale because it has been open for 30 days with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

antonbabenko (Member) commented

This issue has been resolved in version 18.0.0 🎉
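Picking up the fix means bumping the module's version constraint, roughly as in this sketch (the module block shape and inputs are assumed; note from the follow-up comments that the v17-to-v18 upgrade involves breaking changes):

module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"  # release line containing the fix

  # ... cluster inputs ...
}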


gpothier commented Jan 7, 2022

I can confirm that with v18.0.1, terraform plan completes without error; however, the apply fails because the cluster must be replaced, and Terraform tries to create the new cluster before destroying the old one:

Cluster already exists with name: terraform-issue-1707-eks

Should I open a new issue, or reopen this one?

bryantbiggs (Member) commented

@gpothier it's unclear what issue you are facing and what desired outcome you are trying to achieve. If you are still facing an issue on v18.x, I would suggest opening a new issue.


gpothier commented Jan 7, 2022

OK, created issue #1752.

github-actions bot commented

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 15, 2022