
17.24.0 - Error: configmaps "aws-auth" already exists #1702

Closed
qxmips opened this issue Nov 24, 2021 · 8 comments · Fixed by #1680

Comments

qxmips commented Nov 24, 2021

Description

The eks module v17.24.0 fails to create module.eks.kubernetes_config_map.aws_auth[0]:

Error: configmaps "aws-auth" already exists
with module.eks.kubernetes_config_map.aws_auth[0]
on .terraform/modules/eks/aws_auth.tf line 63, in resource "kubernetes_config_map" "aws_auth":
resource "kubernetes_config_map" "aws_auth" {

Versions

  • Terraform: v1.0.11 on linux_amd64
  • Provider(s):
    • registry.terraform.io/hashicorp/aws v3.66.0
    • registry.terraform.io/hashicorp/cloudinit v2.2.0
    • registry.terraform.io/hashicorp/kubernetes v2.6.1
    • registry.terraform.io/hashicorp/local v2.1.0
    • registry.terraform.io/hashicorp/random v3.1.0
    • registry.terraform.io/hashicorp/template v2.2.0
    • registry.terraform.io/hashicorp/tls v3.1.0
    • registry.terraform.io/terraform-aws-modules/http v2.4.1
  • Module: terraform-aws-modules/eks/aws v17.24.0

Reproduction

Steps to reproduce the behavior:
  1. using an AWS cloud workspace
  2. terraform plan
  3. terraform apply

Code Snippet to Reproduce


module "eks" {
  source                          = "terraform-aws-modules/eks/aws"
  version                         = "17.24.0"
  cluster_name                    = local.cluster_name
  cluster_version                 = var.cluster_version
  vpc_id                          = local.vpc_id
  subnets                         = setunion(local.private_subnets_ids, local.public_subnets_ids)
  write_kubeconfig                = var.write_kubeconfig
  enable_irsa                     = true
  manage_worker_iam_resources     = true
  cluster_endpoint_private_access = var.enable_cluster_endpoint_private_access
  cluster_endpoint_public_access  = var.enable_cluster_endpoint_public_access
  cluster_enabled_log_types       = var.cluster_enabled_log_types
  cluster_log_retention_in_days   = var.cluster_log_retention_in_days
  cluster_delete_timeout          = "60m"
  map_users                       = var.map_users
  map_roles                       = concat(var.map_roles, [local.cluster_admins])

  node_groups_defaults = {
    desired_capacity = 2
    max_capacity     = 3
    min_capacity     = 1
    instance_types   = var.instance_types
    key_name         = local.worker_key
    capacity_type    = "SPOT"

    update_config = {
      max_unavailable_percentage = 50
    }
  }

  node_groups = {

    "a" = {
      subnets     = data.aws_subnet_ids.private_subnets
      name_prefix = "spot-"
    }
  }
}

Terminal Output Screenshot(s)

Error: configmaps "aws-auth" already exists
with module.eks.kubernetes_config_map.aws_auth[0]
on .terraform/modules/eks/aws_auth.tf line 63, in resource "kubernetes_config_map" "aws_auth":
resource "kubernetes_config_map" "aws_auth" {

Additional context

The same code works with previous module versions.
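Not mentioned in this thread, but a commonly used recovery step once the error has occurred is to import the already-created config map into Terraform state, so the next apply updates it instead of trying to create it. The resource address below is taken from the error output above; this is a workaround sketch, not the maintainers' fix:

```shell
# Import the aws-auth config map that the managed node group already
# created, so Terraform manages it instead of failing with
# "configmaps \"aws-auth\" already exists".
# Single quotes keep the shell from interpreting the [0] index.
terraform import 'module.eks.kubernetes_config_map.aws_auth[0]' kube-system/aws-auth
```

After the import, `terraform plan` should show an in-place update of the config map rather than a create.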

@daroga0002 (Contributor)
Is this an issue for an existing cluster, or one created from scratch?

qxmips commented Nov 24, 2021

@daroga0002 creating from scratch

@lerrigatto
I am facing a similar issue: my apply fails due to a timeout, and I cannot re-apply because the aws-auth config map already exists.

@daroga0002 (Contributor)
I tried to replicate this via the example:
https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/managed_node_groups

but I don't get this error. Could you try using this example and replicating the problem on your side?

@github-actions (bot)
This issue has been automatically marked as stale because it has been open for 30 days
with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

github-actions bot added the stale label on Dec 26, 2021
sleterrier commented Dec 31, 2021

@daroga0002: please correct me if I'm mistaken, but 56e93d7 (shipped in v17.24) removed the explicit dependency between the custom kubernetes_config_map.aws_auth and the module.node_group. Therefore, if the node group's nodes join the cluster before the cluster endpoint becomes responsive (kubernetes_config_map.aws_auth depends on data.http.wait_for_cluster[0]), the aws-auth configMap is created and populated by the managed node group before Terraform gets to it.

I suspect the race condition is not triggered for most users because the cluster endpoint resolves almost immediately after the EKS cluster is created, while node group creation takes minutes. But for those creating private EKS clusters, name resolution can take multiple minutes to converge, leading to the above.
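The ordering described above can be restored by hand on v17.x. A minimal sketch, assuming aws-auth is managed outside the module (the module's manage_aws_auth toggle set to false) and the node group is created as a standalone resource; aws_iam_role.node is hypothetical, and this is not the fix that shipped in v18:

```hcl
# Sketch only: create the aws-auth config map before any managed node
# group exists, so EKS never creates the config map first and Terraform
# never hits "already exists".
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode(var.map_roles)
  }
}

resource "aws_eks_node_group" "spot" {
  cluster_name  = module.eks.cluster_id
  node_role_arn = aws_iam_role.node.arn # hypothetical node role
  subnet_ids    = local.private_subnets_ids

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }

  # The explicit ordering removed in 56e93d7: nodes may join only after
  # Terraform owns the config map.
  depends_on = [kubernetes_config_map.aws_auth]
}
```

With this layout the race cannot occur, because the managed node group is not created until the config map is already in state.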

@antonbabenko (Member)
This issue has been resolved in version 18.0.0 🎉

@github-actions (bot)
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators on Nov 15, 2022