Unable to create kubernetes config map when using the alias in kubernetes provider #2862

Closed
shettypriy opened this issue Dec 29, 2023 · 8 comments

Comments

@shettypriy

shettypriy commented Dec 29, 2023

Versions

  • Module version [Required]: 19.16.0
  • Terraform version: 1.6.4
  • Provider version(s): hashicorp/kubernetes 2.23.0

Reproduction Code [Required]

provider "kubernetes" {
alias = "atlantiseks"
host = module.atlantis_eks_cluster.cluster_endpoint
cluster_ca_certificate = base64decode(module.atlantis_eks_cluster.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.atlantis.token
}

Steps to reproduce the behavior:

Default Yes

Unable to create the kubernetes_config_map_v1_data.aws_auth[0] resource for the EKS cluster when using an alias in the kubernetes provider block.

Logs:
It just shows "Creating..." but it never actually creates the aws-auth config map.

module.atlantis_eks_cluster.kubernetes_config_map_v1_data.aws_auth[0]: Creating...
module.atlantis_eks_cluster.kubernetes_config_map_v1_data.aws_auth[0]: Still creating... [10s elapsed]
module.atlantis_eks_cluster.kubernetes_config_map_v1_data.aws_auth[0]: Still creating... [20s elapsed]
module.atlantis_eks_cluster.kubernetes_config_map_v1_data.aws_auth[0]: Still creating... [30s elapsed]


│ Error: The configmap "aws-auth" does not exist

│ with module.atlantis_eks_cluster.kubernetes_config_map_v1_data.aws_auth[0],
│ on .terraform/modules/atlantis_eks_cluster/main.tf line 554, in resource "kubernetes_config_map_v1_data" "aws_auth":
│ 554: resource "kubernetes_config_map_v1_data" "aws_auth" {

@bryantbiggs
Member

Please provide a reproduction

@shettypriy
Author

shettypriy commented Dec 29, 2023

We are trying to set up an EKS cluster using the following provider block, in which we use an alias for the kubernetes provider.

provider "kubernetes" {
alias = "atlantiseks"
host = module.atlantis_eks_cluster.cluster_endpoint
cluster_ca_certificate = base64decode(module.atlantis_eks_cluster.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.atlantis.token
}

Following are the logs when apply is run:

module.atlantis_eks_cluster.kubernetes_config_map_v1_data.aws_auth[0]: Creating...
module.atlantis_eks_cluster.kubernetes_config_map_v1_data.aws_auth[0]: Still creating... [10s elapsed]
module.atlantis_eks_cluster.kubernetes_config_map_v1_data.aws_auth[0]: Still creating... [20s elapsed]
module.atlantis_eks_cluster.kubernetes_config_map_v1_data.aws_auth[0]: Still creating... [30s elapsed]

╷
│ Error: The configmap "aws-auth" does not exist
│
│ with module.atlantis_eks_cluster.kubernetes_config_map_v1_data.aws_auth[0],
│ on .terraform/modules/atlantis_eks_cluster/main.tf line 554, in resource "kubernetes_config_map_v1_data" "aws_auth":
│ 554: resource "kubernetes_config_map_v1_data" "aws_auth" {

I am passing the aliased provider to the cluster creation module as below:

module "atlantis_eks_cluster" {
  source = "terraform-aws-modules/eks/aws"
  providers = {
    kubernetes = kubernetes.atlantiseks
  }
  version         = "19.16.0"
  cluster_name    = local.atlantis_cluster_name
  cluster_version = var.cluster_version
  subnet_ids      = ["***********", "*******************"]
  vpc_id          = local.vpc_id

  cluster_addons = {
    vpc-cni = {
      most_recent              = true
      service_account_role_arn = module.atlantis_vpc_cni_irsa_role.iam_role_arn
    }
    aws-ebs-csi-driver = {
      most_recent              = true
      service_account_role_arn = module.atlantis_ebs_csi_irsa_role.iam_role_arn
    }

  }

  manage_aws_auth_configmap = true

  aws_auth_node_iam_role_arns_non_windows = [module.atlantis_eks_managed_node_group.iam_role_arn]
  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::*************:role/************************"
      username = "admin:{{SessionName}}"
      groups   = ["system:masters"]
    }
  ]
}

resource "aws_launch_template" "atlantis_launch_template" {
  name                   = "${var.env}-atlantis-eks-cluster-${var.aws_region_alias}"
  provider               = "kubernetes.atlantiseks"
  vpc_security_group_ids = [module.atlantis_eks_cluster.cluster_primary_security_group_id]

  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      volume_size = 21
      volume_type = "gp3"
    }
  }

  image_id      = "******************"
  instance_type = "c4.2xlarge"

  user_data = base64encode(
  )

  tag_specifications {
    resource_type = "instance"
  }
}

module "atlantis_eks_managed_node_group" {
  source = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  providers = {
    kubernetes = kubernetes.atlantiseks
  }
  version                    = "19.16.0"
  name                       = "${var.env}-atlantis-eks-cluster-${var.aws_region_alias}-node-group"
  cluster_name               = module.atlantis_eks_cluster.cluster_name
  use_name_prefix            = false
  iam_role_name              = "${var.env}-atlantis-eks-cluster-${var.aws_region_alias}-node-group-role"
  iam_role_use_name_prefix   = false
  enable_bootstrap_user_data = true
  subnet_ids                 = ["*********", "***************"]

  // The following variables are necessary if you decide to use the module outside of the parent EKS module context.
  // Without it, the security groups of the nodes are empty and thus won't join the cluster.
  cluster_primary_security_group_id = module.atlantis_eks_cluster.cluster_primary_security_group_id
  vpc_security_group_ids            = [module.atlantis_eks_cluster.cluster_primary_security_group_id]

  min_size     = 1
  max_size     = 1
  desired_size = 1

  #disabling the launch template created by the module
  create_launch_template = false

  #enabling the above created custom launch template atlantis_launch_template
  use_custom_launch_template = true

  launch_template_id      = aws_launch_template.atlantis_launch_template.id
  launch_template_name    = aws_launch_template.atlantis_launch_template.name
  launch_template_version = aws_launch_template.atlantis_launch_template.latest_version

  labels = {
    app = "atlantis"
  }

}

module "atlantis_vpc_cni_irsa_role" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  providers = {
    kubernetes = kubernetes.atlantiseks
  }
  role_name = join("-", [var.env, "atlantis-eks-cluster", var.aws_region_alias, "vpc-cni", "irsa-role"])

  attach_vpc_cni_policy = true
  vpc_cni_enable_ipv4   = true

  oidc_providers = {
    main = {
      provider_arn               = module.atlantis_eks_cluster.oidc_provider_arn
      namespace_service_accounts = ["kube-system:aws-node"]
    }
  }

}

module "atlantis_ebs_csi_irsa_role" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  providers = {
    kubernetes = kubernetes.atlantiseks
  }
  role_name             = join("-", [var.env, "atlantis-eks-cluster", var.aws_region_alias, "ebs-csi", "irsa-role"])
  attach_ebs_csi_policy = true

  oidc_providers = {
    ex = {
      provider_arn               = module.atlantis_eks_cluster.oidc_provider_arn
      namespace_service_accounts = ["kube-system:ebs-csi-controller-sa"]
    }
  }

}

##service account IAM role for aws load balancer controller####
module "atlantis_aws_load_balancer_controller_irsa_role" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  providers = {
    kubernetes = kubernetes.atlantiseks
  }
  role_name                              = join("-", [var.env, "atlantis-eks-cluster", var.aws_region_alias, "load-balancer-controller", "irsa-role"])
  attach_load_balancer_controller_policy = true

  oidc_providers = {
    ex = {
      provider_arn               = module.atlantis_eks_cluster.oidc_provider_arn
      namespace_service_accounts = ["kube-system:aws-load-balancer-controller"]
    }
  }

}

@chipshadd

chipshadd commented Dec 29, 2023

For some reason, the aws-auth configmap does not exist in your setup. The manage_aws_auth_configmap variable merely manages the data within the configmap and assumes it already exists (or waits for it to get created).

You can have the module generate the aws-auth configmap by passing in create_aws_auth_configmap = true alongside manage_aws_auth_configmap = true.
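A minimal sketch of that combination, using the aliased provider from the reproduction above (only the aws-auth related arguments are shown; everything else stays as in the original module block):

module "atlantis_eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.16.0"

  providers = {
    kubernetes = kubernetes.atlantiseks
  }

  # create_aws_auth_configmap creates the aws-auth ConfigMap itself,
  # while manage_aws_auth_configmap only manages the data inside it
  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true

  # ... remaining cluster configuration unchanged ...
}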

@shettypriy
Author

shettypriy commented Dec 29, 2023

Thank you for the suggestion

I added create_aws_auth_configmap = true and applied the changes, but it gave this error:

Error: Post "https://*****************/api/v1/namespaces/kube-system/configmaps": dial tcp 10.75.120.249:443: i/o timeout

For the above error, I followed the fix mentioned here #2369 (comment).

I then commented out create_aws_auth_configmap = true, since I was getting the below error:

Error: configmaps "aws-auth" already exists

Planned and applied again after commenting out create_aws_auth_configmap = true and was able to create the aws-auth configmap successfully.

@chipshadd

I've run into many issues with having the module manage the aws-auth configmap. On initial creation everything is peachy, but updates to it, or even updates to other parts of the cluster, result in intermittent issues. I think this is due to the kubernetes provider pulling data from the module's outputs. From what I understand, providers cannot depend on other resources, so at execution time Terraform is unable to properly populate the kubernetes provider's configuration with the cluster data from the module.

I split the aws-auth configmap out into a separate Terraform configuration and manage it manually via a separate terraform apply.
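A rough sketch of that split, assuming the cluster already exists; the data sources, cluster name, and role ARN below are illustrative, not taken from this issue:

# Separate root module, applied after the cluster exists
data "aws_eks_cluster" "this" {
  name = "atlantis" # hypothetical cluster name
}

data "aws_eks_cluster_auth" "this" {
  name = data.aws_eks_cluster.this.name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}

# Patch the data of the existing aws-auth ConfigMap
resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::111111111111:role/node-role" # placeholder
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }

  force = true
}

Because the provider here reads the endpoint and CA from data sources rather than from a module created in the same apply, the provider's configuration is known before any Kubernetes resources are touched.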

@sz9751210

I use the following settings and it runs successfully:

provider "kubernetes" {
  host                   = data.terraform_remote_state.eks.outputs.cluster_endpoint
  cluster_ca_certificate = base64decode(data.terraform_remote_state.eks.outputs.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
    command     = "aws"
  }
}
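Since this thread uses an aliased provider, a hedged variant of the same approach for the reproduction above might look like this (the alias and cluster-name local are taken from the original configuration):

provider "kubernetes" {
  alias                  = "atlantiseks"
  host                   = module.atlantis_eks_cluster.cluster_endpoint
  cluster_ca_certificate = base64decode(module.atlantis_eks_cluster.cluster_certificate_authority_data)

  # exec fetches a fresh token on every plan/apply, instead of reusing a
  # token that was read once via data.aws_eks_cluster_auth and may expire
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", local.atlantis_cluster_name]
  }
}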

@bryantbiggs
Member

I don't think this is a module issue - however, we are close to shipping v20.0 (#2858), which replaces the use of the aws-auth configmap with cluster access entries and makes this entire experience less problematic.
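For reference, a rough sketch of the v20 access-entries style configuration (attribute names as documented for the v20 module; the principal ARN and policy below are placeholders, not part of this issue):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  # ... cluster configuration ...

  # grants the identity running Terraform admin access without touching aws-auth
  enable_cluster_creator_admin_permissions = true

  access_entries = {
    admin = {
      principal_arn = "arn:aws:iam::111111111111:role/admin-role" # placeholder

      policy_associations = {
        admin = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }
}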


I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 26, 2024