How do I update aws-auth without using kubectl patch? #1802

Closed

soostdijck opened this issue Jan 20, 2022 · 20 comments

@soostdijck

Description

I'm looking for a way to use the kubernetes provider to update aws-auth. I need to add some roles to allow users to access the cluster via the console, and a few other things. However, the Terraform kubernetes provider doesn't support any way that I know of to update aws-auth after creation. I see examples using kubectl patch, but I'm deploying with Terraform Cloud, which is giving me issues.

Terraform Cloud means that the cluster is created via a role that is assumed by the TF Cloud workspace. These TF Cloud runners are very limited: they contain nothing besides terraform itself; there is no kubectl, AWS CLI, or shell present, and I can't add them.

How can I patch aws-auth using the kubernetes provider?

Versions

  • Terraform: 1.1.3
  • Provider(s): kubernetes
  • Module:

Reproduction

Steps to reproduce the behavior:
Use a TF cloud workspace to create the EKS cluster, then try to update aws-auth after the cluster is created.

Code Snippet to Reproduce

locals {
  aws_auth_configmap_hcl = yamldecode(module.eks.aws_auth_configmap_yaml)
}

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
    labels = merge(
      {
        "app.kubernetes.io/managed-by" = "Terraform"
      }
    )
  }

  data = {
    mapRoles = jsonencode(
      concat(
        [
          {
            rolearn  = local.aws_user_role
            username = "LZAdministratorsRole"
            groups = [
              "system:masters",
            ]
          },
          {
            rolearn  = local.aws_terraform_role
            username = "LZTerraformRole"
            groups = [
              "system:masters",
            ]
          }
        ],
        yamldecode(local.aws_auth_configmap_hcl.data.mapRoles)
      )
    )
  }
  depends_on = [module.eks]
}

Actual behavior

Error: configmaps "aws-auth" already exists

  with module.main.kubernetes_config_map.aws_auth,
  on ../auth.tf line 9, in resource "kubernetes_config_map" "aws_auth":
     resource "kubernetes_config_map" "aws_auth" {
@martijnvdp
Contributor

martijnvdp commented Jan 20, 2022

You could use the code from 17.24 here in combination with an http wait.
The only provider I've found that can edit/patch existing resources with Terraform is this kubectl provider; deploy the config map as a kubectl_manifest.
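For reference, the http wait from 17.x looks roughly like this (a minimal sketch, assuming the terraform-aws-modules/http fork, which supports ca_certificate and timeout):

# Poll the cluster /healthz endpoint so later resources don't run
# against an API server that isn't reachable yet.
data "http" "wait_for_cluster" {
  url            = format("%s/healthz", module.eks.cluster_endpoint)
  ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  timeout        = 300
}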

@soostdijck
Author

Doesn't the cluster already have an aws-auth installed by the time it reports healthy and the http wait kicks in?

@martijnvdp
Contributor

martijnvdp commented Jan 21, 2022

No, I think the aws-auth gets created when the node group is deployed. I'm using it without issues, but I see it's not totally foolproof #1702

@soostdijck
Author

Indeed, so in this case v18 of the module will create a cluster and a node group; only then will it return eks.aws_auth_configmap_yaml, at which point you're too late to edit aws-auth.

@soostdijck
Author

soostdijck commented Jan 24, 2022

Not much progress in the meantime. HashiCorp support suggested putting kubectl into the TF Cloud runner, but that isn't helping. I came up with this, but I still can't access the cluster that's created by v18 of the module. The error is below:

data "http" "wait_for_cluster" {
  url            = format("%s/healthz", module.eks.cluster_endpoint)
  ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  timeout        = 1200

  depends_on = [
    module.eks
  ]
}

resource "null_resource" "kubectl_hack" {
  # change trigger to run every time
  triggers = {
    build_number = "${timestamp()}"
  }

  # get sts token for the infra role and make profile
  provisioner "local-exec" {
    command = <<EOF
    export $(printf "TMP_AWS_ACCESS_KEY_ID=%s TMP_AWS_SECRET_ACCESS_KEY=%s TMP_AWS_SESSION_TOKEN=%s" \
      $(aws sts assume-role \
      --role-arn ${var.lz_role_arn} \
      --role-session-name MySession \
      --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
      --output text)) &&
    aws configure set aws_access_key_id $TMP_AWS_ACCESS_KEY_ID --profile tfclown &&
    aws configure set aws_secret_access_key $TMP_AWS_SECRET_ACCESS_KEY --profile tfclown &&
    aws configure set aws_session_token $TMP_AWS_SESSION_TOKEN --profile tfclown
EOF
  }
  # download kubectl
  provisioner "local-exec" {
    command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
  }

  provisioner "local-exec" {
    command = "aws eks --region ${var.region} update-kubeconfig --name ${var.cluster_name} --profile tfclown"
  }

  provisioner "local-exec" {
    command = "cat /home/terraform/.kube/config && ./kubectl --v=4 --kubeconfig='/home/terraform/.kube/config' get no"  
  }
  depends_on = [data.http.wait_for_cluster]
}

Which errors with:

Executing: ["/bin/sh" "-c" "cat /home/terraform/.kube/config && ./kubectl --v=4 --kubeconfig='/home/terraform/.kube/config' get no"]
  apiVersion: v1
  clusters:
  - cluster:
      certificate-authority-data: REDACTED
      server: https://REDACT.gr7.eu-west-1.eks.amazonaws.com
    name: arn:aws:eks:eu-west-1:REDACT
  contexts:
  - context:
      cluster: arn:aws:eks:eu-west-1:REDACT
      user: arn:aws:eks:eu-west-1:REDACT
    name: arn:aws:eks:eu-west-1:REDACT
  current-context: arn:aws:eks:eu-west-1:REDACT
  kind: Config
  preferences: {}
  users:
  - name: arn:aws:eks:eu-west-1REDACT
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1alpha1
        args:
        - --region
        - eu-west-1
        - eks
        - get-token
        - --cluster-name
        - REDACT
        command: aws
        env:
        - name: AWS_PROFILE
          value: tfclown
  I0124 15:08:49.328576    1630 cached_discovery.go:121] skipped caching discovery info due to Unauthorized
  I0124 15:08:50.054150    1630 cached_discovery.go:121] skipped caching discovery info due to Unauthorized
  I0124 15:08:50.054172    1630 shortcut.go:89] Error loading discovery information: Unauthorized
  I0124 15:08:50.793215    1630 cached_discovery.go:121] skipped caching discovery info due to Unauthorized
  I0124 15:08:51.517880    1630 cached_discovery.go:121] skipped caching discovery info due to Unauthorized
  I0124 15:08:52.240278    1630 cached_discovery.go:121] skipped caching discovery info due to Unauthorized
  I0124 15:08:52.240547    1630 helpers.go:219] server response object: [{
    "metadata": {},
    "status": "Failure",
    "message": "Unauthorized",
    "reason": "Unauthorized",
    "code": 401
  }]
  error: You must be logged in to the server (Unauthorized)

@mzupan

mzupan commented Jan 24, 2022

I just used the kubectl provider

locals {
  aws_auth_roles = concat(
    [
      {
        rolearn  = aws_iam_role.worker.arn
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      },
    ],
    var.eks_roles
  )
}

resource "kubectl_manifest" "aws_auth" {
  yaml_body = <<YAML
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/managed-by: Terraform
  name: aws-auth
  namespace: kube-system
data:
  mapAccounts: |
    []
  mapRoles: |
    ${indent(4, yamlencode(local.aws_auth_roles))}
  mapUsers: |
    []
YAML

  depends_on = [
    module.eks
  ]
}

@soostdijck
Author

soostdijck commented Jan 24, 2022

I just used the kubectl provider

Which one? There are 9 out there.
Another issue is that I need to assume a role, because TF Cloud uses a build account which then assumes a role into the account where the cluster is created. Of course the AWS provider understands this fine, but I'm not sure about a kubectl provider.

@mzupan

mzupan commented Jan 24, 2022

@soostdijck https://registry.terraform.io/providers/gavinbunney/kubectl/latest

Sets up just like the kubernetes provider

provider "kubectl" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

I haven't looked at it in a while, but I think it's using the kubectl Go library, so it doesn't need to shell out.
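For completeness, the data sources referenced in that provider block would look roughly like this (a minimal sketch; the assume_role block is an assumption for the cross-account case mentioned earlier, reusing var.lz_role_arn and var.region from the snippets above):

# The AWS provider assumes the role into the cluster account; the EKS
# auth token below is then issued under that assumed role, so the
# kubectl provider itself needs no separate assume-role support.
provider "aws" {
  region = var.region

  assume_role {
    role_arn = var.lz_role_arn
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}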

@martijnvdp
Contributor

martijnvdp commented Jan 25, 2022

I have it set up something like this (mostly copy-pasted from 17.24):

locals {
  map_roles = [
    {
      groups   = ["system:masters"]
      userarn  = "arn:aws:iam::66666666666:user/user1"
      username = "user1"
    },
    {
      groups   = ["system:masters"]
      userarn  = "arn:aws:iam::66666666666:user/user2"
      username = "user2"
    },
  ]
}

data "http" "wait_for_cluster" {

  ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  timeout        = 300
  url            = format("%s/healthz", module.eks.cluster_endpoint)
}

resource "kubernetes_config_map" "aws_auth" {

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"

    labels = {
      "app.kubernetes.io/managed-by" = "Terraform"
      "terraform.io/module"          = "terraform-aws-modules.eks.aws"
    }
  }

  data = {
    mapRoles = yamlencode(concat([
      {
        rolearn  = replace(var.node_iam_role_arn, replace("/", "/^//", ""), "")
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }],
    local.map_roles))
  }

  depends_on = [data.http.wait_for_cluster]
}

@RuBiCK

RuBiCK commented Jan 26, 2022

There are different ways to generate the config map, but how do you apply it if the module has already created it? It returns an error because it exists.

@soostdijck
Author

I solved it for my situation yesterday. Here's the code that I ended up using; hope it helps someone:

# Every TF module can have its own requirements
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.13.1"
    }
    http = {
      source  = "terraform-aws-modules/http"
      version = "2.4.1"
    }
  }
}

data "aws_eks_cluster" "this" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_id
}

data "http" "wait_for_cluster" {
  url            = format("%s/healthz", module.eks.cluster_endpoint)
  ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  timeout        = 1200

  depends_on = [
    module.eks
  ]
}

provider "kubectl" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
  load_config_file       = false
}

locals {
  aws_user_role = "arn:aws:iam::1234:role/AWSReservedSSO_AWSAdministratorAccess_ce861bcf52b0eabc"
  aws_terraform_role = "arn:aws:iam::1234:role/terraform-role-dev"

  aws_auth_configmap_yaml = <<-EOT
  ${chomp(module.eks.aws_auth_configmap_yaml)}
      - rolearn: ${local.aws_user_role}
        username: "LZadministratorsRole"
        groups:
          - system:masters
      - rolearn: ${local.aws_terraform_role}
        username: "LZTerraformRole"
        groups:
          - system:masters
  EOT
      # - rolearn: ${module.eks_managed_node_group.iam_role_arn}
      #   username: system:node:{{EC2PrivateDNSName}}
      #   groups:
      #     - system:bootstrappers
      #     - system:nodes
      # - rolearn: ${module.self_managed_node_group.iam_role_arn}
      #   username: system:node:{{EC2PrivateDNSName}}
      #   groups:
      #     - system:bootstrappers
      #     - system:nodes
      # - rolearn: ${module.fargate_profile.fargate_profile_arn}
      #   username: system:node:{{SessionName}}
      #   groups:
      #     - system:bootstrappers
      #     - system:nodes
      #     - system:node-proxier
}

resource "kubectl_manifest" "aws_auth" {
  yaml_body = <<YAML
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/managed-by: Terraform
  name: aws-auth
  namespace: kube-system
${local.aws_auth_configmap_yaml}
YAML

  depends_on = [data.http.wait_for_cluster]
}

@bryantbiggs
Member

There are different ways to generate the config map but how do you apply it if the module has already created it? It returns an error because it exists.

To be clear, the module does not create the config map. If you use EKS managed node groups or Fargate profiles, then AWS actually creates the config map behind the scenes when the node groups/profiles are created.
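If you want to stay on the plain kubernetes provider, one option is to patch the existing object rather than create it, using the kubernetes_config_map_v1_data resource (a minimal sketch, assuming a hashicorp/kubernetes provider version that ships this resource and a local.map_roles list like the ones above):

# Patches only the data of the ConfigMap that AWS already created,
# instead of trying to create a new one; force overwrites fields that
# are owned by other field managers.
resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode(local.map_roles)
  }

  force = true
}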

@s4cpuser1

@soostdijck how did this solution work for you? The HTTP data provider does not support CA certificate or timeout parameters:
https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http

@soostdijck
Author

soostdijck commented Feb 10, 2022

@soostdijck how did this solution work for you? The HTTP data provider does not support CA certificate or timeout parameters https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http

Works great, simply use this http provider:
https://registry.terraform.io/providers/terraform-aws-modules/http/2.4.1

Keep in mind that you can specify providers per module (you can have multiple terraform { required_providers {} } blocks). If you specify the http provider outside your module, Terraform will simply assume that you want the hashicorp http provider, which doesn't support timeout.

@bryantbiggs
Member

fyi - #1901 (comment)

@aidanmelen

aidanmelen commented Mar 6, 2022

I wrote a small module called eks-auth to bridge the gap. Here is a code snippet from the complete example:

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # insert the 15 required variables here
}

module "eks_auth" {
  source = "aidanmelen/eks-auth/aws"
  eks    = module.eks

  map_roles = [
    {
      rolearn  = "arn:aws:iam::66666666666:role/role1"
      username = "role1"
      groups   = ["system:masters"]
    },
  ]

  map_users = [
    {
      userarn  = "arn:aws:iam::66666666666:user/user1"
      username = "user1"
      groups   = ["system:masters"]
    },
    {
      userarn  = "arn:aws:iam::66666666666:user/user2"
      username = "user2"
      groups   = ["system:masters"]
    },
  ]

  map_accounts = [
    "777777777777",
    "888888888888",
  ]
}

@lu911

lu911 commented Mar 10, 2022

@soostdijck Thank you for your help.

locals {
  aws_auth_configmap_data = yamlencode({
    data = {
      mapRoles    = yamlencode(concat(yamldecode(yamldecode(module.eks.aws_auth_configmap_yaml).data.mapRoles), local.map_roles))
      mapUsers    = yamlencode(local.map_users)
      mapAccounts = yamlencode(local.map_accounts)
    }
  })
}

resource "kubectl_manifest" "aws_auth" {
  yaml_body = <<YAML
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/managed-by: Terraform
  name: aws-auth
  namespace: kube-system
${local.aws_auth_configmap_data}
YAML

  depends_on = [data.http.wait_for_cluster]
}

@aidanmelen

aidanmelen commented Mar 22, 2022

I just used the kubectl provider

Which one? There are 9 out there. Another issue is that I need to assume a role, because TF Cloud uses a build account which then assumes a role into the account where the cluster is created. Of course the AWS provider understands this fine, but I'm not sure about a kubectl provider.

Terraform providers are written in Go. The kubectl provider uses Go libraries to make the same API calls as the kubectl command-line tool, so you should be able to use it for remote operations in Terraform Cloud or in CI/CD pipelines without requiring the host to have kubectl installed.

This is the terraform-provider-kubectl you want. The rest are forked from the original.

@farrukh90

I have it set up something like this (mostly copy-pasted from 17.24):
Do you have the complete code? You are using variables, and do you also have the module.eks block?

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Nov 12, 2022