
Would you mind clarifying how to add additional Roles/Users to the AWS AUTH during EKS provisioning #1901

Closed
vovkanaz opened this issue Feb 24, 2022 · 21 comments · Fixed by #1999

Comments

@vovkanaz

Is your request related to a problem? Please describe.

We could not add role ARNs to the aws_auth configmap during provisioning, so there is no way to manage the EKS cluster afterwards; every request fails with Unauthorized.

Describe the solution you'd like.

Add the ability to manage the aws_auth configmap (roles/users), as in previous versions, or provide some guidance on how to achieve this with the current module version (18.7).

Describe alternatives you've considered.

We've successfully worked around this with a local-exec provisioner running kubectl patch, but that approach only works from a local machine, not from, for example, a gitlab-runner container (which has no kubectl installed).

Additional context

Our team walked through the changelog carefully, but still doesn't understand how to manage additional IAM users/roles in aws_auth.

Hopefully you can help us figure out what is going on and resolve this issue. Thanks in advance.

@stv-io

stv-io commented Feb 25, 2022

If it's any help, here is the approach we're using with 18.7.2 of the module and GitLab runners:

  • provision the cluster and generate the kubeconfig using a local and yamlencode, exposing it as a sensitive output (a sketch of that output follows this list)
  • after the apply, capture the kubeconfig - terraform output -raw kubeconfig - and export it as a job artefact
  • in a subsequent job (same pipeline), manage and overwrite the aws-auth configmap using a kubectl container, with the kubeconfig exported from Terraform earlier
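
A minimal sketch of the sensitive output described in the first bullet, assuming a local.kubeconfig built with yamlencode from the module outputs (one way to construct that local appears in @gfrid's comment further down); the names here are illustrative:

output "kubeconfig" {
  description = "Kubeconfig consumed by later CI jobs via terraform output -raw kubeconfig"
  value       = local.kubeconfig
  sensitive   = true
}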

@leftyb

leftyb commented Feb 25, 2022

Hi,
I am having the same issue, and will end up using a similar approach to @stv-io's.
I can see that in the past there was a variable that could be set, map_roles, which seems to have been removed.

For now I am creating a template to update the configmap using the outputs of the EKS module.
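
For illustration only, a hypothetical version of that templating step; the template path, variable names, and node group key are assumptions rather than @leftyb's actual code:

locals {
  # render an aws-auth patch from EKS module outputs; the .tpl file is assumed to exist
  aws_auth_patched_yaml = templatefile("${path.module}/templates/aws-auth.yaml.tpl", {
    node_role_arn  = module.eks.eks_managed_node_groups["default"].iam_role_arn
    admin_role_arn = "arn:aws:iam::111111111111:role/Admin"
  })
}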

@dejwsz

dejwsz commented Feb 25, 2022

Look at my comment there: #1744 (comment), maybe it will help a little ;)

@bryantbiggs
Member

I will kindly point you to this issue, which would provide better native Terraform support for your request: hashicorp/terraform-provider-kubernetes#723

Or you can look at the ("hacky") alternative that has been provided in the examples directory:

locals {
  # we have to combine the configmap created by the eks module with the externally created node group/profile sub-modules
  aws_auth_configmap_yaml = <<-EOT
  ${chomp(module.eks.aws_auth_configmap_yaml)}
      - rolearn: ${module.eks_managed_node_group.iam_role_arn}
        username: system:node:{{EC2PrivateDNSName}}
        groups:
          - system:bootstrappers
          - system:nodes
      - rolearn: ${module.self_managed_node_group.iam_role_arn}
        username: system:node:{{EC2PrivateDNSName}}
        groups:
          - system:bootstrappers
          - system:nodes
      - rolearn: ${module.fargate_profile.fargate_profile_pod_execution_role_arn}
        username: system:node:{{SessionName}}
        groups:
          - system:bootstrappers
          - system:nodes
          - system:node-proxier
  EOT
}

resource "null_resource" "patch" {
  triggers = {
    # local.kubeconfig is defined elsewhere in the example (a yamlencode'd kubeconfig)
    kubeconfig = base64encode(local.kubeconfig)
    cmd_patch  = "kubectl patch configmap/aws-auth --patch \"${local.aws_auth_configmap_yaml}\" -n kube-system --kubeconfig <(echo $KUBECONFIG | base64 --decode)"
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = self.triggers.kubeconfig
    }
    command = self.triggers.cmd_patch
  }
}

@aidanmelen

aidanmelen commented Mar 1, 2022

I wrote a small module called eks-auth to bridge the gap. Here is a code snippet from the complete example:

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # insert the 15 required variables here
}

module "eks_auth" {
  source = "aidanmelen/eks-auth/aws"
  eks      = module.eks

  map_roles = [
    {
      rolearn  = "arn:aws:iam::66666666666:role/role1"
      username = "role1"
      groups   = ["system:masters"]
    },
  ]

  map_users = [
    {
      userarn  = "arn:aws:iam::66666666666:user/user1"
      username = "user1"
      groups   = ["system:masters"]
    },
    {
      userarn  = "arn:aws:iam::66666666666:user/user2"
      username = "user2"
      groups   = ["system:masters"]
    },
  ]

  map_accounts = [
    "777777777777",
    "888888888888",
  ]
}

@bryantbiggs
Member

Ref: aws/containers-roadmap#185

@bryantbiggs
Member

for anyone following here, check out aws/containers-roadmap#185 - this will solve a number of issues and hopefully get quickly propagated into the Terraform AWS provider (bonus points - someone head over to the provider repo and file a ticket to have day 1 support for it)

@dmi-clopez

What we did to manage aws-auth is the following:

We created a local that merges the base aws-auth document with our custom changes (credit to @dejwsz for the snippet):

locals {
  base_auth_configmap = yamldecode(module.eks_cluster.aws_auth_configmap_yaml)

  updated_auth_configmap_data = {
    data = {
      mapRoles = yamlencode(
        distinct(concat(
          yamldecode(local.base_auth_configmap.data.mapRoles),
          var.map_roles,
        ))
      )
      mapUsers = yamlencode(var.map_users)
    }
  }
}

Then, we created a Terraform resource to manage the configmap:

resource "kubernetes_config_map" "aws_auth" {

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapAccounts = "[]"
    mapRoles    = local.updated_auth_configmap_data.data.mapRoles
    mapUsers    = local.updated_auth_configmap_data.data.mapUsers
  }
}

We tried to use the local-exec solution but it broke our CI/CD workflow (mostly because the hashicorp/terraform container doesn't have bash installed, and we didn't really want to add a step just to install it), so we came up with this solution.

@aidanmelen

@dmi-clopez are you importing the merged kubernetes_config_map resource? Won't terraform fail to apply since the aws-auth configMap will already exist in the cluster?

@dmi-clopez

@aidanmelen That's correct for existing clusters. The aws-auth configmap does not exist by default for new clusters, so we just create it there.

@bryantbiggs
Copy link
Member

bryantbiggs commented Mar 7, 2022

@aidanmelen That's correct for existing clusters. The aws-auth configmap does not exist by default for new clusters, so we just create it there.

FYI - it will exist by default on new clusters if using EKS managed node groups or Fargate profiles - those features automatically create the config map and inject their roles into it
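
For existing clusters (or clusters where EKS has already created the configmap, as noted above), the resource shown earlier would need the existing object imported into state before Terraform can manage it. A sketch, assuming the resource address from @dmi-clopez's example; the import block requires Terraform 1.5+, and older versions can use the terraform import CLI command with the same namespace/name ID:

import {
  to = kubernetes_config_map.aws_auth
  id = "kube-system/aws-auth"
}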

@visla-xugeng

visla-xugeng commented Mar 7, 2022

I keep receiving this error when I try to apply the resource "kubernetes_config_map" "aws_auth" mentioned by @dmi-clopez. Besides this one, when I tried to create a new namespace for the EKS cluster (resource "kubernetes_namespace"), I got a similar timeout error.

Error: Post "https://87E3689A67ECECC0A11111111111.sk1.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/configmaps": dial tcp 10.100.94.191:443: i/o timeout

I cannot access the EKS cluster through kubectl, so I cannot troubleshoot further. Any thoughts? Thanks.

@bryantbiggs
Member

Only the IAM role/user that provisioned the cluster has access to the cluster by default.

@visla-xugeng

visla-xugeng commented Mar 8, 2022

@bryantbiggs @dmi-clopez
I got a weird issue here: I cannot update the aws-auth configmap on my side through kubernetes_config_map.
1: The EKS cluster creates the aws-auth configmap when I create a cluster with managed node groups. All of the node group roles are injected into my aws-auth configmap.

kubectl -n kube-system get cm aws-auth -oyaml
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::111111111111:role/us-east-1-test-default_mng
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::111111111111:role/us-east-1-test-system_mng
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::111111111111:role/us-east-1-test-monitoring_mng
      username: system:node:{{EC2PrivateDNSName}}
kind: ConfigMap
metadata:
  creationTimestamp: "2022-03-08T18:43:09Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "1150"
  uid: dddddddddddddddddd

2: I tried @dmi-clopez's solution to map other roles and users into aws-auth. However, only users are injected into the aws-auth configmap. kubernetes_config_map cannot inject any other new roles into aws-auth, and at the same time the original node group roles get removed completely.

Below is what I got after applying the resource "kubernetes_config_map" "aws_auth":

kubectl -n kube-system get cm aws-auth -oyaml
apiVersion: v1
data:
  mapAccounts: '[]'
  mapRoles: |
    - "groups":
      - "system:bootstrappers"
      - "system:nodes"
      "rolearn": null
      "username": "system:node:{{EC2PrivateDNSName}}"
  mapUsers: |
    - "groups":
      - "system:masters"
      "userarn": "arn:aws:iam::1111111:user/user01"
      "username": "user01"
    - "groups":
      - "us-east-1-ops-developers"
      "userarn": "arn:aws:iam::1111111:user/test01"
      "username": "test01"
kind: ConfigMap
metadata:
  creationTimestamp: "2022-03-08T05:07:04Z"
  labels:
    app.kubernetes.io/managed-by: Terraform
    terraform.io/module: terraform-aws-modules.eks.aws
  name: aws-auth
  namespace: kube-system
  resourceVersion: "7332"
  uid: xxxxxx

@sufyanadam

Previous module versions provided support for managing the aws-auth configmap via the Kubernetes Terraform provider using the now deprecated aws-iam-authenticator; these are no longer included in the module.

This is too bad; it was so nice to be able to provision a cluster and grant access to all the team members who should have it in one shot with terraform apply, just by defining an array of map_users.

Please consider putting that back, as it was a significantly better development experience than what we have to do now with v18+.

@bryantbiggs
Member

Previous module versions provided support for managing the aws-auth configmap via the Kubernetes Terraform provider using the now deprecated aws-iam-authenticator; these are no longer included in the module.

This is too bad; it was so nice to be able to provision a cluster and grant access to all the team members who should have it in one shot with terraform apply, just by defining an array of map_users.

Please consider putting that back, as it was a significantly better development experience than what we have to do now with v18+.

aws/containers-roadmap#185

@gfrid

gfrid commented Mar 24, 2022

This is very simple: you only add the new roles/users that you need, and the patch command adds them to the existing configmap alongside the EC2 node roles. Be sure to download the updated kubeconfig.

locals {

  kubeconfig = yamlencode({
    apiVersion      = "v1"
    kind            = "Config"
    current-context = "terraform"
    clusters = [{
      name = module.eks.cluster_id
      cluster = {
        certificate-authority-data = module.eks.cluster_certificate_authority_data
        server                     = module.eks.cluster_endpoint
      }
    }]
    contexts = [{
      name = "terraform"
      context = {
        cluster = module.eks.cluster_id
        user    = "terraform"
      }
    }]
    users = [{
      name = "terraform"
      user = {
        token = data.aws_eks_cluster_auth.this.token
      }
    }]
  })

  aws_auth_configmap_yaml = <<-EOT
  ${chomp(module.eks.aws_auth_configmap_yaml)}
      - rolearn: arn:aws:iam::${var.account_id}:role/your-role
        username: Admin
        groups:
          - system:masters
      - rolearn: arn:aws:iam::${var.account_id}:role/Admin
        username: Admin
        groups:
          - system:masters
  EOT
}

resource "null_resource" "patched_kube_file" {

  provisioner "local-exec" {
    command = "aws eks update-kubeconfig --region us-east-1 --name ${var.cluster_name} --profile xxxxxx"
  }
}
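
The kubeconfig local above references data.aws_eks_cluster_auth.this, which is not shown in the comment; a minimal sketch of that data source, assuming the EKS module instance is named eks:

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_id
}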

@pen-pal
Contributor

pen-pal commented Mar 29, 2022

I keep getting this error while trying to carry out the upgrade process.
The module I am using also creates ingress resources once the EKS cluster is up and running, along with external-secrets manager as well in my case.

│ Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│ Error: Failed to get RESTMapper client
│ cannot create discovery client: no client config
│ Error: Failed to get RESTMapper client
│ cannot create discovery client: no client config
│ Error: Failed to get RESTMapper client
│ cannot create discovery client: no client config
│ Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
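
That error typically means the kubernetes (and helm) providers have no cluster configuration when the plan runs. A minimal sketch of one common way to wire the provider to the module outputs, offered as an assumption about the setup rather than a confirmed fix; it requires the AWS CLI to be available where Terraform runs:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # fetch a short-lived token for the cluster created by the module
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}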

@aidanmelen

aidanmelen commented Mar 30, 2022

I am trying to generate some awareness about a PR for patch support in the terraform-kubernetes-provider. Please give it a 👍

With this change, we can apply/patch the aws-auth configmap using the kubernetes_manifest resource, even if the aws-auth configmap already exists.

Here is an example of the Terraform that I used to test:

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "docker-desktop"
}

resource "kubernetes_manifest" "aws-auth-configmap" {
  manifest = yamldecode(
    <<-EOT
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: arn:aws:iam::111111111111:role/DemoEKS-NodeInstanceRole
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes
        - rolearn: arn:aws:iam::111111111111:role/TeamRole
          username: TeamRole
          groups:
          - system:masters
      mapUsers: |
        - userarn: arn:aws:iam::111111111111:user/sukumar-test-test
          username: sukumar
          groups:
            - system:masters
    EOT
  )

  field_manager {
    force_conflicts = true 
  }
}

@antonbabenko
Member

This issue has been resolved in version 18.20.0 🎉
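
For reference, a minimal sketch of the native inputs added in that release (variable names as of v18.20; check the module documentation for the authoritative list):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.20"

  # ... other required inputs ...

  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::111111111111:role/role1"
      username = "role1"
      groups   = ["system:masters"]
    },
  ]

  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::111111111111:user/user1"
      username = "user1"
      groups   = ["system:masters"]
    },
  ]

  aws_auth_accounts = ["777777777777"]
}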
