How do I update aws-auth without using kubectl patch? #1802
Comments
You could use the code from 17.24 here, in combination with the http wait.
Doesn't the cluster already have an aws-auth config map?
No, I think the aws-auth gets created when the node group is deployed. I'm using it without issues, but I see it's not totally foolproof: #1702
Indeed, so in this case V18 of the module will create a cluster and a node group, and only then will it return.
Not much progress in the meantime. HashiCorp support suggested putting kubectl into the TF Cloud runner, but that isn't helping. I came up with this, but I still can't access the cluster that's created by V18 of the module. The error is below:
Which errors with:
I just used the kubectl provider.
Which one? There are 9 out there.
@soostdijck https://registry.terraform.io/providers/gavinbunney/kubectl/latest
It sets up just like the kubernetes provider. I haven't looked at it in a while, but I think it uses the kubectl Go library, so it doesn't need to shell out.
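For reference, a minimal sketch of that provider setup (assuming the v18 module's cluster_endpoint and cluster_certificate_authority_data outputs plus an aws_eks_cluster_auth data source; the complete, working configuration appears in a later comment):

provider "kubectl" {
  # Same connection details the kubernetes provider uses; no kubeconfig or kubectl binary required
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
  load_config_file       = false
}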
I have it something like this (mostly copy-pasted from 17.24):

locals {
  map_roles = [
    {
      "groups" : ["system:masters"],
      "userarn" : "arn:aws:iam::66666666666:user/user1",
      "username" : "user1"
    },
    {
      "groups" : ["system:masters"],
      "userarn" : "arn:aws:iam::66666666666:user/user2",
      "username" : "user2"
    }
  ]
}

data "http" "wait_for_cluster" {
  ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  timeout        = 300
  url            = format("%s/healthz", module.eks.cluster_endpoint)
}

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"

    labels = {
      "app.kubernetes.io/managed-by" = "Terraform"
      "terraform.io/module"          = "terraform-aws-modules.eks.aws"
    }
  }

  data = {
    mapRoles = yamlencode(concat([
      {
        rolearn  = replace(var.node_iam_role_arn, replace("/", "/^//", ""), "")
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }],
      local.map_roles))
  }

  depends_on = [data.http.wait_for_cluster]
}
There are different ways to generate the config map, but how do you apply it if the module has already created it? It returns an error because it exists.
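One alternative, sketched here as an aside and not something the participants used: recent versions of the hashicorp/kubernetes provider (assumed 2.10 or later) offer a kubernetes_config_map_v1_data resource that uses server-side apply to manage only the data keys of an existing ConfigMap, which avoids the "already exists" error. The next comment solves it with kubectl_manifest instead. A minimal sketch, reusing the map_roles local from the snippet above:

resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # mapRoles is replaced as a whole, so any node group role mappings
    # (system:node:{{EC2PrivateDNSName}}) must be included here as well
    mapRoles = yamlencode(local.map_roles)
  }

  # Take ownership of these fields even though EKS created the config map
  force = true
}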
I solved it for my situation yesterday. Here's the code that I ended up using; hope it helps someone:

# Every TF module can have its own requirements
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.13.1"
    }
    http = {
      source  = "terraform-aws-modules/http"
      version = "2.4.1"
    }
  }
}

data "aws_eks_cluster" "this" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_id
}

data "http" "wait_for_cluster" {
  url            = format("%s/healthz", module.eks.cluster_endpoint)
  ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  timeout        = 1200

  depends_on = [
    module.eks
  ]
}

provider "kubectl" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
  load_config_file       = false
}

locals {
  aws_user_role      = "arn:aws:iam::1234:role/AWSReservedSSO_AWSAdministratorAccess_ce861bcf52b0eabc"
  aws_terraform_role = "arn:aws:iam::1234:role/terraform-role-dev"

  aws_auth_configmap_yaml = <<-EOT
  ${chomp(module.eks.aws_auth_configmap_yaml)}
      - rolearn: ${local.aws_user_role}
        username: "LZadministratorsRole"
        groups:
          - system:masters
      - rolearn: ${local.aws_terraform_role}
        username: "LZTerraformRole"
        groups:
          - system:masters
  EOT

  # - rolearn: ${module.eks_managed_node_group.iam_role_arn}
  #   username: system:node:{{EC2PrivateDNSName}}
  #   groups:
  #     - system:bootstrappers
  #     - system:nodes
  # - rolearn: ${module.self_managed_node_group.iam_role_arn}
  #   username: system:node:{{EC2PrivateDNSName}}
  #   groups:
  #     - system:bootstrappers
  #     - system:nodes
  # - rolearn: ${module.fargate_profile.fargate_profile_arn}
  #   username: system:node:{{SessionName}}
  #   groups:
  #     - system:bootstrappers
  #     - system:nodes
  #     - system:node-proxier
}

resource "kubectl_manifest" "aws_auth" {
  yaml_body = <<YAML
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/managed-by: Terraform
  name: aws-auth
  namespace: kube-system
${local.aws_auth_configmap_yaml}
YAML

  depends_on = [data.http.wait_for_cluster]
}
To be clear, the module does not create the config map. If you use EKS managed node groups or Fargate profiles, then AWS actually creates the config map behind the scenes when the node groups/profiles are created.
@soostdijck how did this solution work for you? The HTTP data provider does not support CA certificate or timeout parameters.
Works great; simply use this http provider (terraform-aws-modules/http, as pinned in the required_providers block above). Keep in mind that you can specify providers per module (you can have multiple required_providers blocks, one per module).
FYI: #1901 (comment)
I wrote a small module called eks-auth to bridge the gap. Here is a code snippet from the complete example:

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # insert the 15 required variables here
}

module "eks_auth" {
  source = "aidanmelen/eks-auth/aws"
  eks    = module.eks

  map_roles = [
    {
      rolearn  = "arn:aws:iam::66666666666:role/role1"
      username = "role1"
      groups   = ["system:masters"]
    },
  ]

  map_users = [
    {
      userarn  = "arn:aws:iam::66666666666:user/user1"
      username = "user1"
      groups   = ["system:masters"]
    },
    {
      userarn  = "arn:aws:iam::66666666666:user/user2"
      username = "user2"
      groups   = ["system:masters"]
    },
  ]

  map_accounts = [
    "777777777777",
    "888888888888",
  ]
}
@soostdijck Thank you for your help.
Terraform providers are written in Go. The kubectl provider uses Go libraries to make the same API calls that the kubectl CLI would make. This is the terraform-provider-kubectl you want; the rest are forked from the original.
Do you have the complete code? You are using variables; do you also have the module.eks block?
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
I'm looking for a way to use the kubernetes provider to update aws-auth. I need to add some roles to allow users to access the cluster via the console, plus a few other things. However, the TF kubernetes provider doesn't support any way that I know of to update aws-auth after creation. I see examples using kubectl patch, but I'm deploying using Terraform Cloud, which is giving me issues. TF Cloud means that the cluster is created via a role that is assumed by the TF Cloud workspace. These TF Cloud runners are very limited: they do not contain anything besides Terraform; there is no kubectl, awscli, or shell present, and I can't add them.
How can I patch aws-auth using the kubernetes provider?
Versions
Reproduction
Steps to reproduce the behavior:
Use a TF Cloud workspace to create the EKS cluster, then try to update aws-auth after the cluster is created.
Code Snippet to Reproduce
Actual behavior