Would you mind clarifying how to add additional Roles/Users to the AWS AUTH during EKS provisioning #1901
Comments
If it's any help, here is one approach we're using with version 18.7.2 of the module and GitLab runners:
Hi, for now I am creating a template to update the configmap using the outputs of the EKS module.
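A minimal sketch of that idea, assuming a template file at templates/aws-auth.yaml.tpl and a module instance named eks; the file path, variable names, and the extra role entry are illustrative, not from the original comment:

locals {
  # The module output is the full aws-auth manifest; its data.mapRoles
  # value is itself a YAML string.
  base_aws_auth = yamldecode(module.eks.aws_auth_configmap_yaml)

  # Render the final manifest from a template, combining the generated
  # entries with our own additions.
  aws_auth_manifest = templatefile("${path.module}/templates/aws-auth.yaml.tpl", {
    base_map_roles  = local.base_aws_auth.data.mapRoles
    extra_map_roles = yamlencode([
      {
        rolearn  = "arn:aws:iam::111111111111:role/Admin"
        username = "admin"
        groups   = ["system:masters"]
      },
    ])
  })
}

The rendered manifest can then be applied out of band, for example with kubectl in a later pipeline step.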
Look at my comment there: #1744 (comment), maybe it will help a little ;)
I will kindly point you to this issue, which would provide better native Terraform support for your request: hashicorp/terraform-provider-kubernetes#723. Or you can look at a ("hacky") alternative that has been provided in the examples directory: terraform-aws-eks/examples/complete/main.tf, lines 301 to 336 at 9a99689.
I wrote a small module called eks-auth to bridge the gap. Here is a code snippet from the complete example:

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # insert the 15 required variables here
}

module "eks_auth" {
  source = "aidanmelen/eks-auth/aws"
  eks    = module.eks

  map_roles = [
    {
      rolearn  = "arn:aws:iam::66666666666:role/role1"
      username = "role1"
      groups   = ["system:masters"]
    },
  ]

  map_users = [
    {
      userarn  = "arn:aws:iam::66666666666:user/user1"
      username = "user1"
      groups   = ["system:masters"]
    },
    {
      userarn  = "arn:aws:iam::66666666666:user/user2"
      username = "user2"
      groups   = ["system:masters"]
    },
  ]

  map_accounts = [
    "777777777777",
    "888888888888",
  ]
}
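Passing the whole module.eks object as the eks input presumably lets the child module read the module-generated aws-auth data and merge the extra roles/users/accounts into it, rather than overwriting the entries the node groups need.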
For anyone following here, check out aws/containers-roadmap#185 - this will solve a number of issues and hopefully get quickly propagated into the Terraform AWS provider (bonus points - someone head over to the provider repo and file a ticket to have day 1 support for it).
What we did to manage the aws-auth configmap: we created a local that merges the base configmap generated by the module with our additional roles and users:

locals {
  base_auth_configmap = yamldecode(module.eks_cluster.aws_auth_configmap_yaml)

  updated_auth_configmap_data = {
    data = {
      mapRoles = yamlencode(
        distinct(concat(
          yamldecode(local.base_auth_configmap.data.mapRoles), var.map_roles,
        ))
      )
      mapUsers = yamlencode(var.map_users)
    }
  }
}
Then we created a Terraform resource to manage the aws-auth configmap itself:

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapAccounts = "[]"
    mapRoles    = local.updated_auth_configmap_data.data.mapRoles
    mapUsers    = local.updated_auth_configmap_data.data.mapUsers
  }
}

We tried to use the local-exec solution but it broke our CI/CD workflow (mostly because the …)
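For managing the configmap from CI without a local kubeconfig or kubectl, one common pattern is exec-based provider authentication. A sketch, assuming the AWS CLI is available on the runner and a module instance named eks_cluster as above; output names vary slightly between module versions:

provider "kubernetes" {
  host                   = module.eks_cluster.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_cluster.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # cluster_id holds the cluster name in v18 of the module
    args = ["eks", "get-token", "--cluster-name", module.eks_cluster.cluster_id]
  }
}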
@dmi-clopez are you importing the merged aws-auth configmap?
@aidanmelen That's correct for existing clusters. The …
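For reference, the existing configmap can be imported into the kubernetes_config_map resource above with the terraform import CLI (the import ID format is namespace/name, i.e. kube-system/aws-auth), or declaratively on Terraform 1.5+; a sketch:

import {
  to = kubernetes_config_map.aws_auth
  id = "kube-system/aws-auth"
}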
FYI - it will exist by default on new clusters if using EKS managed node groups or Fargate profiles - those features automatically create the config map and inject their roles into it.
I keep receiving this error when I try to apply the resource "kubernetes_config_map" "aws_auth" mentioned by @dmi-clopez. Besides this one, when I tried to create a new namespace for the EKS cluster (resource "kubernetes_namespace"), I got a similar timeout error.
I cannot access the EKS cluster through kubectl, so I cannot troubleshoot further. Any thoughts? Thanks.
Only the IAM role/user who provisioned the cluster has access to the cluster initially.
@bryantbiggs @dmi-clopez
2: I tried @dmi-clopez's solution to map other roles and users into aws-auth. However, only users are injected into the aws-auth configmap. Below is what I got after applying the …
This is too bad, it was so nice to be able to provision a cluster and all the team members who should be able to access it in one shot with the old map_roles/map_users inputs. Please consider putting that back, as it was a significantly better development experience than what we have to do now with v18+.
This is very simple: you only add the new roles/users that you need, then the patch command adds them to the existing config map of EC2 roles. Be sure to download the updated kubeconfig. The approach uses a locals block with a kubeconfig built via yamlencode, the aws_auth_configmap_yaml output in a heredoc, and a null_resource "patched_kube_file" with a local-exec provisioner.
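A simplified sketch of that pattern, assuming kubectl and the AWS CLI are available where Terraform runs and a module instance named eks; it fetches the kubeconfig with the AWS CLI instead of rendering it in Terraform, and the resource/local names and the extra role are illustrative:

locals {
  # Full desired mapRoles: the module-generated entries plus our additions.
  # A patch replaces data.mapRoles wholesale, so both must be included.
  aws_auth_patch = yamlencode({
    data = {
      mapRoles = yamlencode(concat(
        yamldecode(yamldecode(module.eks.aws_auth_configmap_yaml).data.mapRoles),
        [
          {
            rolearn  = "arn:aws:iam::111111111111:role/Admin"
            username = "admin"
            groups   = ["system:masters"]
          },
        ]
      ))
    }
  })
}

# Requires the hashicorp/local provider.
resource "local_file" "aws_auth_patch" {
  content  = local.aws_auth_patch
  filename = "${path.module}/aws-auth-patch.yaml"
}

resource "null_resource" "patch_aws_auth" {
  triggers = {
    patch = local.aws_auth_patch
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws eks update-kubeconfig --name ${module.eks.cluster_id} --kubeconfig ${path.module}/kubeconfig
      kubectl --kubeconfig ${path.module}/kubeconfig -n kube-system \
        patch configmap aws-auth --patch "$(cat ${local_file.aws_auth_patch.filename})"
    EOT
  }
}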
I keep getting this error while trying to carry out the upgrade process.
I am trying to generate some awareness about a PR for patch support in the terraform-kubernetes-provider. Please give it a 👍. With this change, we can apply/patch the aws-auth configmap directly. Here is an example of the Terraform that I used to test:

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "docker-desktop"
}
resource "kubernetes_manifest" "aws-auth-configmap" {
manifest = yamldecode(
<<-EOT
apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
- rolearn: arn:aws:iam::111111111111:role/DemoEKS-NodeInstanceRole
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- rolearn: arn:aws:iam::111111111111:role/TeamRole
username: TeamRole
groups:
- system:masters
mapUsers: |
- userarn: arn:aws:iam::111111111111:user/sukumar-test-test
username: sukumar
groups:
- system:masters
EOT
)
field_manager {
force_conflicts = true
}
} |
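The field_manager block with force_conflicts = true tells server-side apply to take ownership of fields in aws-auth that EKS already manages, which is what lets this patch the existing object instead of failing with a field-ownership conflict.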
This issue has been resolved in version 18.20.0 🎉
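For anyone landing here later: that release reintroduced native aws-auth management on the module itself. A sketch of the new inputs (variable names as of that release; a configured kubernetes provider is still required):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.20"

  # ... cluster configuration ...

  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::66666666666:role/role1"
      username = "role1"
      groups   = ["system:masters"]
    },
  ]

  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::66666666666:user/user1"
      username = "user1"
      groups   = ["system:masters"]
    },
  ]
}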
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Is your request related to a problem? Please describe.
Could not add role ARNs to the aws_auth configmap during the installation process, hence there is no ability to manage the EKS cluster after provisioning:
Unauthorized
Describe the solution you'd like.
Add the ability to manage the aws_auth configmap and roles as it was in previous versions, or provide some information on how to resolve this with the current module version (18.7).
Describe alternatives you've considered.
We've successfully figured this out with the help of local_exec and kubectl patch, but this approach works well only from a local machine, not from, for example, a gitlab-runner container (because there is no kubectl etc. inside it).
Additional context
Our team walked through the changelog carefully, but still doesn't understand how to manage additional IAM users/roles for aws_auth.
Hopefully you will help us find out what is going on and resolve this issue; thanks in advance.