aws-auth configmap changes after adding a node pool, removing existing roles #2873
Comments
This exact thing is happening here all the time; any day I execute a plan, I get tons of resources being recreated/updated.
FYI: It will be nearly impossible to help troubleshoot without a reproduction
Sounds like you are using an explicit …
Hey, Bryan. No, I'm not; all policies are being recreated, etc. This is my spec:
Do you see anything wrong?
I don't see anything that stands out, but I can't deploy what you have provided and I don't know what you are seeing in the plan diff, so it's hard to say 🤷🏽♂️
I'll give you an example later today or tomorrow.
Hello @bryantbiggs! Do the attached screenshot and the code I provided when opening the issue help with troubleshooting, or do you need more info?
No, because the code is not deployable and the screenshot doesn't show the full diff. Adding or removing a nodegroup doesn't remove the configmap, nor does it erase the contents of the configmap. What I suspect you are seeing is merely the computed value diff that isn't fully rendered, since the values aren't known until the change has been applied, as can be seen by the …
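For illustration only, a computed-value diff of that kind can render roughly like this in a plan (hypothetical excerpt; the resource address, role ARN, and values are placeholders, not output from this issue):

```text
# module.eks_cluster.kubernetes_config_map_v1_data.aws_auth[0] will be updated in-place
  ~ resource "kubernetes_config_map_v1_data" "aws_auth" {
        id   = "kube-system/aws-auth"
      ~ data = {
          ~ "mapRoles" = <<-EOT
                - groups:
                  - system:bootstrappers
                  - system:nodes
                  rolearn: arn:aws:iam::111122223333:role/existing-node-role
                  username: system:node:{{EC2PrivateDNSName}}
            EOT -> (known after apply)
        }
    }
```

In a diff like this, the existing entries only look removed because the new value is shown as `(known after apply)` until the change is actually applied.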
@bryantbiggs here you go: I've added a clusterrole, and then an eks-addon has been updated. Look at the plan:
99.99% of that has nothing to do with this module. I see one resource change related to the cluster, which looks like it's just pulling the latest patch version:
# module.eks_cluster.aws_eks_addon.this["coredns"] will be updated in-place
~ resource "aws_eks_addon" "this" {
~ addon_version = "v1.10.1-eksbuild.6" -> "v1.10.1-eksbuild.7"
id = "songfinch-production-us-east-1:coredns"
tags = {
"environment" = "production"
}
# (8 unchanged attributes hidden)
# (1 unchanged block hidden)
}
OMG, my fault. Sorry for the noise.
Closing this for now - we're close to shipping v20.0 (#2858), which replaces the use of the aws-auth configmap with cluster access entries and improves this entire experience.
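For reference, a minimal, hypothetical sketch of the v20-style replacement (cluster name, ARNs, and the omitted networking inputs are placeholders, not a definitive configuration):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "example"
  cluster_version = "1.29"

  # Cluster access entries replace aws-auth configmap management
  authentication_mode                      = "API_AND_CONFIG_MAP"
  enable_cluster_creator_admin_permissions = true

  access_entries = {
    admin = {
      principal_arn = "arn:aws:iam::111122223333:role/admin-role"

      policy_associations = {
        cluster_admin = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }
}
```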
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
Adding a new nodeGroup to an existing EKS cluster changes the aws-auth configmap in an undesired way: it removes existing roles from the configmap.
Versions
Module version [Required]: 19.21.0
Terraform version: 1.6.6
Reproduction Code
Steps to reproduce the behavior:
Create a new nodeGroup inside the eks module, then run terraform plan (see the sketch below these steps).
I'm using Terraform workspaces.
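A minimal, hypothetical sketch of that step with 19.x inputs (cluster name, instance types, sizes, and networking inputs are placeholders, not the reporter's actual configuration):

```hcl
module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.21.0"

  cluster_name    = "example"
  cluster_version = "1.28"
  vpc_id          = var.vpc_id
  subnet_ids      = var.private_subnet_ids

  # The module writes node group roles into the aws-auth configmap
  manage_aws_auth_configmap = true

  eks_managed_node_groups = {
    existing = {
      instance_types = ["m5.large"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }

    # Step being reproduced: add a second node group, then run `terraform plan`
    new_pool = {
      instance_types = ["m5.xlarge"]
      min_size       = 1
      max_size       = 3
      desired_size   = 1
    }
  }
}
```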
Expected behavior
A new nodeGroup should be created and the aws-auth configmap should be updated to add the required roles for the new nodeGroup.
Actual behavior
aws-auth is changed in an undesired way: it removes existing roles from the configmap.
Terminal Output Screenshot(s)
Attached above ☝️