Can't tear down cluster after provisioning. #58
Is there a reason why you aren't using
Because I have other things in my Terraform workspace that I do not want to destroy. Also, when trying to run `terraform destroy` to destroy just the module, I am unable to, because Terraform Cloud is connected to VCS and says the changes must be committed to VCS before they can be applied.
Instead of commenting out the module, have you tried setting the `enabled` flag to `false`?
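A minimal sketch of that suggestion, assuming the standard Cloud Posse `enabled` input; the module label and pinned version here are illustrative (the version is taken from a later comment in this thread):

```hcl
# Sketch: instead of commenting out the module block, flip its
# `enabled` input so the next apply destroys the module's resources
# while the module block (and its provider wiring) stays in the config.
module "eks_cluster" {
  source  = "cloudposse/eks-cluster/aws"
  version = "0.30.2"

  enabled = false # was true; false should tear the resources down

  # ... other inputs unchanged ...
}
```

The idea is that keeping the module block in the configuration preserves the provider configuration Terraform needs in order to delete the kubernetes resources it created.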
Experimenting with it now...
OK, so I got a cluster up and running with the enabled flag, but when I set it back to disabled I get this:
I assume this is because the provider is being deleted as well and no longer has a reference to active resources.
It looks like the problem lies on this line: `terraform-aws-eks-cluster/auth.tf`, line 72 at commit 79d7bf7.
Because `enabled` is set to `false`, it no longer references the created cluster.
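The failure mode being described is a common one with the `enabled`/`count` pattern. The sketch below is illustrative only (it is not the module's actual `auth.tf`); the data source and provider names are assumptions:

```hcl
# Illustrative sketch: when `enabled` is false, the conditional data
# source produces zero elements, so the provider configuration loses
# its connection details -- while the aws-auth config map still exists
# in state and must be destroyed *through* that provider.
data "aws_eks_cluster" "this" {
  count = var.enabled ? 1 : 0
  name  = var.cluster_name
}

provider "kubernetes" {
  # With count = 0, this join evaluates to an empty string, so the
  # provider cannot authenticate to delete the kubernetes_config_map.
  host = join("", data.aws_eks_cluster.this[*].endpoint)
}
```

In other words, disabling the module removes the credentials the kubernetes provider needs before the resources that depend on those credentials have been destroyed.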
Currently, I can't tear down this cluster without doing it manually.
Any progress on this? It's really a pain having to destroy the cluster manually and leaving my Terraform state broken.
Deleting the resource:

```
$ [terraform|terragrunt] state list | grep kubernetes_config_map
$ [terraform|terragrunt] state rm [resource name output from previous command]
```
Thanks, I'll check it out.
I ran into this as well, but for a slightly different reason. We set the EKS cluster endpoint to be Private only, and used During a
Deleting the specific resource from state as mentioned above by @vkhatri worked for us and let us continue with the destroy.
I've had several different issues with destroying the cluster; this is the latest. I'm trying to move a test cluster from one region to another and it is failing. Not the highest priority, for me at least, but one I'd like to help resolve.

I know some of my cluster was destroyed, but now I can't even run a plan, so I'll have to check the current state to discern which resources remain. Having used GKE, I'm not a fan of the AWS EKS permissions requiring interplay between IAM and Kubernetes RBAC; based on the error, that looks like the issue. If it helps, I can still "manage" the cluster using kubectl, so some remnants remain.

For the record, this is my error:

Also for the record, the aws-auth configmap still exists but is "empty". Here are the contents:
There definitely seems to be some dependency issue between Terraform kubernetes resources and EKS that shows up not just here, but in other Terraform modules managing EKS.
Apologies for the confusion, but I clearly don't understand the interplay between IAM and the aws-auth configmap. I just created a new cluster using 0.30.2 of cloudposse/eks-cluster/aws and then, without doing anything else, destroyed it. After a lot of Terraform log output describing what was destroyed, the end result is:

Error: Unauthorized

I now think the underlying problem is with the hashicorp/kubernetes provider. Nonetheless, I can't reliably destroy a cluster using Terraform even without adding any other resources (e.g. node groups). What am I doing wrong?
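One mitigation sometimes used with the hashicorp/kubernetes provider (it is not something proposed in this thread, and the variable names below are assumptions) is exec-based authentication, so credentials are fetched at apply/destroy time rather than read from a data source that may no longer exist:

```hcl
# Hedged sketch: exec-based provider auth invokes `aws eks get-token`
# lazily, so the kubernetes provider can still authenticate during a
# destroy instead of relying on a token captured in a data source.
provider "kubernetes" {
  host                   = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_ca_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}
```

Whether this helps depends on whether the cluster endpoint and CA data are still resolvable when `enabled = false`, which is the crux of the issue above.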
I'm happy to provide all of the Terraform files I have; there is nothing proprietary in them, if that helps troubleshoot this. In my TFE workspace I'm only setting a few variables like region, AZs, and the standard Cloud Posse context variables.
All of the above seems related to the issues around using
Describe the Bug
After provisioning an EKS Cluster with this module, you cannot then tear it down by commenting out the module.
Expected Behavior
A clear and concise description of what you expected to happen.
Steps to Reproduce
Screenshots
Environment (please complete the following information):
Terraform Cloud