terraform plan - dial tcp 127.0.0.1:80: connect: connection refused #1102
@shanehughes1990 this appears to be happening because the cluster is being configured in the same apply operation as your other resources (described in the docs as an unstable setup - https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#stacking-with-managed-kubernetes-cluster-resources).
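For context, the unstable "stacking" pattern the linked docs describe looks roughly like the sketch below: a kubernetes provider block fed directly from attributes of a cluster resource created in the same configuration. The names here are illustrative, not taken from the reporter's private modules.

```hcl
# Hypothetical illustration of the "stacked" setup the docs warn about.
resource "google_container_cluster" "primary" {
  name               = "example" # illustrative name
  location           = "us-central1"
  initial_node_count = 1
}

data "google_client_config" "default" {}

# The provider's credentials depend on a resource in the same state; when
# the cluster must be replaced, the provider can be left with stale or
# unknown connection details and falls back to a default localhost host,
# producing the "dial tcp 127.0.0.1:80" error in the title.
provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
}
```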
So okay, because it has to delete the namespace before recreating the master, that's the cause of it? Also, why does it only happen on Terraform 0.14? It works as I expect on 0.13.5. Is there a way around this? Using Terraform Cloud in this setup does not work; it fails. Would destroying resources before changing the GKE setup be the solution?
@shanehughes1990 the difficulty you're describing, with behaviour varying between versions, is why we discourage having cluster configuration in the same apply as other resources. On your local machine you could use -target to do more directed applies (see the sketch below), but in Terraform Cloud you would need to configure run triggers in order to have this separation. We are aware of this general issue and are tracking use cases and progress in hashicorp/terraform#4149.
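A sketch of the -target approach, using the module name mentioned later in the issue body (module.gke_master); your resource addresses will differ:

```sh
# Converge the cluster itself first, then the resources that depend on it.
terraform apply -target=module.gke_master
terraform apply
```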
You can learn more about how to use run triggers by following the learn guide.
I'll close this issue for now - please reopen if you continue to face difficulty.
Please re-open. This happens for us too. We were happily working away in TF13 with multiple applies, destroys, and updates all working perfectly fine, with infrastructure, K8S, and Helm in sub-modules with dependencies between one another in a composite module. We didn't have any issues. However, as soon as we moved to TF14 - boom! It all stopped working, and we get the exact same message as mentioned above.

I am afraid that targeted applies will greatly increase the deployment time for us. It does feel like a "go-to" response for these types of challenges in various bug reports; it would be great to simply get them resolved. Evidently, something has broken between TF13 and TF14.

It should also be noted that if we do a destroy, everything works as expected and TF can successfully connect to K8S to destroy the required resources. This only occurs during a plan against already-applied infrastructure.
We have since moved all in-cluster resources to a different state and everything works as intended. That would be my suggestion to you as well: make sure you have no Kubernetes resources in the same state as the resources that build the cluster.
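A minimal sketch of that separation, assuming the GKE cluster lives in a different state and is looked up here with data sources (names are illustrative):

```hcl
# In the state that holds only in-cluster resources: read the existing
# cluster instead of creating it.
data "google_container_cluster" "primary" {
  name     = "example" # illustrative; must match the real cluster
  location = "us-central1"
}

data "google_client_config" "default" {}

# The provider's connection details now come from data sources, which are
# resolved during refresh rather than depending on a managed resource in
# the same apply.
provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.primary.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
}

resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}
```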
This issue was originally opened by @shanehughes1990 as hashicorp/terraform#27363. It was migrated here as a result of the provider split. The original body of the issue is below.
Terraform Version
have also tried
Terraform Configuration Files
main.tf
ingress.tf
sqlproxy.tf
providers.tf
versions.tf
module main.tf
module outputs.tf
module variables.tf
Debug Output
https://gist.github.com/shanehughes1990/12f787bbcd7f22d2ca034e68195ce47e
Crash Output
Expected Behavior
After Terraform has applied the config, everything comes up as expected. When attempting to change enable_private_cluster to true, enable_custom_networking to true, or any other setting in the gke_master module, Terraform errors when it should produce a valid plan with the changes.
Actual Behavior
terraform plan errors with: dial tcp 127.0.0.1:80: connect: connection refused
Steps to Reproduce
Hard for me to explain how to reproduce, as these are all private modules, but try bringing up a cluster and some namespaces, then change something in the google_container_cluster resource and run terraform plan again (see the sketch below).
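As a rough stand-in for the private modules, combining the stacked provider configuration sketched earlier with a single in-cluster resource should be enough to reproduce (again, hypothetical names):

```hcl
# Added alongside the stacked provider configuration sketched earlier in
# this thread: one in-cluster resource in the same state as the cluster.
resource "kubernetes_namespace" "repro" {
  metadata {
    name = "repro"
  }
}
```

Apply once, then change a cluster attribute that forces replacement (for example, the private-cluster settings mentioned under Expected Behavior) and run terraform plan again.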
Additional Context
This DOES NOT happen on Terraform 0.13.5, where everything works as intended.
Currently testing on Ubuntu 20
This "will" be running in terraform cloud, (State is saved there), Was working out all the bugs locally before I let terraform cloud take over.
Running on GKE, RELEASE channel
References