Support for running port-forward while Terraform is operating #812
This feature request has been open for a while and seems to have attracted a significant number of 👍, so I'd like to do two things:

edit: We also have to figure out what problems we are going to see when setting up a tunnel, blocking until it's ready, and then passing along the details to the provider block of the downstream resources.
From a user POV, I think it would be best if the port forwarding were active from the earliest point at which it can be created until the end of the Terraform operation. The problem I can foresee with its implementation is that this does not really fit into the lifecycle of a Terraform resource. Not sure how it can be implemented with the SDK tools we have.
Hello everyone,

Use case: I want to apply Kubernetes resources to private clusters that are reachable only over a proxy. Currently the best solution is probably to split the Terraform definition into stages, first setting up the bastion and then executing the second stage with e.g. HTTPS_PROXY=localhost:8888 terraform plan, however this overly complicates the whole Terraform project structure.

```hcl
provider "kubernetes" {
  version                = ">= 1.11.0"
  load_config_file       = false
  host                   = "https://${module.cluster.cluster_endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.cluster.cluster_ca_certificate)
  bastion_host           = "localhost"
  bastion_port           = 8888
}
```

This would essentially allow a particular provider to be executed through a proxy without affecting the other providers. This alternative would be relatively easy to implement: setting the env variable for the kubernetes provider if these options are passed to it.

@dpkirchner I'm not sure if I like the idea of defining the proxy as a resource, where it's not really a resource but just some sort of hack. Do you have examples where something similar was done in other providers?
@hrvolapeter I did see one in another provider but unfortunately I can't find it now. It wasn't exactly the same, but IIRC it created a connection that persisted throughout execution. I agree it's not a resource, and definitely a hack, but ultimately whatever it is needs to be something other resources or modules can depend on, so we don't try to create plans or apply changes without the tunnel. By the way, in my specific use case most of my resources don't speak HTTP/S and use community providers, so for it to work for me personally it'd need to support plain-ol TCP/TLS. This might be best implemented as a third type of runtime-only "resource" that can be depended on, but I dunno.
I've implemented this example combining the external provider with a Python script to set up an SSH bastion, and combined it with the bastion_host setting, for which I created a PR to the kubernetes and helm providers. The same pattern can also be used for TCP proxies.
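For reference, a minimal sketch of that pattern, assuming a hypothetical start_bastion.py helper that opens the SSH tunnel and prints a JSON object with the local endpoint; the data source is the stock hashicorp/external provider, while bastion_host/bastion_port are only the arguments proposed above, not something the kubernetes provider currently supports:

```hcl
# Sketch only: start_bastion.py is a hypothetical helper that opens the SSH
# tunnel and prints JSON such as {"host": "localhost", "port": "8888"}.
data "external" "bastion" {
  program = ["python3", "${path.module}/start_bastion.py"]
}

provider "kubernetes" {
  host                   = "https://${module.cluster.cluster_endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.cluster.cluster_ca_certificate)

  # Proposed (not yet existing) arguments wired to the tunnel endpoint:
  # bastion_host = data.external.bastion.result.host
  # bastion_port = data.external.bastion.result.port
}
```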
Any update on this?
I never did find the other provider that had something analogous, so I kinda just forgot about this request. I ended up no longer using the providers that use these non-https methods, so I don't have a use case any longer.
How I solved it is using a null resource with a local-exec provisioner:

```hcl
resource "null_resource" "localstack_port_forwarding" {
  provisioner "local-exec" {
    command = "kubectl port-forward svc/${helm_release.localstack.name} 4566:4566 --namespace ${kubernetes_namespace.localstack.metadata.0.name}"
  }
}
```

Note it requires the kubectl CLI to be available locally.
The null resource will only be created on apply, even when setting triggers. In addition, since the port-forward command never exits, the null resource gets stuck on creating...
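One workaround for the blocking behaviour, sketched below on the assumption that a short fixed sleep is acceptable, is to background the port-forward inside the provisioner and force the null resource to be recreated on every apply (the trigger and sleep values are arbitrary, and the forward still only starts during apply, not plan):

```hcl
resource "null_resource" "localstack_port_forwarding" {
  # timestamp() changes on every run, so the resource (and the forward)
  # is recreated on each apply.
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    # Background the port-forward so the provisioner returns, then give the
    # tunnel a few seconds to come up before dependent resources are applied.
    command = <<-EOT
      nohup kubectl port-forward svc/${helm_release.localstack.name} 4566:4566 \
        --namespace ${kubernetes_namespace.localstack.metadata.0.name} >/dev/null 2>&1 &
      sleep 5
    EOT
  }
}
```

Downstream resources still need an explicit depends_on on this null_resource, and nothing cleans the forward up afterwards, so it remains very much a hack.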
Any news on this? I am interested too. It could be useful to interact with "protected services" in a k8s cluster.

@amirtal-cp you are right, I noticed that too. I ended up using the command manually.
This would be a great feature to have. Like most people here, I think, I had a hacky (but workable) solution: terragrunt before and after hooks. If anyone is looking to use this option, add something like this to your terragrunt.hcl and have it run a script:

```hcl
terraform {
  source = "${get_parent_terragrunt_dir()}/_shared/_common"

  before_hook "before_hook" {
    commands = ["apply", "plan"]
    execute  = ["${get_parent_terragrunt_dir()}/_shared/start_service.sh"]
    # get_env("KUBES_ENDPOINT", "somedefaulturl") - you can get env vars or pass in params as needed to the script
  }

  after_hook "after_hook" {
    commands     = ["apply", "plan"]
    execute      = ["${get_parent_terragrunt_dir()}/_shared/stop_service.sh"]
    run_on_error = true
  }
}
```

The start script would look something like this:

```bash
#!/usr/bin/env bash
## run whatever auth you need to connect
## example in this case: consul
port=8500
service="service/consul-server"
# double quotes so $service and $port are expanded into the generated script
echo -en "kubectl port-forward $service $port:$port -n some-namespace\ndisown" >> run_service.sh && chmod +x run_service.sh
nohup bash run_service.sh </dev/null >/dev/null 2>&1 &
# give the port-forward a chance to finish establishing the connection
# before terragrunt/terraform starts running
sleep 30s
```

For the stop script you'd just need something that looks up the process and kills it.
I did some experimenting to see if I could get something like this to work, and unfortunately there won't be a way to open a port-forward in-process in the Kubernetes provider that will be reliably accessible to a downstream dependency. The way Terraform works is roughly: it starts a provider's plugin process, configures it, plans/applies that provider's resources, and then stops the plugin process.

And this can happen sequentially or in parallel for each provider, depending on the config. This means that if we started a goroutine inside this provider with the port-forward, there's no guarantee Terraform won't stop the process before it gets to the downstream provider that needs the port-forward.

Moreover, Terraform doesn't as of yet support any sort of hooks for providers to do any "post/after" plan/apply cleanup, so if we had the provider start a port-forward as a new process we wouldn't be able to terminate it – killing the process when the provider process gets stopped would have the same problem as above.

Ultimately this means that the port-forward would need to be started within the process of the downstream dependency, i.e. if you are using a provider for SQL, the port-forward needs to be set up inside that plugin process. That either means we have to add k8s port-forward support to any provider that might need it (dragging k8s dependencies into projects that ideally don't need them), or have a way for Terraform itself to orchestrate opening the port-forward in some generic fashion that could be applied to any provider.

So I'm going to mark this issue as upstream, as this is essentially something we won't be able to solve in this provider.
I'm going to close this issue as this is something we will never implement in this provider, and it is currently not possible with Terraform without changes to core. There are a lot of reactions to this issue, so I would recommend opening an issue on the Terraform core repo similar to the request for
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Community Note
Description
It would be helpful to have a way to temporarily enable port-forwarding, perhaps as some sort of data resource, that would allow us to expose services in Kubernetes so that Terraform can send requests over that local port.
For example, let's say you're running Spinnaker in your Kubernetes cluster (as is typical). In order to use the Spinnaker provider, you need to be able to connect to Spinnaker's API gateway on port 8084. You could leave that port open 24/7, but it would likely be safer to only access it over an authenticated tunnel à la kubectl port-forward svc/spin-gate 8084, and only when necessary. I'm sure this would be generally useful for other providers. SQL, for example.
Currently, I run the port-forward command in a loop in the background while Terraform runs. Sometimes the tunnel is successfully created before Terraform gets to the resources that require the tunnel, but most of the time it isn't, and then I have to run Terraform again to finish the entire plan.
Potential Terraform Configuration
The data resource would need to be able to export the local port. If local_port isn't set, perhaps the resource could allocate a random port. The data resource would need to block until the service is reporting ready/referring to live pods (or some timeout is hit).
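For illustration, a sketch using the Spinnaker example above; the kubernetes_port_forward data source, its arguments, and the downstream provider wiring are all hypothetical:

```hcl
# Hypothetical data source; nothing like this exists in the provider today.
data "kubernetes_port_forward" "spin_gate" {
  namespace   = "spinnaker"
  service     = "spin-gate"
  remote_port = 8084
  # local_port omitted: the data source would allocate a random free port and
  # block until the service reports ready (or a timeout is reached).
}

provider "spinnaker" {
  # Hypothetical wiring: downstream providers consume the exported local port.
  address = "http://localhost:${data.kubernetes_port_forward.spin_gate.local_port}"
}
```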
References
This is distinct from the existing provisioner + connection tunneling as it's for resources, not running specific commands on a remote server.
Example of port-forward use
Port forward API
Example of someone programmatically enabling port forwarding