
Support for running port-forward while Terraform is operating #812

Closed
dpkirchner opened this issue Apr 6, 2020 · 15 comments
Comments

@dpkirchner

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

It would be helpful to have a way to temporarily enable port-forwarding, perhaps as some sort of data resource, that would allow us to expose services in Kubernetes so that Terraform can send requests over a local port.

For example, let's say you're running Spinnaker in your Kubernetes cluster (as is typical). In order to use the Spinnaker provider, you need to be able to connect to Spinnaker's API gateway on port 8084. You could leave that port open 24/7, but it would likely be safer to only access it over an authenticated tunnel, à la kubectl port-forward svc/spin-gate 8084, and only when necessary.

I'm sure this would be generally useful for other providers. SQL, for example.

Currently, I run the port-forward command in a loop in the background while Terraform runs. Sometimes the tunnel is successfully created before Terraform gets to the resources that require it, but most of the time it isn't, and then I have to run terraform again to finish the entire plan.

Potential Terraform Configuration

data "kubernetes_service" "svc" { # or resource
  metadata {
    name      = "svc"
    namespace = "ns"
  }
}

data "kubernetes_port_forward" "svc" {
  namespace    = kubernetes_service.svc.metadata[0].namespace
  service_name = kubernetes_service.svc.metadata[0].name
  service_port = 8084 # or maybe strings for named ports
  local_port   = 9000 # optional
}

The data resource would need to be able to export the local port. If local_port isn't set perhaps the resource could allocate a random port.

The data resource would need to block until the service is reporting ready/referring to live pods (or some timeout is hit).
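
A downstream provider could then point at the forwarded port. A rough sketch of what that might look like (the spinnaker provider name and its server argument are illustrative only, and local_port is the attribute this proposal would export, not an existing feature):

provider "spinnaker" {
  # Hypothetical: consume the local port exported by the proposed data source.
  server = "http://localhost:${data.kubernetes_port_forward.svc.local_port}"
}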

References

This is distinct from the existing provisioner + connection tunneling as it's for resources, not running specific commands on a remote server.

Example of port-forward use

Port forward API

Example of someone programmatically enabling port forwarding

@aareet added the acknowledged (issue has undergone initial review and is in our work queue) and size/XXL labels Jun 3, 2020
@jrhouston
Collaborator

jrhouston commented Nov 18, 2020

This feature request has been open for a while and seems to have attracted a significant number of 👍 so I'd like to do two things:

  1. Find out how people are working around this at the moment. Has anyone made something like this work by running kubectl port-forward inside a null resource, for example?

  2. Collect any additional proposals for how this feature should work. I would propose that perhaps this should be implemented as a resource rather than a data source, as Terraform would be creating a tunnel not just fetching some data.

edit: We also have to figure out what problems we are going to see when setting up a tunnel, blocking until it's ready, and then passing along the details to the provider block of the downstream resources.

@lawliet89
Contributor

lawliet89 commented Nov 19, 2020

From a user POV, I think it would be best if the port forwarding stays active from the earliest point at which it can be created until the end of the Terraform operation.

The problem I can foresee with implementing this is that it does not really fit into the lifecycle of a Terraform resource. I'm not sure how it can be implemented with the SDK tools we have.

@hrvolapeter
Contributor

hrvolapeter commented Jan 7, 2021

Hello everyone,
I was pointed here from Slack, where I was proposing a slightly different solution.

Use case: I want to apply Kubernetes resources to private clusters that are only reachable over a proxy. Currently the best solution is probably to split the Terraform definition into stages, first setting up a bastion and then executing the second stage with e.g. HTTPS_PROXY=localhost:8888 terraform plan; however, this overly complicates the whole Terraform project structure.
I was wondering whether the community would be willing to accept a new provider option, e.g.:

provider "kubernetes" {
  version          = ">= 1.11.0"
  load_config_file = false
  host                   = "https://${module.cluster.cluster_endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.cluster.cluster_ca_certificate)
  bastion_host           = localhost
  bastion_port           = 8888
}

This would essentially allow a particular provider to be executed through a proxy without affecting the other providers.
It can then be combined with a null_resource that creates the proxy connection, e.g. gcloud compute ssh cluster-bastion --project cluster-x --zone europe-west1-c -- -L 8888:localhost:8888 (sketched below). Something similar is done for provisioners.

This alternative would be relatively easy to implement: set the proxy environment variable for the Kubernetes provider's client when these options are passed to it.
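
For the null_resource side, a minimal sketch (the resource name is illustrative, and the -f and -N SSH flags are assumed here to background the tunnel and skip running a remote command):

resource "null_resource" "bastion_tunnel" {
  provisioner "local-exec" {
    # -f backgrounds ssh once the tunnel is up, -N skips running a remote command
    command = "gcloud compute ssh cluster-bastion --project cluster-x --zone europe-west1-c -- -f -N -L 8888:localhost:8888"
  }
}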

@dpkirchner I'm not sure I like the idea of defining a proxy as a resource when it's not really a resource but just some sort of hack. Do you have examples where something similar was done in other providers?

@dpkirchner
Author

@hrvolapeter I did see one in another provider but unfortunately I can't find it now. It wasn't exactly the same but IIRC it created a connection that persisted throughout execution. I agree it's not a resource, and definitely a hack, but ultimately it's necessary that whatever it is can be depended on by other resources or modules so we don't try to create plans or apply changes without the tunnel.

By the way, in my specific use case, most of my resources don't speak HTTP/S and use community providers, so for it to work for me personally it'd need to support plain old TCP/TLS.

This might be best implemented as a third type of runtime-only "resource" that can be depended on, but I dunno.

@hrvolapeter
Contributor

I've implemented this example combining the external provider with a Python script to set up an SSH bastion, together with the bastion_host setting, for which I've opened PRs against the kubernetes and helm providers. The same pattern can also be used for TCP proxies (a rough sketch follows the link below):

jenkins-x/terraform-google-jx@647deca#diff-dc46acf24afd63ef8c556b77c126ccc6e578bc87e3aa09a931f33d9bf2532fbbR66
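
In outline, the external-provider pattern looks roughly like this (the script path, query keys, and result key are illustrative only; the actual implementation in the linked commit may differ):

data "external" "bastion_tunnel" {
  # The script opens the SSH tunnel and prints a JSON object on stdout,
  # e.g. {"port": "8888"}, which Terraform exposes as result.port.
  program = ["python3", "${path.module}/scripts/start_tunnel.py"]

  query = {
    bastion = "cluster-bastion"
    port    = "8888"
  }
}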

@adrien-barret

Any update on this?
@dpkirchner proposed a good syntax:

data "kubernetes_port_forward" "svc" { namespace = kubernetes_service.svc.metadata[0].namespace service_name = kubernetes_service.svc.metadata[0].name service_port = 8084 # or maybe strings for named ports local_port = 9000 # optional }

@dpkirchner
Author

I never did find the other provider that had something analogous, so I kind of just forgot about this request. I've since stopped using the providers that need these non-HTTPS methods, so I don't have a use case any longer.

@feluelle

I solved it by using a null_resource with a local-exec provisioner:

resource "null_resource" "localstack_port_forwarding" {
  provisioner "local-exec" {
    command = "kubectl port-forward svc/${helm_release.localstack.name} 4566:4566 --namespace ${kubernetes_namespace.localstack.metadata.0.name}"
  }
}

Note it requires kubectl to be installed locally, of course, so it is not the same as using a provider. But I like this solution anyway, because it seamlessly lets me create the localstack Helm chart and the local AWS resources that depend on the port being forwarded.

@chkp-amirtal

The local-exec option does not provide a complete solution:

  • The null resource will only be created on apply, not on plan.
  • Providers requiring access to the forwarded resources will fail when running plan, e.g.:

module.eks.module.grafana.grafanaauth_api_key.api_key: Refreshing state... [id=1]

Error: Get "http://admin:***@127.0.0.1:8000/api/auth/keys?includeExpired=true": dial tcp 127.0.0.1:8000: connect: connection refused

  • Even when setting trigger = timestamp, local-exec is not triggered when running plan.
  • In addition, since the port-forward command never exits, the null resource gets stuck on creating...

@fabiomarinetti

Any news on this? I am interested too. It could be useful to interact with "protected services" in a k8s cluster.

@feluelle

feluelle commented Jan 4, 2022

@amirtal-cp you are right, I noticed that too. I ended up using the command manually.

@john-owens

This would be a great feature to have. Like most people here, I think, I had to resort to a hacky (but workable) solution and went with Terragrunt before and after hooks.
The null_resource and local-exec approach didn't work for me either, due to needing to connect during plan, and I ran into issues where the connection wasn't getting closed, causing the job to time out.

If anyone is looking to use this option, add something like this to your terragrunt.hcl and have it run a script like:

terraform {
  source = "${get_parent_terragrunt_dir()}/_shared/_common"
  before_hook "before_hook" {
    commands     = ["apply", "plan"]
    execute      = ["${get_parent_terragrunt_dir()}/_shared/start_service.sh"]
    # get_env("KUBES_ENDPOINT", "somedefaulturl") you can get env vars or pass in params as needed to the script
  }

  after_hook "after_hook" {
    commands     = ["apply", "plan"]
    execute      = ["${get_parent_terragrunt_dir()}/_shared/stop_service.sh"]
    run_on_error = true
  }
}

The start script would look something like this:

#!/usr/bin/env bash
## run whatever auth you need to connect 
## example in this case consul
port=8500
service="service/consul-server"
# write the port-forward command into a helper script (double quotes so the
# variables expand now), then run it detached in the background
echo -e "kubectl port-forward $service $port:$port -n some-namespace\ndisown" > run_service.sh && chmod +x run_service.sh
nohup bash run_service.sh </dev/null >/dev/null 2>&1 &

sleep 30s
# the sleep gives the port-forward a chance to finish establishing the connection before terragrunt/terraform starts running

For the stop script, you'd just need something to look up the port-forward process and kill it.

@jrhouston
Collaborator

jrhouston commented Apr 8, 2022

I did some experimenting to see if I could get something like this to work, and unfortunately there won't be a way to open a port-forward in-process in the Kubernetes provider that will be reliably accessible to a downstream dependency.

The way Terraform works is:

  1. Start provider plugin process
  2. Make the necessary protocol requests for resources that provider exposes
  3. Stop the plugin process when it's done.

And this can happen sequentially or in parallel for each provider, depending on the config. This means that if we started a goroutine inside this provider with the port-forward, there's no guarantee Terraform won't stop the process before it gets to the downstream provider that needs the port-forward.

Moreover, Terraform doesn't yet support any sort of hooks for providers to do "post/after" plan/apply cleanup, so if we had the provider start a port-forward as a new process we wouldn't be able to terminate it; killing the process when the provider process gets stopped would have the same problem as above.

Ultimately this means that the port-forward would need to be started within the process of the downstream dependency, i.e. if you are using a provider for SQL, the port-forward needs to be set up inside that plugin process. That either means we have to add k8s port-forward support to any provider that might need it (dragging k8s dependencies into projects that ideally don't need them) or have a way for Terraform itself to orchestrate opening the port-forward in some generic fashion that could be applied to any provider.

So I'm going to mark this issue as upstream, as this is essentially something we won't be able to solve in this provider.

@jrhouston
Collaborator

I'm going to close this issue, as this is something we will never implement in this provider and is currently not possible with Terraform without changes to core. There are a lot of reactions to this issue, so I would recommend opening an issue on the Terraform core repo similar to the request for SSH tunnel support: hashicorp/terraform#8367

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Oct 12, 2022