
If ~/.kube/config specifies insecure-skip-tls-verify: true, Kubernetes provider cannot override it #189

Closed
RobinsonWM opened this issue Oct 10, 2018 · 5 comments

@RobinsonWM

Summary

It appears that if your ~/.kube/config specifies a cluster with insecure-skip-tls-verify: true, then it is not possible to use Terraform to manage a different Kubernetes cluster and also validate the TLS certificate. Setting insecure = false does not appear to override the setting from ~/.kube/config.

There's an obvious workaround (update your ~/.kube/config file), but it's not immediately obvious that the problem is caused by the Kubernetes provider being unable to override the setting in your ~/.kube/config file.

Terraform Version

Terraform v0.11.8
+ provider.google v1.16.2
+ provider.kubernetes v1.2.0
+ provider.random v2.0.0

Affected Resource(s)

kubernetes provider

Terraform Configuration Files

terraform {
  required_version = "~> 0.11.8"
}

provider "kubernetes" {
  version                = "~> 1.2"
  host                   = "36.4.3.2"
  username               = "user"
  password               = "pw"
  cluster_ca_certificate = "${file("ca.crt")}"
  insecure               = false
}

resource "kubernetes_namespace" "namespace" {
  metadata {
    name = "namespace"
  }
}

kubeconfig

My ~/.kube/config has a single cluster, and it has insecure-skip-tls-verify set to true. This is not the cluster I am using Terraform to manage; it just happens to be in my configuration.

apiVersion: v1
clusters:
- cluster:
    server: https://192.168.0.100:8443
    insecure-skip-tls-verify: true
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: C:\Users\wes.robinson\.minikube\client.crt
    client-key: C:\Users\wes.robinson\.minikube\client.key

Debug Output

https://gist.github.com/RobinsonWM/8f927ee586ba51c89809ebcd782fcbdc

Expected Behavior

It should have authenticated to my k8s cluster and created a namespace.

Actual Behavior

It gave an error message and stopped before authenticating to k8s. The error comes from the Kubernetes Go client library: Terraform passed a cluster CA certificate, but it also passed the insecure flag requesting that the certificate not be validated, and the client rejects that combination:

>terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


Error: Error refreshing state: 1 error(s) occurred:

* provider.kubernetes: Failed to configure: specifying a root certificates file with the insecure flag is not allowed

Steps to Reproduce

  1. Configure your ~/.kube/config to look like the one above - specifically, a single cluster that has insecure-skip-tls-verify: true
  2. Create your Terraform configuration like mine, specifically with a cluster_ca_certificate and with insecure set to false
  3. Run terraform plan or terraform apply

Important Factoids

We have reproduced this on Windows 10 and Mac OS X.

References

I think this might be very similar to an issue that was fixed in the Datadog provider: hashicorp/terraform#12168

@jamesrcounts

Had a similar problem with the helm provider. I seem to be having success with the following workaround: add a config_context argument to the kubernetes block and set it to a non-existent context.

provider "helm" {
  version = "~> 0.7"

  kubernetes {
    host                   = "redacted"
    client_certificate     = "redacted"
    client_key             = "redacted"
    cluster_ca_certificate = "redacted"
    config_context         = "nothing"
  }
}

In my case the only context in my config is docker-for-desktop with insecure-skip-tls-verify: true. Setting the config_context to something that doesn't exist seems to avoid loading the flag from the existing context.

@adamdodev

Interestingly, @jamesrcounts' solution only seems to work with the helm provider; adding it to the main kubernetes provider throws an error for us:

provider "kubernetes" {
  host                   = "redacted"
  client_certificate     = "redacted"
  client_key             = "redacted"
  cluster_ca_certificate = "redacted"
  config_context         = "none"
}

provider "helm" {
  install_tiller = "false"

  kubernetes {
    host                   = "redacted"
    client_certificate     = "redacted"
    client_key             = "redacted"
    cluster_ca_certificate = "redacted"
    config_context         = "none"
  }
}

provider.kubernetes: Failed to load config (/Users/redacted/.kube/config; overriden context; config ctx: none): context "none" does not exist

Latest version of both providers.

@blandir

blandir commented Jul 16, 2019

@adamdodev the current kubernetes provider (1.8.0) has an argument:
load_config_file = false

From the docs

(Optional) By default the local config (~/.kube/config) is loaded when you use this provider. Setting this option to false disables this behaviour.

This worked for me with the same issue you have. I haven't checked which version introduced the argument, however.
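
For reference, here is roughly what the original provider block from this issue would look like with the workaround applied. This is an untested sketch; it assumes a provider version that supports load_config_file (e.g. 1.8.0 as above):

provider "kubernetes" {
  version                = "~> 1.8"

  # Don't read ~/.kube/config at all, so its
  # insecure-skip-tls-verify flag is never merged in.
  load_config_file       = false

  host                   = "36.4.3.2"
  username               = "user"
  password               = "pw"
  cluster_ca_certificate = "${file("ca.crt")}"
  insecure               = false
}

With the kubeconfig skipped entirely, the provider only sees the credentials given here, and the CA certificate should be validated as expected.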

@dak1n1

dak1n1 commented Apr 15, 2020

As per @blandir's comment, load_config_file = false should solve this. Let us know if this is still an issue. Thanks!

@ghost

ghost commented May 16, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators May 16, 2020