
Zarf doesn't consider $KUBECONFIG env when running commands #315

Closed
YrrepNoj opened this issue Feb 11, 2022 · 4 comments · Fixed by #316

Comments

@YrrepNoj (Contributor)

Summary

Now that Zarf supports deploying onto existing clusters, it should consider the user's local $KUBECONFIG env when determining whether there is an existing cluster to deploy onto. Currently, Zarf is hard-coded to look for an existing kubeconfig at ~/.kube/config. That was perfectly fine when Zarf was also responsible for standing up its own k3s cluster, but it isn't as clear-cut now.

Steps to reproduce

export KUBECONFIG=~/.kube/otherconfig
kind create cluster
kubectl get pods -A                            # Returns the pods on the newly created cluster
zarf init --confirm                            # Fails because Zarf can't find a k8s cluster, even though kubectl above could

Expected Behavior

The Zarf CLI should check if the $KUBECONFIG env is set and prioritize using it whenever possible.
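The requested lookup order can be sketched in shell terms (an illustration of the expected behavior only, not Zarf's actual code; `kubeconfig_path` is a hypothetical helper name):

```shell
#!/bin/sh
# Sketch of the requested lookup order (illustration, not Zarf's
# implementation): prefer $KUBECONFIG when it is set and non-empty,
# otherwise fall back to the conventional ~/.kube/config.
kubeconfig_path() {
  printf '%s\n' "${KUBECONFIG:-$HOME/.kube/config}"
}

KUBECONFIG=/tmp/otherconfig kubeconfig_path   # prints /tmp/otherconfig
unset KUBECONFIG
kubeconfig_path                               # prints the ~/.kube/config default
```

In Zarf's Go code, the same behavior could likely come for free from client-go's `clientcmd.NewDefaultClientConfigLoadingRules()`, which already honors the `KUBECONFIG` environment variable.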

@RothAndrew (Contributor) commented Feb 11, 2022

@YrrepNoj can you try something a little different for me and paste the console results in?

export KUBECONFIG=~/.kube/otherconfig          # Configure system to talk to cluster A
kubectl get pods -A                            # Confirm connection to cluster A
kind create cluster                            # Create cluster B
echo $KUBECONFIG                               # See whether kind changed KUBECONFIG
kubectl get pods -A                            # Confirm connection to cluster B

@YrrepNoj (Contributor, Author)

@RothAndrew I think I understand what you're asking. After that series of commands, here is what the kubeconfig at $KUBECONFIG looks like:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:50685
  name: kind-cluster-a
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:51061
  name: kind-cluster-b
contexts:
- context:
    cluster: kind-cluster-a
    user: kind-cluster-a
  name: kind-cluster-a
- context:
    cluster: kind-cluster-b
    user: kind-cluster-b
  name: kind-cluster-b
current-context: kind-cluster-b
kind: Config
preferences: {}
users:
- name: kind-cluster-a
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kind-cluster-b
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

@RothAndrew (Contributor)

What is the error when you try to run zarf init?

@YrrepNoj (Contributor, Author) commented Feb 11, 2022

Assuming your $KUBECONFIG is different from ~/.kube/config, you will see this error:
ERROR: Unable to connect to the K8s cluster

@YrrepNoj YrrepNoj moved this from New Requests to Under Review in Zarf Project Board Feb 14, 2022
Repository owner moved this from Under Review to Done in Zarf Project Board Feb 14, 2022