Summary
There doesn't appear to be any way for ArgoCD to access its set of managed clusters via HTTP_PROXY.
Motivation
This is required when one wants to use a tunnel like inlets to remotely access a Kubernetes API server in another cluster.
Imagine you have four staging environments and two production environments. Production is public, and its Kubernetes API certificate carries a TLS SAN for the master node's IP, so ArgoCD can address it directly; each of the staging environments, however, may be inside a private cluster.
Using an SSH tunnel or inlets, we can make the Kubernetes API server of each staging environment appear as a ClusterIP Service within the main ArgoCD management cluster.
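One way to realise that, assuming the tunnel client runs as a pod in the management cluster, is a selector-less Service plus a hand-managed Endpoints object. All names, ports, and addresses below are illustrative:

```yaml
# Sketch: surface the tunnel's local listener as clustera.default.svc
apiVersion: v1
kind: Service
metadata:
  name: clustera
  namespace: default
spec:
  ports:
    - port: 443
      targetPort: 8123   # port the tunnel/inlets server pod listens on
---
apiVersion: v1
kind: Endpoints
metadata:
  name: clustera
  namespace: default
subsets:
  - addresses:
      - ip: 10.42.0.15   # pod IP of the tunnel server (example value)
    ports:
      - port: 8123
```

If the tunnel server runs as a labelled Deployment, a normal Service with a selector is simpler; the selector-less form is shown only to make the indirection explicit.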
The challenge is that when ArgoCD accesses the API server over the tunnel, the name it dials, e.g. clustera.default.svc, will not match the SANs in the server's HTTPS certificate, e.g. kubernetes.default.svc.
Proposal
Turning on TLS Insecure is a workaround, but not one to be encouraged.
Suggestion 1:
Could an HTTP proxy option be added to the argocd cluster add command, such as:
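Something along these lines, where the --proxy-url flag is hypothetical and shown only to illustrate the suggestion:

```shell
# Hypothetical flag, not an existing argocd option:
argocd cluster add clustera --proxy-url http://clustera-proxy.default.svc:8080
```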
Suggestion 2:
Is there a way that ArgoCD can read an HTTP_PROXY from the kubeconfig? (Last time I checked, specifying an HTTPS proxy was planned for kubectl but wasn't implemented yet.)
There is a small HTTPS CONNECT proxy that I created which allowed me to prove out the idea, but I can't find a way for ArgoCD to be configured with a separate HTTPS proxy - one for each cluster it tries to connect to.
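A minimal version of such a proxy fits in a few dozen lines. The sketch below (standard library only, all names my own) tunnels raw bytes after a CONNECT, so TLS still runs end-to-end between the client and the API server:

```python
# Minimal sketch of an HTTPS CONNECT proxy. No TLS termination happens
# here: the proxy only shuttles bytes, so the client negotiates TLS
# directly with the upstream API server.
import socket
import threading

def _pump(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle(client):
    # Read the request head, e.g. "CONNECT host:443 HTTP/1.1\r\n...\r\n\r\n".
    # (Bytes arriving after the header block in the same packet are
    # ignored in this sketch.)
    request = b""
    while b"\r\n\r\n" not in request:
        chunk = client.recv(4096)
        if not chunk:
            client.close()
            return
        request += chunk
    method, target, _ = request.split(b"\r\n", 1)[0].split(b" ", 2)
    if method != b"CONNECT":
        client.sendall(b"HTTP/1.1 405 Method Not Allowed\r\n\r\n")
        client.close()
        return
    host, _, port = target.partition(b":")
    upstream = socket.create_connection((host.decode(), int(port.decode() or "443")))
    client.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n")
    # Shuttle bytes both ways until one side hangs up.
    threading.Thread(target=_pump, args=(client, upstream), daemon=True).start()
    _pump(upstream, client)

def serve(listener):
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

The point of the CONNECT hop is that the client still dials the API server by the name in its kubeconfig, so certificate verification succeeds without TLS Insecure.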
Here is a conceptual diagram of how this would work. Multiple workload clusters can be added with this method, each with its own tiny HTTPS proxy exposed in the main cluster via a tunnel.
In my testing with kubectl and HTTP_PROXY env-var, I was able to get this to work as follows:
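In outline, something like the following, where the proxy address and context name are illustrative; kubectl's HTTP client honours the standard proxy environment variables:

```shell
# Route this kubectl invocation through the per-cluster CONNECT proxy.
# clustera-proxy.default.svc:8080 is an example address for the tiny proxy.
HTTPS_PROXY=http://clustera-proxy.default.svc:8080 \
  kubectl --context clustera get nodes
```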
The missing part of this solution would be configuring ArgoCD to use the appropriate named HTTPS proxy for each remote cluster. For the time being, the workaround for managed Kubernetes is TLS Insecure.
I'm open to hearing other solutions - or about how other users are deploying via ArgoCD to many different private clusters on managed K8s within private VPCs, where the API Server is not available on a public URL.
We have a similar problem. I managed to make the kubeconfig use a proxy config with the following, before running argocd cluster add <kubeconfig-context>:
There is an (almost) undocumented kubectl config option with which you can configure a proxy inside your <kubeconfig-context>: https://stackoverflow.com/a/66547565
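For reference, that option is proxy-url on the cluster entry of the kubeconfig, supported by client-go from around Kubernetes 1.19. Cluster name and addresses below are examples:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: clustera
    cluster:
      server: https://clustera.default.svc:443
      # Route all requests to this cluster through the per-cluster proxy:
      proxy-url: http://clustera-proxy.default.svc:8080
```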
With the above I can get the kubectl invoked by ArgoCD to access the cluster, yet ArgoCD also makes a /version call that still ignores the kubectl proxy-url; it seems to use the Kubernetes REST config (which does not allow for proxy settings right now?).
This is not currently supported because the Cluster data structure doesn't have all the same fields as the kubeconfig Config object. It should be possible to add support for an HTTP proxy by matching the connection options in the Cluster data structure with those of the kubeconfig Config data structure, and making sure the options are carried forward when registering the cluster through argocd cluster add.
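Concretely, the declarative cluster secret format would need to grow a proxy field alongside the existing connection options. A sketch of what that could look like, where the proxyUrl key is hypothetical, not a shipped option:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: clustera-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: clustera
  server: https://clustera.default.svc
  config: |
    {
      "proxyUrl": "http://clustera-proxy.default.svc:8080",
      "tlsClientConfig": {
        "caData": "<base64 CA bundle>"
      }
    }
```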
Where we have access to kubeadm or k3s, we can add an additional TLS SAN name and the solution works by directly tunnelling the API server. @jsiebens has an example of that here: Argo CD for your private Raspberry Pi k3s cluster