diff --git a/docs/pages/getting-started/connect.mdx b/docs/pages/getting-started/connect.mdx
index 1dc9eefd3..0fb44413b 100644
--- a/docs/pages/getting-started/connect.mdx
+++ b/docs/pages/getting-started/connect.mdx
@@ -20,6 +20,14 @@ done
 Switched active kube context to vcluster_my-cluster
 
 By default, the vCluster CLI connects to the virtual cluster either directly (on local Kubernetes distributions) or via port-forwarding for remote clusters. If you want to use vCluster on remote clusters without port-forwarding, you can take a look at [other supported exposing methods](../using-vclusters/access.mdx).
+### Obtain the kubeconfig for your vCluster
+
+Instead of switching the current kube context, you can also print the virtual cluster's kubeconfig and write it to a file:
+
+```bash
+vcluster connect my-cluster --update-current=false --print > /tmp/vcluster.kubeconfig
+```
+
 ## Run kubectl commands
 
 A virtual cluster behaves the same way as a regular Kubernetes cluster. That means you can run any `kubectl` command. Since you are admin of this vCluster, you can even run commands like these:
@@ -64,7 +72,7 @@ To verify this, perform these steps:
 
 1. Check namespaces in the host cluster.
 
-   ```bash 
+   ```bash
    kubectl get namespaces
    ```
 
@@ -79,8 +87,8 @@ To verify this, perform these steps:
   kube-system       Active   11d
   ```

-   Notice that there is **no namespace `demo-nginx`** because this namespace only exists inside the virtual cluster. 
-   
+   Notice that there is **no namespace `demo-nginx`** because this namespace only exists inside the virtual cluster.
+
   Everything that belongs to the virtual cluster always remains inside the vCluster's `vcluster-my-vcluster` namespace.

1. Look for the NGINX deployment.
@@ -122,16 +130,16 @@ To verify this, perform these steps:
   :::

   The vCluster `my-cluster-0` pod contains the virtual cluster’s API server and some additional tools. There’s also a CoreDNS pod, which vCluster uses, and the two NGINX pods.
-   
+
   The host cluster has the `nginx-deployment` pods because the virtual cluster **does not** have separate nodes or a scheduler. Instead, the virtual cluster has a _syncer_ that synchronizes resources from the virtual cluster to the underlying host namespace.

-   The vCluster syncer process tells the underlying cluster to schedule workloads. This syncer process communicates with the API server of the host cluster to schedule the pods and keep track of their state. 
+   The vCluster syncer process tells the underlying cluster to schedule workloads. This syncer process communicates with the API server of the host cluster to schedule the pods and keep track of their state.
   To prevent collisions, vCluster appends the name of the virtual cluster namespace the pods are running in and the name of the virtual cluster.

   Only very few resources and API server requests actually reach the underlying Kubernetes API server. Only workload-related resources (e.g. Pod) and networking-related resources (e.g. Service) need to be synchronized down to the host cluster since the vCluster does **not** have any nodes or network itself.
-   
+
   The state of most objects running in the virtual cluster is stored in a database inside it. vCluster uses SQLite by default for that DB, but it can also use etcd or a few other options like PostgreSQL. But pods are scheduled in the host cluster.
-   
+
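
To go with the new kubeconfig section added above, here is a short usage sketch. It assumes the file was written to `/tmp/vcluster.kubeconfig` exactly as in the added snippet; the `--kubeconfig` flag and the `KUBECONFIG` environment variable are standard kubectl mechanisms, not vCluster-specific behavior.

```bash
# Point a single kubectl invocation at the exported kubeconfig.
kubectl --kubeconfig /tmp/vcluster.kubeconfig get namespaces

# Or make it the default for the current shell session.
export KUBECONFIG=/tmp/vcluster.kubeconfig
kubectl get namespaces
```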
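
The collision-avoidance paragraph in the last hunk can be checked directly against the host cluster. This is a sketch only: the host namespace `vcluster-my-vcluster` is taken from the docs text, while the rewritten pod-name pattern in the comment is an assumption about the syncer's naming scheme, not output captured for this change.

```bash
# List the pods the syncer created in the host namespace. Synced pod names
# typically embed the virtual namespace and the vCluster name, e.g.
# nginx-deployment-<hash>-<suffix>-x-demo-nginx-x-my-cluster (assumed pattern).
kubectl get pods -n vcluster-my-vcluster
```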
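
The note about SQLite, etcd, and other backing stores could also be illustrated with a config sketch. Treat this as an assumption-heavy example: the `controlPlane.backingStore.etcd.deploy.enabled` key follows the v0.20+ `vcluster.yaml` schema, the name `my-etcd-cluster` is hypothetical, and the backing store has to be chosen when the virtual cluster is created.

```bash
# Hypothetical example: create a virtual cluster that uses a deployed etcd
# instead of the default embedded SQLite database (keys assume the v0.20+
# vcluster.yaml schema; check the schema for your vCluster version).
cat > vcluster.yaml <<'EOF'
controlPlane:
  backingStore:
    etcd:
      deploy:
        enabled: true
EOF

vcluster create my-etcd-cluster -f vcluster.yaml
```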