From d9fa6a6748dc7ddaa0dc63b84c93473ff823f7ed Mon Sep 17 00:00:00 2001
From: Lukas Gentele <17330150+LukasGentele@users.noreply.github.com>
Date: Tue, 16 Apr 2024 04:47:46 -0700
Subject: [PATCH] fix: adjust paths to static assets

---
 .../advanced-topics/load-testing/results.mdx  | 22 +++++++++----------
 docs/pages/architecture/overview.mdx          |  6 ++---
 docs/pages/architecture/scheduling.mdx        | 12 +++++-----
 .../architecture/syncer/single_vs_multins.mdx |  6 ++---
 docs/pages/networking/networking.mdx          |  2 +-
 docs/pages/storage.mdx                        |  2 +-
 docs/pages/what-are-virtual-clusters.mdx      |  6 ++---
 7 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/docs/pages/advanced-topics/load-testing/results.mdx b/docs/pages/advanced-topics/load-testing/results.mdx
index 9bde1790f..97c155891 100644
--- a/docs/pages/advanced-topics/load-testing/results.mdx
+++ b/docs/pages/advanced-topics/load-testing/results.mdx
@@ -12,7 +12,7 @@ If you plan on having high api usage in your vClusters, we recommend using an et
 ## API Response Times
- apiserver-avg-baseline
+ apiserver-avg-baseline
APIserver average response time (baseline)
@@ -20,7 +20,7 @@ During our baseline testing (300 secrets, 30qps), K3s with SQLite was significan
- apiserver-avg-intensive
+ apiserver-avg-intensive
APIserver average response time (intensive)
@@ -28,7 +28,7 @@ For our more intensive testing (5000 secrets, 200qps), the differences between t
- apiserver-cumu-dist-intensive
+ apiserver-cumu-dist-intensive
Cumulative distribution of request time during the intensive testing
@@ -37,17 +37,17 @@ For our more intensive testing (5000 secrets, 200qps), the differences between t
 During our testing, most distributions had similar CPU usage, with the exception of k3s with SQLite which had a higher CPU usage, most likely due to having to convert etcd requests into SQLite ones.
- cpu usage (baseline)
+ cpu usage (baseline)
CPU usage during the baseline test
- cpu usage (intensive)
+ cpu usage (intensive)
CPU usage during the intensive test
- cpu usage (intensive) for ha setups
+ cpu usage (intensive) for ha setups
CPU usage during the intensive test (ha setups)
@@ -56,17 +56,17 @@ During our testing, most distributions had similar CPU usage, with the exception
 Memory usage was relatively similar in all setups
- memory usage over time sn setup
+ memory usage over time sn setup
Memory usage during the baseline test
- memory usage over time sn setup
+ memory usage over time sn setup
Memory usage during the intensive test
- memory usage over time ha setup
+ memory usage over time ha setup
Memory usage during the intensive test with HA setups
@@ -75,11 +75,11 @@ Memory usage was relatively similar in all setups
 The filesystem usage was higher in the k3s SQLite version compared to all etcd backed versions in the intensive setup. In the baseline setup there was little to no usage of the filesystem
- fs usage over time
+ fs usage over time
Filesystem writes over time
- memory usage over time sn setup
+ memory usage over time sn setup
Filesystem reads over time
diff --git a/docs/pages/architecture/overview.mdx b/docs/pages/architecture/overview.mdx
index d87b21911..8a24758c4 100644
--- a/docs/pages/architecture/overview.mdx
+++ b/docs/pages/architecture/overview.mdx
@@ -6,7 +6,7 @@ sidebar_label: Overview
 Virtual clusters are Kubernetes clusters that run on top of other Kubernetes clusters. Compared to fully separate "real" clusters, virtual clusters do not have their own node pools or networking. Instead, they schedule workloads inside the underlying cluster while having their own control plane.
- vCluster Architecture
+ vCluster Architecture
vCluster - Architecture
@@ -17,7 +17,7 @@ By default, vClusters run as a single pod (scheduled by a StatefulSet) that cons
 - [**Syncer**](./syncer/syncer.mdx)
 
 ## Host Cluster & Namespace
-Every vCluster runs on top of another Kubernetes cluster, called the host cluster. Each vCluster runs as a regular StatefulSet inside a namespace of the host cluster. This namespace is called the host namespace. Everything that you create inside the vCluster lives either inside the vCluster itself or inside the host namespace.
+Every vCluster runs on top of another Kubernetes cluster, called the host cluster. Each vCluster runs as a regular StatefulSet inside a namespace of the host cluster. This namespace is called the host namespace. Everything that you create inside the vCluster lives either inside the vCluster itself or inside the host namespace.
 
 It is possible to run multiple vClusters inside the same namespace and you can even run vClusters inside another vCluster (vCluster nesting).
 
@@ -43,7 +43,7 @@ vClusters should be as lightweight as possible to minimize resource overhead ins
 ### 2. No Performance Degradation
 
 Workloads running inside a vCluster (even inside [nested vClusters](#host-cluster--namespace)) should run with the same performance as workloads that are running directly on the underlying host cluster. The computing power, the access to underlying persistent storage as well as the network performance should not be degraded at all.
 
-**Implementation:** This is mainly achieved by synchronizing pods which means that the pods are actually being scheduled and started just like regular pods of the underlying host cluster, i.e. if you run a pod inside the vCluster and you run the same pod directly on the host cluster it will be exactly the same in terms of computing power, storage access, and networking.
+**Implementation:** This is mainly achieved by synchronizing pods which means that the pods are actually being scheduled and started just like regular pods of the underlying host cluster, i.e. if you run a pod inside the vCluster and you run the same pod directly on the host cluster it will be exactly the same in terms of computing power, storage access, and networking.
 
 ### 3. Reduce Requests On Host Cluster
 vClusters should greatly reduce the number of requests to the Kubernetes API server of the underlying [host cluster](#host-cluster--namespace) by ensuring that all high-level resources remain in the virtual cluster only without ever reaching the underlying host cluster.

diff --git a/docs/pages/architecture/scheduling.mdx b/docs/pages/architecture/scheduling.mdx
index e0dc3a27e..383f6831c 100644
--- a/docs/pages/architecture/scheduling.mdx
+++ b/docs/pages/architecture/scheduling.mdx
@@ -4,7 +4,7 @@ sidebar_label: Pod Scheduling
 ---
- vcluster Pod Scheduling
+ vcluster Pod Scheduling
vcluster - Pod Scheduling
@@ -60,13 +60,13 @@ then create or upgrade the vCluster with:
 vcluster create my-vcluster --upgrade -f values.yaml
 ```
-This will pass the necessary flags to the "syncer" container and create or update the ClusterRole used by vCluster to include necessary permissions.
+This will pass the necessary flags to the "syncer" container and create or update the ClusterRole used by vCluster to include necessary permissions.
 
 ### Limiting pod scheduling to selected nodes
 
 Vcluster allows you to limit on which nodes the pods synced by vCluster will run.
-You can achieve this by combining `--node-selector` and `--enforce-node-selector` syncer flags.
+You can achieve this by combining `--node-selector` and `--enforce-node-selector` syncer flags.
 The `--enforce-node-selector` flag is enabled by default.
 
 When `--enforce-node-selector` flag is disabled, and a `--node-selector` is specified nodes will be synced based on the selector, as well as nodes running pod workloads.
 
@@ -100,10 +100,10 @@ vcluster create my-vcluster --upgrade -f values.yaml
 :::info
 When sync of the real nodes is enabled and nodeSelector is set, all nodes that match the selector will be synced into vCluster. Read more about Node sync modes on the [Nodes documentation page](./nodes.mdx).
-:::
+:::
 
-### Automatically applying tolerations to all pods synced by vCluster
+### Automatically applying tolerations to all pods synced by vCluster
 
 Kubernetes has a concept of [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/), which is used for controlling scheduling. If you have a use case requiring all pods synced by vCluster to have a toleration set automatically, then you can achieve this with the `--enforce-toleration` syncer flag. You can pass multiple `--enforce-toleration` flags with different toleration expressions, and syncer will add them to every new pod that gets synced by vCluster.
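As a rough sketch only of the flags discussed above: the vCluster chart exposes a `syncer.extraArgs` field in its Helm values for passing extra flags to the syncer container, and the label and taint values below are invented placeholders, not taken from the patched docs.

```yaml
# Hypothetical values.yaml sketch for the syncer scheduling flags.
# "environment=dev" and "dedicated=vcluster:NoSchedule" are placeholder values.
syncer:
  extraArgs:
    - --node-selector=environment=dev                     # limit synced pods to matching nodes
    - --enforce-node-selector                             # enabled by default; listed for clarity
    - --enforce-toleration=dedicated=vcluster:NoSchedule  # toleration added to every synced pod
```

A file like this would then be applied with `vcluster create my-vcluster --upgrade -f values.yaml`, as shown in the hunk above.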
@@ -131,5 +131,5 @@ syncer:
 :::info
 vCluster does not support setting the `tolerationSeconds` field of a toleration through the syntax that the `--enforce-toleration` flag uses. If your use case requires this, please raise an issue in [the vCluster repo on GitHub](https://github.com/loft-sh/vcluster/issues).
-:::
+:::

diff --git a/docs/pages/architecture/syncer/single_vs_multins.mdx b/docs/pages/architecture/syncer/single_vs_multins.mdx
index dcd54c929..68ede3261 100644
--- a/docs/pages/architecture/syncer/single_vs_multins.mdx
+++ b/docs/pages/architecture/syncer/single_vs_multins.mdx
@@ -4,7 +4,7 @@ sidebar_label: Single vs Multi-Namespace Sync
 ---
- vcluster Multi-Namespace Architecture
+ vcluster Multi-Namespace Architecture
vcluster Multi-Namespace Architecture
@@ -20,5 +20,5 @@ Enabling, or disabling, it on an existing vCluster instance will force it into a
 :::
 
 :::warning Alpha feature
-Multi-namespace mode is currently in an alpha state. This is an advanced feature that requires more permissions in the host cluster, and as a result, it can potentially cause significant disruption in the host cluster.
-:::
\ No newline at end of file
+Multi-namespace mode is currently in an alpha state. This is an advanced feature that requires more permissions in the host cluster, and as a result, it can potentially cause significant disruption in the host cluster.
+:::

diff --git a/docs/pages/networking/networking.mdx b/docs/pages/networking/networking.mdx
index 3d0d8189d..62f35f4cc 100644
--- a/docs/pages/networking/networking.mdx
+++ b/docs/pages/networking/networking.mdx
@@ -4,7 +4,7 @@ sidebar_label: Overview
 ---
- vCluster Networking
+ vCluster Networking
vCluster - Networking
diff --git a/docs/pages/storage.mdx b/docs/pages/storage.mdx
index e4643cf48..ef817adeb 100644
--- a/docs/pages/storage.mdx
+++ b/docs/pages/storage.mdx
@@ -4,7 +4,7 @@ sidebar_label: Storage
 ---
- vcluster Persistent Volume Provisioning
+ vcluster Persistent Volume Provisioning
vcluster - Persistent Volume Provisioning
diff --git a/docs/pages/what-are-virtual-clusters.mdx b/docs/pages/what-are-virtual-clusters.mdx
index 3d5046371..d52f9a03d 100644
--- a/docs/pages/what-are-virtual-clusters.mdx
+++ b/docs/pages/what-are-virtual-clusters.mdx
@@ -6,7 +6,7 @@ sidebar_label: What Are Virtual Clusters?
 Virtual clusters are fully working Kubernetes clusters that run on top of other Kubernetes clusters. Compared to fully separate "real" clusters, virtual clusters reuse worker nodes and networking of the host cluster. They have their own control plane and schedule all workloads into a single namespace of the host cluster. Like virtual machines, virtual clusters partition a single physical cluster into multiple separate ones.
- vcluster Architecture
+ vcluster Architecture
vCluster - Architecture
@@ -42,7 +42,7 @@ Because you can have many virtual clusters within a single cluster, they are muc
 Finally, virtual clusters can be configured independently of the physical cluster. This is great for multi-tenancy, like giving your customers the ability to spin up a new environment or quickly setting up demo applications for your sales team.
- vCluster Comparison
+ vCluster Comparison
vCluster - Comparison
@@ -73,4 +73,4 @@ vClusters provide immense benefits for large-scale Kubernetes deployments and mu
 - **Scalability:**
   - Less pressure / fewer requests on the K8s API server in a large-scale cluster.
   - Higher scalability of clusters via cluster sharding / API server sharding into smaller vClusters.
-  - No need for cluster admins to worry about conflicting CRDs or CRD versions with a growing number of users and deployments.
\ No newline at end of file
+  - No need for cluster admins to worry about conflicting CRDs or CRD versions with a growing number of users and deployments.