fix: adjust paths to static assets
LukasGentele committed Apr 16, 2024
1 parent db4c0d5 commit d9fa6a6
Showing 7 changed files with 28 additions and 28 deletions.
22 changes: 11 additions & 11 deletions docs/pages/advanced-topics/load-testing/results.mdx
@@ -12,23 +12,23 @@ If you plan on having high api usage in your vClusters, we recommend using an et
## API Response Times

<figure>
<img src="/docs/media/apiserver-latency-baseline.svg" alt="apiserver-avg-baseline" />
<img src="/docs/v0.19/media/apiserver-latency-baseline.svg" alt="apiserver-avg-baseline" />
<figcaption>APIserver average response time (baseline)</figcaption>
</figure>

During our baseline testing (300 secrets, 30qps), K3s with SQLite was significantly slower than the other distributions, averaging 0.17s while the other distributions were all around 0.05s. This, however, should not have much impact, since 0.17s is still a relatively good average response time.


<figure>
<img src="/docs/media/apiserver-latency-intensive.svg" alt="apiserver-avg-intensive" />
<img src="/docs/v0.19/media/apiserver-latency-intensive.svg" alt="apiserver-avg-intensive" />
<figcaption>APIserver average response time (intensive)</figcaption>
</figure>

For our more intensive testing (5000 secrets, 200qps), the differences between the distributions were more pronounced: K3s with SQLite trailed behind with a 1.4s average response time, while etcd-backed K3s (vCluster.Pro distro) averaged around 0.35s for both single-node and HA setups. k0s and K8s were the fastest in these tests, averaging around 0.15s. Below is also the cumulative distribution of request times.


<figure>
<img src="/docs/media/cumu-distribution-apiserver.svg" alt="apiserver-cumu-dist-intensive" />
<img src="/docs/v0.19/media/cumu-distribution-apiserver.svg" alt="apiserver-cumu-dist-intensive" />
<figcaption>Cumulative distribution of request time during the intensive testing</figcaption>
</figure>

@@ -37,17 +37,17 @@ For our more intensive testing (5000 secrets, 200qps), the differences between t
During our testing, most distributions had similar CPU usage, with the exception of K3s with SQLite, which had higher CPU usage, most likely due to having to convert etcd requests into SQLite ones.

<figure>
<img src="/docs/media/cpu-sn-baseline.svg" alt="cpu usage (baseline)" />
<img src="/docs/v0.19/media/cpu-sn-baseline.svg" alt="cpu usage (baseline)" />
<figcaption>CPU usage during the baseline test</figcaption>
</figure>

<figure>
<img src="/docs/media/cpu-sn-intensive.svg" alt="cpu usage (intensive)" />
<img src="/docs/v0.19/media/cpu-sn-intensive.svg" alt="cpu usage (intensive)" />
<figcaption>CPU usage during the intensive test</figcaption>
</figure>

<figure>
<img src="/docs/media/cpu-intensive-ha.svg" alt="cpu usage (intensive) for ha setups" />
<img src="/docs/v0.19/media/cpu-intensive-ha.svg" alt="cpu usage (intensive) for ha setups" />
<figcaption>CPU usage during the intensive test (ha setups)</figcaption>
</figure>

@@ -56,17 +56,17 @@ During our testing, most distributions had similar CPU usage, with the exception
Memory usage was relatively similar across all setups.

<figure>
<img src="/docs/media/mem-usage-baseline.svg" alt="memory usage over time sn setup" />
<img src="/docs/v0.19/media/mem-usage-baseline.svg" alt="memory usage over time sn setup" />
<figcaption>Memory usage during the baseline test</figcaption>
</figure>

<figure>
<img src="/docs/media/mem-usage-intensive.svg" alt="memory usage over time sn setup" />
<img src="/docs/v0.19/media/mem-usage-intensive.svg" alt="memory usage over time sn setup" />
<figcaption>Memory usage during the intensive test</figcaption>
</figure>

<figure>
<img src="/docs/media/mem-usage-ha.svg" alt="memory usage over time ha setup" />
<img src="/docs/v0.19/media/mem-usage-ha.svg" alt="memory usage over time ha setup" />
<figcaption>Memory usage during the intensive test with HA setups</figcaption>
</figure>

@@ -75,11 +75,11 @@ Memory usage was relatively similar in all setups
Filesystem usage was higher in the K3s SQLite version compared to all etcd-backed versions in the intensive setup. In the baseline setup, there was little to no filesystem usage.

<figure>
<img src="/docs/media/fs-write-intensive.svg" alt="fs usage over time" />
<img src="/docs/v0.19/media/fs-write-intensive.svg" alt="fs usage over time" />
<figcaption>Filesystem writes over time</figcaption>
</figure>
<figure>
<img src="/docs/media/fs-read-intensive.svg" alt="memory usage over time sn setup" />
<img src="/docs/v0.19/media/fs-read-intensive.svg" alt="memory usage over time sn setup" />
<figcaption>Filesystem reads over time</figcaption>
</figure>

6 changes: 3 additions & 3 deletions docs/pages/architecture/overview.mdx
@@ -6,7 +6,7 @@ sidebar_label: Overview
Virtual clusters are Kubernetes clusters that run on top of other Kubernetes clusters. Compared to fully separate "real" clusters, virtual clusters do not have their own node pools or networking. Instead, they schedule workloads inside the underlying cluster while having their own control plane.

<figure>
<img src="/docs/media/diagrams/vcluster-architecture.svg" alt="vCluster Architecture" />
<img src="/docs/v0.19/media/diagrams/vcluster-architecture.svg" alt="vCluster Architecture" />
<figcaption>vCluster - Architecture</figcaption>
</figure>

@@ -17,7 +17,7 @@ By default, vClusters run as a single pod (scheduled by a StatefulSet) that cons
- [**Syncer**](./syncer/syncer.mdx)

## Host Cluster & Namespace
Every vCluster runs on top of another Kubernetes cluster, called the host cluster. Each vCluster runs as a regular StatefulSet inside a namespace of the host cluster. This namespace is called the host namespace. Everything that you create inside the vCluster lives either inside the vCluster itself or inside the host namespace.

It is possible to run multiple vClusters inside the same namespace, and you can even run vClusters inside another vCluster (vCluster nesting).

@@ -43,7 +43,7 @@ vClusters should be as lightweight as possible to minimize resource overhead ins
### 2. No Performance Degradation
Workloads running inside a vCluster (even inside [nested vClusters](#host-cluster--namespace)) should run with the same performance as workloads that are running directly on the underlying host cluster. The computing power, the access to underlying persistent storage as well as the network performance should not be degraded at all.

**Implementation:** This is mainly achieved by synchronizing pods, which means that the pods are actually scheduled and started just like regular pods of the underlying host cluster, i.e. if you run a pod inside the vCluster and you run the same pod directly on the host cluster, it will be exactly the same in terms of computing power, storage access, and networking.

### 3. Reduce Requests On Host Cluster
vClusters should greatly reduce the number of requests to the Kubernetes API server of the underlying [host cluster](#host-cluster--namespace) by ensuring that all high-level resources remain in the virtual cluster only without ever reaching the underlying host cluster.
12 changes: 6 additions & 6 deletions docs/pages/architecture/scheduling.mdx
@@ -4,7 +4,7 @@ sidebar_label: Pod Scheduling
---

<figure>
<img src="/docs/media/diagrams/vcluster-pod-scheduling.svg" alt="vcluster Pod Scheduling" />
<img src="/docs/v0.19/media/diagrams/vcluster-pod-scheduling.svg" alt="vcluster Pod Scheduling" />
<figcaption>vcluster - Pod Scheduling</figcaption>
</figure>

@@ -60,13 +60,13 @@ then create or upgrade the vCluster with:
vcluster create my-vcluster --upgrade -f values.yaml
```

This will pass the necessary flags to the "syncer" container and create or update the ClusterRole used by vCluster to include necessary permissions.


### Limiting pod scheduling to selected nodes

vCluster allows you to limit which nodes the pods synced by vCluster will run on.
You can achieve this by combining the `--node-selector` and `--enforce-node-selector` syncer flags.
The `--enforce-node-selector` flag is enabled by default.
When the `--enforce-node-selector` flag is disabled and a `--node-selector` is specified, nodes will be synced based on the selector, as well as nodes running pod workloads.
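As a rough sketch (not an exhaustive reference), these flags can be passed to the syncer via extra arguments in `values.yaml`; the node label used here is purely illustrative:

```yaml
# values.yaml -- illustrative sketch; the node label below is a made-up example
syncer:
  extraArgs:
    # Schedule synced pods only onto nodes carrying this (hypothetical) label:
    - --node-selector=environment=production
    # --enforce-node-selector is enabled by default. Disable it only if nodes
    # running pod workloads should be synced in addition to the selector match:
    # - --enforce-node-selector=false
```

Then create or upgrade the vCluster with `vcluster create my-vcluster --upgrade -f values.yaml`, as shown earlier.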
@@ -100,10 +100,10 @@ vcluster create my-vcluster --upgrade -f values.yaml

:::info
When sync of the real nodes is enabled and nodeSelector is set, all nodes that match the selector will be synced into vCluster. Read more about Node sync modes on the [Nodes documentation page](./nodes.mdx).
:::


### Automatically applying tolerations to all pods synced by vCluster

Kubernetes has a concept of [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/), which is used for controlling scheduling. If you have a use case requiring all pods synced by vCluster to have a toleration set automatically, then you can achieve this with the `--enforce-toleration` syncer flag. You can pass multiple `--enforce-toleration` flags with different toleration expressions, and the syncer will add them to every new pod that gets synced by vCluster.
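For illustration, a hedged sketch of how the flag could be repeated via the syncer's extra arguments in `values.yaml` (both toleration expressions below are invented examples, not values required by vCluster):

```yaml
# values.yaml -- illustrative sketch; both toleration expressions are invented
syncer:
  extraArgs:
    # Tolerate a hypothetical dedicated-node taint on every synced pod:
    - --enforce-toleration=dedicated=vcluster-workloads:NoSchedule
    # The flag can be repeated to add several tolerations:
    - --enforce-toleration=example.com/gpu:NoSchedule
```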

@@ -131,5 +131,5 @@ syncer:

:::info
vCluster does not support setting the `tolerationSeconds` field of a toleration through the syntax that the `--enforce-toleration` flag uses. If your use case requires this, please raise an issue in [the vCluster repo on GitHub](https://github.com/loft-sh/vcluster/issues).
:::

6 changes: 3 additions & 3 deletions docs/pages/architecture/syncer/single_vs_multins.mdx
@@ -4,7 +4,7 @@ sidebar_label: Single vs Multi-Namespace Sync
---

<figure>
<img src="/docs/media/diagrams/vcluster-multinamespace-architecture.svg" alt="vcluster Multi-Namespace Architecture" />
<img src="/docs/v0.19/media/diagrams/vcluster-multinamespace-architecture.svg" alt="vcluster Multi-Namespace Architecture" />
<figcaption>vcluster Multi-Namespace Architecture</figcaption>
</figure>

@@ -20,5 +20,5 @@ Enabling, or disabling, it on an existing vCluster instance will force it into a
:::
:::warning Alpha feature
Multi-namespace mode is currently in an alpha state. This is an advanced feature that requires more permissions in the host cluster, and as a result, it can potentially cause significant disruption in the host cluster.
:::
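For reference, a hedged `values.yaml` sketch for creating a new vCluster with this mode enabled, assuming the chart exposes a `multiNamespaceMode.enabled` toggle:

```yaml
# values.yaml -- illustrative sketch; use only when creating a NEW vCluster,
# since toggling the mode on an existing instance is discouraged (see note above)
multiNamespaceMode:
  enabled: true
```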
2 changes: 1 addition & 1 deletion docs/pages/networking/networking.mdx
@@ -4,7 +4,7 @@ sidebar_label: Overview
---

<figure>
<img src="/docs/media/diagrams/vcluster-networking.svg" alt="vCluster Networking" />
<img src="/docs/v0.19/media/diagrams/vcluster-networking.svg" alt="vCluster Networking" />
<figcaption>vCluster - Networking</figcaption>
</figure>

2 changes: 1 addition & 1 deletion docs/pages/storage.mdx
@@ -4,7 +4,7 @@ sidebar_label: Storage
---

<figure>
<img src="/docs/media/diagrams/vcluster-persistent-volume-provisioning.svg" alt="vcluster Persistent Volume Provisioning" />
<img src="/docs/v0.19/media/diagrams/vcluster-persistent-volume-provisioning.svg" alt="vcluster Persistent Volume Provisioning" />
<figcaption>vcluster - Persistent Volume Provisioning</figcaption>
</figure>

6 changes: 3 additions & 3 deletions docs/pages/what-are-virtual-clusters.mdx
@@ -6,7 +6,7 @@ sidebar_label: What Are Virtual Clusters?
Virtual clusters are fully working Kubernetes clusters that run on top of other Kubernetes clusters. Compared to fully separate "real" clusters, virtual clusters reuse worker nodes and networking of the host cluster. They have their own control plane and schedule all workloads into a single namespace of the host cluster. Like virtual machines, virtual clusters partition a single physical cluster into multiple separate ones.

<figure>
<img src="/docs/media/diagrams/vcluster-architecture.svg" alt="vcluster Architecture" />
<img src="/docs/v0.19/media/diagrams/vcluster-architecture.svg" alt="vcluster Architecture" />
<figcaption>vCluster - Architecture</figcaption>
</figure>

@@ -42,7 +42,7 @@ Because you can have many virtual clusters within a single cluster, they are muc
Finally, virtual clusters can be configured independently of the physical cluster. This is great for multi-tenancy, like giving your customers the ability to spin up a new environment or quickly setting up demo applications for your sales team.

<figure>
<img src="/docs/media/vcluster-comparison.png" alt="vCluster Comparison" />
<img src="/docs/v0.19/media/vcluster-comparison.png" alt="vCluster Comparison" />
<figcaption>vCluster - Comparison</figcaption>
</figure>

@@ -73,4 +73,4 @@ vClusters provide immense benefits for large-scale Kubernetes deployments and mu
- **Scalability:**
- Less pressure / fewer requests on the K8s API server in a large-scale cluster.
- Higher scalability of clusters via cluster sharding / API server sharding into smaller vClusters.
- No need for cluster admins to worry about conflicting CRDs or CRD versions with a growing number of users and deployments.
