diff --git a/docs/src/capi/explanation/capi-ck8s.md b/docs/src/capi/explanation/capi-ck8s.md
new file mode 100644
index 000000000..d75db76ac
--- /dev/null
+++ b/docs/src/capi/explanation/capi-ck8s.md
@@ -0,0 +1,42 @@
+# Cluster API - {{product}}
+
+ClusterAPI (CAPI) is an open-source Kubernetes project that provides a declarative API for cluster creation, configuration, and management. It is designed to automate the creation and management of Kubernetes clusters in various environments, including on-premises data centers, public clouds, and edge devices.
+
+CAPI abstracts away the details of infrastructure provisioning, networking, and other low-level tasks, allowing users to define their desired cluster configuration using simple YAML manifests. This makes it easier to create and manage clusters in a repeatable and consistent manner, regardless of the underlying infrastructure. In this way, a wide range of infrastructure providers have been made available, including but not limited to Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and OpenStack.
+
+CAPI also abstracts the provisioning and management of Kubernetes clusters, allowing a variety of Kubernetes distributions to be delivered on all of the supported infrastructure providers. {{product}} is one such Kubernetes distribution that seamlessly integrates with Cluster API.
+
+With {{product}} CAPI you can:
+- provision a cluster with:
+  - Kubernetes version 1.31 onwards
+  - the risk level of the track you want to follow (stable, candidate, beta, edge)
+  - deployment behind proxies
+- upgrade clusters with no downtime:
+  - rolling upgrades for HA clusters and worker nodes
+  - in-place upgrades for non-HA control planes and worker nodes
+
+Please refer to the “Tutorial” section for concrete examples of CAPI deployments.
+
+## CAPI architecture
+
+Being a cloud-native framework, CAPI implements all its components as controllers that run within a Kubernetes cluster. There is a separate controller, called a ‘provider’, for each supported infrastructure substrate. The infrastructure providers are responsible for provisioning physical or virtual nodes and setting up networking elements such as load balancers and virtual networks. In a similar way, each Kubernetes distribution that integrates with ClusterAPI is managed by two providers: the control plane provider and the bootstrap provider. The bootstrap provider is responsible for delivering and managing Kubernetes on the nodes, while the control plane provider handles the control plane’s specific lifecycle.
+
+The CAPI providers operate within a Kubernetes cluster known as the management cluster. The administrator is responsible for selecting the desired combination of infrastructure and Kubernetes distribution by instantiating the respective infrastructure, bootstrap, and control plane providers on the management cluster.
+
+The management cluster functions as the control plane for the ClusterAPI operator, which is responsible for provisioning and managing the infrastructure resources necessary for creating and managing additional Kubernetes clusters. It is important to note that the management cluster is not intended to support any other workload, as the workloads are expected to run on the provisioned clusters. As a result, the provisioned clusters are referred to as workload clusters.
+
+Typically, the management cluster runs in a separate environment from the clusters it manages, such as a public cloud or an on-premises data center. It serves as a centralized location for managing the configuration, policies, and security of multiple managed clusters. By leveraging the management cluster, users can easily create and manage a fleet of Kubernetes clusters in a consistent and repeatable manner.
+
+The {{product}} team maintains the two providers required for integrating with CAPI:
+
+- The Cluster API Bootstrap Provider {{product}} (**CABPCK**), which is responsible for provisioning the nodes in the cluster and preparing them to be joined to the Kubernetes control plane. When you use CABPCK you define a Kubernetes Cluster object that describes the desired state of the new cluster and includes the number and type of nodes in the cluster, as well as any additional configuration settings. The Bootstrap Provider then creates the necessary resources in the Kubernetes API server to bring the cluster up to the desired state. Under the hood, the Bootstrap Provider uses cloud-init to configure the nodes in the cluster. This includes setting up SSH keys, configuring the network, and installing the necessary software packages.
+
+- The Cluster API Control Plane Provider {{product}} (**CACPCK**), which enables the creation and management of Kubernetes control planes using {{product}} as the underlying Kubernetes distribution. Its main tasks are to update the machine state and to generate the kubeconfig file used for accessing the cluster. The kubeconfig file is stored as a secret which the user can then retrieve using the `clusterctl` command, as shown in the example after the diagram below.
+
+```{figure} ./capi-ck8s.svg
+   :width: 100%
+   :alt: Deployment of components
+
+   Deployment of components
+```
diff --git a/docs/src/capi/explanation/capi-ck8s.svg b/docs/src/capi/explanation/capi-ck8s.svg
new file mode 100644
index 000000000..d7df80727
--- /dev/null
+++ b/docs/src/capi/explanation/capi-ck8s.svg
@@ -0,0 +1,4 @@
+[capi-ck8s.svg: diagram showing the bootstrap (management) cluster running the {{product}} bootstrap provider (CABPCK), control plane provider (CACPCK) and infrastructure provider; the providers generate the cluster secrets (join token, CA, kubeconfig), store them in a bootstrap secret, and deliver cloud-init data to the control plane and worker node VMs of the provisioned (workload) cluster; the user retrieves the kubeconfig via `clusterctl` and talks to the workload cluster endpoint]
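For example, once the control plane provider has generated the kubeconfig secret for a workload cluster, it can be fetched from the management cluster and used to reach the workload cluster endpoint. This is a minimal sketch of that flow; the cluster name `my-cluster` is purely illustrative:

```
# Fetch the kubeconfig that the control plane provider stored as a secret
clusterctl get kubeconfig my-cluster > my-cluster.kubeconfig

# Use it to talk to the workload cluster endpoint directly
kubectl --kubeconfig ./my-cluster.kubeconfig get nodes
```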
diff --git a/docs/src/capi/explanation/index.md b/docs/src/capi/explanation/index.md
index 61336858e..775dd26a3 100644
--- a/docs/src/capi/explanation/index.md
+++ b/docs/src/capi/explanation/index.md
@@ -15,6 +15,7 @@ Overview
 about
 security
+capi-ck8s.md
 ```
diff --git a/docs/src/capi/howto/custom-ck8s.md b/docs/src/capi/howto/custom-ck8s.md
new file mode 100644
index 000000000..d81191980
--- /dev/null
+++ b/docs/src/capi/howto/custom-ck8s.md
@@ -0,0 +1,64 @@
+# Install custom {{product}} on machines
+
+By default, the `version` field in the machine specifications determines which {{product}} version is downloaded from the `stable` risk level. While you can install different versions from the `stable` risk level by changing the `version` field, extra steps are needed if you want to install from a different risk level.
+This guide walks you through the process of installing a custom {{product}} version on workload cluster machines.
+
+## Prerequisites
+
+To follow this guide, you will need:
+
+- A Kubernetes management cluster with Cluster API and providers installed and configured.
+- A generated cluster spec manifest
+
+Please refer to the [getting-started guide][getting-started] for further
+details on the required setup.
+
+In this guide we call the generated cluster spec manifest `cluster.yaml`.
+
+## Overwrite the existing `install.sh` script
+
+The {{product}} snap is installed by running the `install.sh` script during cloud-init.
+While this file is automatically placed on every workload cluster machine with content hard-coded by the {{product}} providers, you can overwrite it to make sure your desired content is available in the script.
+
+As an example, let's overwrite the `install.sh` for our control plane nodes. Inside the `cluster.yaml`, add the new file content:
+```yaml
+apiVersion: controlplane.cluster.x-k8s.io/v1beta2
+kind: CK8sControlPlane
+...
+spec:
+  ...
+  spec:
+    files:
+    - content: |
+        #!/bin/bash -xe
+        snap install k8s --classic --channel=latest/edge
+      owner: root:root
+      path: /capi/scripts/install.sh
+      permissions: "0500"
+```
+
+Now the new control plane nodes created from this manifest will have the `latest/edge` {{product}} snap installed on them!
+
+## Use `preRunCommands`
+
+As mentioned above, the `install.sh` script is responsible for installing the {{product}} snap on machines. `preRunCommands` are executed before `install.sh`. You can also add an install command to the `preRunCommands` in order to install your desired {{product}} version.
+
+```{note}
+Installing the {{product}} snap via `preRunCommands` does not prevent the `install.sh` script from running. Instead, the installation process in `install.sh` will fail with a message indicating that `k8s` is already installed.
+This is not the recommended approach; overwriting the `install.sh` script is preferred.
+```
+
+Edit the `cluster.yaml` to add the installation command:
+```yaml
+apiVersion: controlplane.cluster.x-k8s.io/v1beta2
+kind: CK8sControlPlane
+...
+spec:
+  ...
+  spec:
+    preRunCommands:
+    - snap install k8s --classic --channel=latest/edge
+```
+
+
+[getting-started]: ../tutorial/getting-started.md
diff --git a/docs/src/capi/howto/index.md b/docs/src/capi/howto/index.md
index f013e6b35..375a5025a 100644
--- a/docs/src/capi/howto/index.md
+++ b/docs/src/capi/howto/index.md
@@ -16,6 +16,9 @@ Overview
 external-etcd
 rollout-upgrades
+upgrade-providers
+migrate-management
+custom-ck8s
 ```
 ---
diff --git a/docs/src/capi/howto/migrate-management.md b/docs/src/capi/howto/migrate-management.md
new file mode 100644
index 000000000..11a1474f3
--- /dev/null
+++ b/docs/src/capi/howto/migrate-management.md
@@ -0,0 +1,29 @@
+# Migrate the management cluster
+
+Management cluster migration is a powerful operation in the cluster’s lifecycle, as it allows admins
+to move the management cluster to a more reliable substrate or perform maintenance tasks without disruptions.
+In this guide we will walk through the migration of a management cluster.
+
+## Prerequisites
+
+In the [Cluster provisioning with CAPI and {{product}} tutorial] we showed how to provision a workload cluster. Here, we start from the point where the workload cluster is available and we will migrate the management cluster to the cluster we just provisioned.
+
+## Install the same set of providers on the provisioned cluster
+
+Before migrating a cluster, we must make sure that both the target and source management clusters run the same versions of the providers (infrastructure, bootstrap, control plane). To do so, `clusterctl init` should be called against the target cluster:
+
+```
+clusterctl get kubeconfig > targetconfig
+clusterctl init --kubeconfig=$PWD/targetconfig --bootstrap ck8s --control-plane ck8s --infrastructure <infra-provider>
+```
+
+## Move the cluster
+
+Simply call:
+
+```
+clusterctl move --to-kubeconfig=$PWD/targetconfig
+```
+
+
+[Cluster provisioning with CAPI and {{product}} tutorial]: ../tutorial/getting-started.md
diff --git a/docs/src/capi/howto/upgrade-providers.md b/docs/src/capi/howto/upgrade-providers.md
new file mode 100644
index 000000000..5188c2413
--- /dev/null
+++ b/docs/src/capi/howto/upgrade-providers.md
@@ -0,0 +1,53 @@
+# Upgrade the providers of a management cluster
+
+In this guide we will go through the process of upgrading the providers of a management cluster.
+
+## Prerequisites
+
+We assume we already have a management cluster and the infrastructure provider configured as described in the [Cluster provisioning with CAPI and {{product}} tutorial]. The selected infrastructure provider is AWS. We have not yet called `clusterctl init` to initialise the cluster.
+
+## Initialise the cluster
+
+To demonstrate the steps of upgrading the management cluster, we will begin by initialising the providers at a specific version, which we will later upgrade.
+
+To set the version of the providers to be installed, we use the following notation:
+
+```
+clusterctl init --bootstrap ck8s:v0.1.2 --control-plane ck8s:v0.1.2 --infrastructure aws
+```
+
+## Check for updates
+
+With `clusterctl` we can check if there are any new versions of the running providers:
+
+```
+clusterctl upgrade plan
+```
+
+The output shows the current version of each provider as well as the version that we can upgrade to:
+
+```text
+NAME                 NAMESPACE       TYPE                     CURRENT VERSION   NEXT VERSION
+bootstrap-ck8s       cabpck-system   BootstrapProvider        v0.1.2            v0.2.0
+control-plane-ck8s   cacpck-system   ControlPlaneProvider     v0.1.2            v0.2.0
+cluster-api          capi-system     CoreProvider             v1.8.1            Already up to date
+infrastructure-aws   capa-system     InfrastructureProvider   v2.6.1            Already up to date
+```
+
+## Trigger providers upgrade
+
+To apply the upgrade plan recommended by `clusterctl upgrade plan`, simply run:
+
+```
+clusterctl upgrade apply --contract v1beta1
+```
+
+To upgrade each provider one by one, issue:
+
+```
+clusterctl upgrade apply --bootstrap cabpck-system/ck8s:v0.2.0
+clusterctl upgrade apply --control-plane cacpck-system/ck8s:v0.2.0
+```
+
+
+[Cluster provisioning with CAPI and {{product}} tutorial]: ../tutorial/getting-started.md
diff --git a/docs/src/capi/reference/configs.md b/docs/src/capi/reference/configs.md
new file mode 100644
index 000000000..60ce9bebe
--- /dev/null
+++ b/docs/src/capi/reference/configs.md
@@ -0,0 +1,224 @@
+# Provider Configurations
+
+The {{product}} bootstrap and control plane providers (CABPCK and CACPCK) can be configured to aid the cluster admin in reaching the desired state for the workload cluster. In this section we go through the different configuration options that each of these providers exposes.
+
+## Common Configurations
+
+The following configurations are available for both bootstrap and control plane providers.
+
+### `version`
+**Type:** `string`
+
+**Required:** yes
+
+`version` is used to specify the {{product}} version installed on the nodes.
+
+```{note}
+The {{product}} providers will install the latest patch in the `stable` risk level by default, e.g. `1.30/stable`. Patch versions specified in this configuration will be ignored.
+
+To install a specific track or risk level, see the [Install custom {{product}} on machines] guide.
+```
+
+**Example Usage:**
+```yaml
+spec:
+  version: "1.30"
+```
+
+### `files`
+**Type:** `[]struct`
+
+**Required:** no
+
+`files` can be used to add new files to the machines or overwrite existing files.
+
+**Fields:**
+
+| Name | Type | Description | Default |
+|------|------|-------------|---------|
+| `path` | `string` | Where the file should be created | `""` |
+| `content` | `string` | Content of the created file | `""` |
+| `permissions` | `string` | Permissions of the file to create, e.g. "0600" | `""` |
+| `owner` | `string` | Owner of the file to create, e.g. "root:root" | `""` |
+
+**Example Usage:**
+```yaml
+spec:
+  files:
+  - path: "/path/to/my-file"
+    content: |
+      #!/bin/bash -xe
+      echo "hello from my-file"
+    permissions: "0500"
+    owner: root:root
+```
+
+### `bootCommands`
+**Type:** `[]string`
+
+**Required:** no
+
+`bootCommands` specifies extra commands to run in cloud-init early in the boot process.
+
+**Example Usage:**
+```yaml
+spec:
+  bootCommands:
+    - echo "first-command"
+    - echo "second-command"
+```
+
+### `preRunCommands`
+**Type:** `[]string`
+
+**Required:** no
+
+`preRunCommands` specifies extra commands to run in cloud-init before k8s-snap setup runs.
+
+```{note}
+`preRunCommands` can also be used to install custom {{product}} versions on machines. See the [Install custom {{product}} on machines] guide for more info.
+```
+
+**Example Usage:**
+```yaml
+spec:
+  preRunCommands:
+    - echo "first-command"
+    - echo "second-command"
+```
+
+### `postRunCommands`
+**Type:** `[]string`
+
+**Required:** no
+
+`postRunCommands` specifies extra commands to run in cloud-init after k8s-snap setup runs.
+
+**Example Usage:**
+```yaml
+spec:
+  postRunCommands:
+    - echo "first-command"
+    - echo "second-command"
+```
+
+### `airGapped`
+**Type:** `bool`
+
+**Required:** no
+
+`airGapped` is used to signal that we are deploying to an air-gapped environment. In this case, the provider will not attempt to install k8s-snap on the machine. The user is expected to install k8s-snap manually with [`preRunCommands`](#preRunCommands), or provide an image with k8s-snap pre-installed.
+
+**Example Usage:**
+```yaml
+spec:
+  airGapped: true
+```
+
+### `initConfig`
+**Type:** `struct`
+
+**Required:** no
+
+`initConfig` is the configuration for initialising the built-in cluster features.
+
+**Fields:**
+
+| Name | Type | Description | Default |
+|------|------|-------------|---------|
+| `annotations` | `map[string]string` | Annotations used to configure the behaviour of the built-in features. | `nil` |
+| `enableDefaultDNS` | `bool` | Specifies whether to enable the default DNS configuration. | `true` |
+| `enableDefaultLocalStorage` | `bool` | Specifies whether to enable the default local storage. | `true` |
+| `enableDefaultMetricsServer` | `bool` | Specifies whether to enable the default metrics server. | `true` |
+| `enableDefaultNetwork` | `bool` | Specifies whether to enable the default CNI. | `true` |
+
+
+**Example Usage:**
+```yaml
+spec:
+  initConfig:
+    annotations:
+      annotationKey: "annotationValue"
+    enableDefaultDNS: false
+    enableDefaultLocalStorage: true
+    enableDefaultMetricsServer: false
+    enableDefaultNetwork: true
+```
+
+### `nodeName`
+**Type:** `string`
+
+**Required:** no
+
+`nodeName` is the name to use for the kubelet of this node. It is needed for clouds where the cloud-provider has specific prerequisites for the node names. It is typically set in Jinja template form, e.g. `"{{ ds.meta_data.local_hostname }}"`.
+
+**Example Usage:**
+```yaml
+spec:
+  nodeName: "{{ ds.meta_data.local_hostname }}"
+```
+
+## Control plane provider (CACPCK)
+
+The following configurations are only available for the control plane provider.
+
+### `replicas`
+**Type:** `int32`

+**Required:** no
+
+`replicas` is the number of desired machines. Defaults to 1. When stacked etcd is used, only odd numbers are permitted, as per [etcd best practices].
+
+**Example Usage:**
+```yaml
+spec:
+  replicas: 3
+```
+
+### `controlPlane`
+**Type:** `struct`
+
+**Required:** no
+
+`controlPlane` is the configuration for the control plane nodes.
+
+**Fields:**
+
+| Name | Type | Description | Default |
+|------|------|-------------|---------|
+| `extraSANs` | `[]string` | A list of SANs to include in the server certificates. | `[]` |
+| `cloudProvider` | `string` | The cloud-provider configuration option to set. | `""` |
+| `nodeTaints` | `[]string` | Taints to add to the control plane kubelet nodes. | `[]` |
+| `datastoreType` | `string` | The type of datastore to use for the control plane. | `""` |
+| `datastoreServersSecretRef` | `struct{name:str, key:str}` | A reference to a secret containing the datastore servers. | `{}` |
+| `k8sDqlitePort` | `int` | The port to use for k8s-dqlite. If unset, 2379 (the etcd client port) will be used. | `2379` |
+| `microclusterAddress` | `string` | The address (or CIDR) to use for microcluster. If unset, the default node interface is chosen. | `""` |
+| `microclusterPort` | `int` | The port to use for microcluster. If unset, ":2380" (the etcd peer port) will be used. | `":2380"` |
+| `extraKubeAPIServerArgs` | `map[string]string` | Extra arguments to add to kube-apiserver. | `map[]` |
+
+**Example Usage:**
+```yaml
+spec:
+  controlPlane:
+    extraSANs:
+      - extra.san
+    cloudProvider: external
+    nodeTaints:
+      - myTaint
+    datastoreType: k8s-dqlite
+    datastoreServersSecretRef:
+      name: sfName
+      key: sfKey
+    k8sDqlitePort: 2379
+    microclusterAddress: my.address
+    microclusterPort: ":2380"
+    extraKubeAPIServerArgs:
+      argKey: argVal
+```
+
+
+[Install custom {{product}} on machines]: ../howto/custom-ck8s.md
+[etcd best practices]: https://etcd.io/docs/v3.5/faq/#why-an-odd-number-of-cluster-members
+
diff --git a/docs/src/capi/reference/index.md b/docs/src/capi/reference/index.md
index b98291faf..1712305ec 100644
--- a/docs/src/capi/reference/index.md
+++ b/docs/src/capi/reference/index.md
@@ -12,6 +12,7 @@ Overview
 :titlesonly:
 releases
 community
+configs
 ```
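Putting several of the options above together, a minimal sketch of a `CK8sControlPlane` fragment might look as follows. The field nesting mirrors the examples in the custom install how-to, the values are placeholders, and the machine template and infrastructure references are elided with `...`:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: CK8sControlPlane
...
spec:
  ...
  replicas: 3            # odd number, suitable for stacked etcd
  version: "1.30"        # track only; the latest stable patch is installed
  spec:
    nodeName: "{{ ds.meta_data.local_hostname }}"
    controlPlane:
      cloudProvider: external
      extraSANs:
        - extra.san
    initConfig:
      enableDefaultMetricsServer: false
    postRunCommands:
      - echo "control plane node is ready"
```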