Add rollout upgrade docs page (#583)
---------

Co-authored-by: Mateo Florido <32885896+mateoflorido@users.noreply.github.com>
Co-authored-by: Louise K. Schmidtgen <louise.schmidtgen@canonical.com>
Co-authored-by: Nick Veitch <nick.veitch@canonical.com>
4 people authored Aug 8, 2024
1 parent 7d34ca4 commit af4f6cc
Showing 4 changed files with 129 additions and 7 deletions.
2 changes: 0 additions & 2 deletions docs/src/_parts/capi_infra_providers/aws.md

This file was deleted.

9 changes: 4 additions & 5 deletions docs/src/capi/howto/external-etcd.md
@@ -3,7 +3,7 @@
To replace the built-in datastore with an external etcd that manages the
Kubernetes state in the Cluster API (CAPI) workload cluster, follow this
how-to guide. This example shows how to create a 3-node workload cluster
with an external etcd.
with an external etcd.

## Prerequisites

@@ -31,7 +31,7 @@ It is important to follow this naming convention for the secrets since the provi
Create the secret for the etcd servers:

```
kubectl apply -f - <<EOF
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
@@ -67,7 +67,7 @@ Create the `peaches-apiserver-etcd-client` secret:

```
kubectl create secret tls peaches-apiserver-etcd-client \
--cert=$CERTS_DIR/etcd-1.pem --key=$CERTS_DIR/etcd-1-key.pem
--cert=$CERTS_DIR/etcd-1.pem --key=$CERTS_DIR/etcd-1-key.pem
```

To confirm the secrets are created, run:
@@ -116,9 +116,8 @@ kubectl create -f peaches.yaml
To check the status of the cluster, run:

```
clusterctl describe cluster peaches
clusterctl describe cluster peaches
```

<!-- LINKS -->
[getting-started]: ../tutorial/getting-started.md
[capi-templates]: https://github.com/canonical/cluster-api-k8s/tree/main/templates
1 change: 1 addition & 0 deletions docs/src/capi/howto/index.md
@@ -15,6 +15,7 @@ Overview <self>
:titlesonly:
external-etcd
rollout-upgrades
```

---
124 changes: 124 additions & 0 deletions docs/src/capi/howto/rollout-upgrades.md
@@ -0,0 +1,124 @@
# Upgrade the Kubernetes version of a cluster

This guide walks you through the steps to roll out an upgrade for a
Cluster API (CAPI) managed Kubernetes cluster. The upgrade process covers
updating the Kubernetes version of both the control plane and the worker nodes.

## Prerequisites

To follow this guide, you will need:

- A Kubernetes management cluster with Cluster API and providers installed
and configured.
- A target workload cluster managed by CAPI.
- `kubectl` installed and configured to access your management cluster.
- The workload cluster kubeconfig. We will refer to it as `c1-kubeconfig.yaml`
in the following steps.

Please refer to the [getting-started guide][getting-started] for further
details on the required setup.
This guide refers to the workload cluster as `c1`.
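
If the kubeconfig is not yet available locally, it can typically be retrieved
from the management cluster with `clusterctl` (a sketch, assuming the workload
cluster is named `c1` and lives in the default namespace):

```
clusterctl get kubeconfig c1 > c1-kubeconfig.yaml
```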

## Check the current cluster status

Prior to the upgrade, ensure that the management cluster is in a healthy
state.

```
kubectl get nodes -o wide
```
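
For a consolidated view of the CAPI resources backing the workload cluster, you
can also run `clusterctl` against the management cluster (assuming the workload
cluster is named `c1`):

```
clusterctl describe cluster c1
```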

Confirm the Kubernetes version of the workload cluster:

```
kubectl --kubeconfig c1-kubeconfig.yaml get nodes -o wide
```

```{note} For rollout upgrades, only the minor version should be updated.
```
<!-- TODO(ben): add reference to in-place upgrades once we have those docs. -->

## Update the control plane

In this first step, update the `CK8sControlPlane` resource with the new
Kubernetes version. In this example, the control plane is called
`c1-control-plane`.

```
kubectl edit ck8scontrolplane c1-control-plane
```

Replace the `spec.version` field with the new Kubernetes version.

```yaml
spec:
version: v1.30.3
```

Save your changes and exit the editor.
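
If you prefer to make the change non-interactively, the same field can be set
with `kubectl patch`; this is a sketch, assuming the target version is
`v1.30.3`:

```
kubectl patch ck8scontrolplane c1-control-plane --type merge \
  -p '{"spec":{"version":"v1.30.3"}}'
```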

## Monitor the control plane upgrade

Watch CAPI handle the rolling upgrade of the control plane nodes by running:

```
kubectl get ck8scontrolplane c1-control-plane -w
```

To inspect the nodes of the workload cluster while the rollout progresses, run:

```
kubectl --kubeconfig c1-kubeconfig.yaml get nodes -o wide
```

The machines are replaced one at a time until all of them run the desired
version.
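
To follow the replacement at the machine level, you can also watch the CAPI
machine objects from the management cluster; the label below is the standard
CAPI cluster-name label, assuming the cluster is named `c1`:

```
kubectl get machines -l cluster.x-k8s.io/cluster-name=c1 -w
```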

## Update the worker nodes

After upgrading the control plane, upgrade the worker nodes by updating the
`MachineDeployment` resource. In this example, the MachineDeployment is called
`c1-worker-md`.

```
kubectl edit machinedeployment c1-worker-md
```

Update the `spec.template.spec.version` field with the new
Kubernetes version.

```yaml
spec:
template:
spec:
version: v1.30.3
```

Save your changes and exit the editor.
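
As with the control plane, the change can be applied non-interactively with
`kubectl patch` (a sketch, assuming the target version is `v1.30.3`):

```
kubectl patch machinedeployment c1-worker-md --type merge \
  -p '{"spec":{"template":{"spec":{"version":"v1.30.3"}}}}'
```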

## Monitor the worker node upgrade

Just as with the control plane, monitor the upgrade using:

```
kubectl get machinedeployment c1-worker-md
```
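
To follow the rollout until it completes, you can watch the MachineDeployment
and its machines; the deployment-name label used here is the standard CAPI
label, so adjust it if your setup differs:

```
kubectl get machinedeployment c1-worker-md -w
kubectl get machines -l cluster.x-k8s.io/deployment-name=c1-worker-md
```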

## Verify the Kubernetes upgrade

Confirm that all nodes are healthy and running the new Kubernetes version:

```
kubectl --kubeconfig c1-kubeconfig.yaml get nodes -o wide
```

As a last step, ensure that no old machines are left behind:

```
kubectl get machines -A
```
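
To check the version each machine reports, you can print the `spec.version`
field explicitly; this is a sketch using standard `kubectl` custom columns:

```
kubectl get machines -A \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,VERSION:.spec.version
```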

<!-- LINKS -->
[getting-started]: ../tutorial/getting-started.md
