Update formatting and wording for StatefulSet Migration Redis demo
pwschuurman committed Jan 12, 2023
1 parent 75e8897 commit d59099b
Showing 1 changed file with 52 additions and 35 deletions:
`content/en/blog/_posts/2022-12-16-statefulset-migration.md`
layout: blog
title: "Kubernetes 1.26: StatefulSet Start Ordinal Simplifies Migration"
date: 2023-01-03
slug: statefulset-start-ordinal
---

**Author**: Peter Schuurman (Google)
You lose the self-healing benefit of the StatefulSet controller when your Pods
fail or are evicted.

Kubernetes v1.26 enables a StatefulSet to be responsible for a range of ordinals
within a range {0..N-1} (the ordinals 0, 1, ... up to N-1).
With it, you can scale down a range
{0..k-1} in a source cluster, and scale up the complementary range {k..N-1}
in a destination cluster, while maintaining application availability. This
enables you to retain *at most one* semantics (meaning there is at most one Pod
with a given identity running in a StatefulSet) and
[Rolling Update](/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update)
behavior when orchestrating a migration across clusters.

to a different cluster. There are many reasons why you would need to do this:
Enable the `StatefulSetStartOrdinal` feature gate on a cluster, and create a
StatefulSet with a customized `.spec.ordinals.start`.
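
For example, here is a minimal sketch of a StatefulSet that numbers its two
replicas starting at ordinal 5, creating Pods `web-5` and `web-6` (the name,
labels and Pod template here are illustrative, not part of the demo below):

```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  ordinals:
    start: 5   # ordinals are allocated starting at 5: web-5, web-6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
```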

## Try it out

In this demo, I'll use the new mechanism to migrate a
StatefulSet from one Kubernetes cluster to another. The
[redis-cluster](https://github.com/bitnami/charts/tree/main/bitnami/redis-cluster)
Bitnami Helm chart will be used to install Redis.

Tools Required:
* [yq](https://github.com/mikefarah/yq)
* [helm](https://helm.sh/docs/helm/helm_install/)

### Pre-requisites {#demo-pre-requisites}

To do this, I need two Kubernetes clusters that can both access common
networking and storage; I've named my clusters `source` and `destination`.
Specifically, I need:

* The `StatefulSetStartOrdinal` feature gate enabled on both clusters (one
  way to do this is sketched just after this list).
* Client configuration for `kubectl` that lets me access both clusters as an
administrator.
* The same `StorageClass` installed on both clusters, and set as the default
StorageClass for both clusters. This `StorageClass` should provision
underlying storage that is accessible from either or both clusters.
* A flat network topology that allows for Pods to send and receive packets to
  and from Pods in either cluster. If you are creating clusters on a cloud
  provider, this configuration may be called private cloud or private network.
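
For the feature gate, one way to enable it is at cluster creation time. As a
hedged sketch, if you are testing with [kind](https://kind.sigs.k8s.io/), a
cluster configuration like the following (the file name `kind-config.yaml` is
illustrative) turns the gate on; managed cloud providers have their own
mechanisms for enabling alpha features:

```
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  StatefulSetStartOrdinal: true
```

You would then create each cluster with something like
`kind create cluster --name source --config kind-config.yaml`, and repeat for
`destination`.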

1. Create a demo namespace on both clusters:

```
kubectl create ns kep-3335
```

2. Deploy a Redis cluster with six replicas in the source cluster:

```
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis --namespace kep-3335 \
bitnami/redis-cluster \
--set persistence.size=1Gi \
--set cluster.nodes=6
```

3. Check the replication status in the source cluster:

```
kubectl exec -it redis-redis-cluster-0 -- /bin/bash -c \
"redis-cli -c -h redis-redis-cluster -a $(kubectl get secret redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d) CLUSTER NODES;"
2cff613d763b22c180cd40668da8e452edef3fc8 10.104.0.17:6379@16379 master - 0 1669764410000 2 connected 5461-10922
```

4. Deploy a Redis cluster with zero replicas in the destination cluster:

```
helm install redis --namespace kep-3335 \
bitnami/redis-cluster \
--set persistence.size=1Gi \
--set cluster.nodes=0 \
--set cluster.init=false \
--set existingSecret=redis-redis-cluster
```

5. Scale down the `redis-redis-cluster` StatefulSet in the source cluster by 1,
to remove the replica `redis-redis-cluster-5`:

```
kubectl patch sts redis-redis-cluster -p '{"spec": {"replicas": 5}}'
```

6. Migrate dependencies from the source cluster to the destination cluster:

The following commands copy resources from `source` to `destination`. Details
that are not relevant in the `destination` cluster are removed (e.g.: `uid`,
`resourceVersion`, `status`).

**Steps for the source cluster**

Note: If using a `StorageClass` with `reclaimPolicy: Delete` configured, you
should patch the PVs in `source` with `reclaimPolicy: Retain` prior to
deletion, to retain the underlying storage used in `destination`.

```
kubectl get pvc redis-data-redis-redis-cluster-5 -o yaml | yq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.annotations, .metadata.finalizers, .status)' > /tmp/pvc-redis-data-redis-redis-cluster-5.yaml
kubectl get pv $(yq '.spec.volumeName' /tmp/pvc-redis-data-redis-redis-cluster-5.yaml) -o yaml | yq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.annotations, .metadata.finalizers, .spec.claimRef, .status)' > /tmp/pv-redis-data-redis-redis-cluster-5.yaml
kubectl get secret redis-redis-cluster -o yaml | yq 'del(.metadata.uid, .metadata.resourceVersion)' > /tmp/secret-redis-redis-cluster.yaml
```
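
To follow the `reclaimPolicy` note above, a sketch of the patch you would
apply in `source` (replace `<pv-name>` with the name of the PV bound to
`redis-data-redis-redis-cluster-5`):

```
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```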

**Steps for the destination cluster**

Note: For the PV/PVC, this procedure only works if the underlying storage system
that your PVs use can support being copied into `destination`. Storage
that is associated with a specific node or topology may not be supported.

```
kubectl create -f /tmp/pv-redis-data-redis-redis-cluster-5.yaml
kubectl create -f /tmp/pvc-redis-data-redis-redis-cluster-5.yaml
kubectl create -f /tmp/secret-redis-redis-cluster.yaml
```

7. Scale up the `redis-redis-cluster` StatefulSet in the destination cluster by
1, with a start ordinal of 5:

```
kubectl patch sts redis-redis-cluster -p '{"spec": {"ordinals": {"start": 5}, "replicas": 1}}'
```

8. Check the replication status in the destination cluster:

```
kubectl exec -it redis-redis-cluster-5 -- /bin/bash -c \
"redis-cli -c -h redis-redis-cluster -a $(kubectl get secret redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d) CLUSTER NODES;"
```

I should see that the new replica (labeled `myself`) has joined the Redis
cluster (the IP address belongs to a different CIDR block than the
replicas in the source cluster).

```
2cff613d763b22c180cd40668da8e452edef3fc8 10.104.0.17:6379@16379 master - 0 1669766684000 2 connected 5461-10922
7136e37d8864db983f334b85d2b094be47c830e5 10.108.0.22:6379@16379 myself,slave 2cff613d763b22c180cd40668da8e452edef3fc8 0 1669766685609 2 connected
2ce30362c188aabc06f3eee5d92892d95b1da5c3 10.104.0.14:6379@16379 master - 0 1669766684000 3 connected 10923-16383
961f35e37c4eea507cfe12f96e3bfd694b9c21d4 10.104.0.18:6379@16379 slave a8765caed08f3e185cef22bd09edf409dc2bcc61 0 1669766683600 1 connected
a8765caed08f3e185cef22bd09edf409dc2bcc61 10.104.0.19:6379@16379 master - 0 1669766685000 1 connected 0-5460
7743661f60b6b17b5c71d083260419588b4f2451 10.104.0.16:6379@16379 slave 2ce30362c188aabc06f3eee5d92892d95b1da5c3 0 1669766686613 3 connected
```

9. Repeat steps #5 to #7 for the remainder of the replicas, until the
Redis StatefulSet in the source cluster is scaled to 0, and the Redis
StatefulSet in the destination cluster is healthy with 6 total replicas.
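
For example, the next iteration (moving `redis-redis-cluster-4`) would look
like this sketch, assuming the resource copies from step #6 are repeated for
ordinal 4:

```
# In the source cluster: scale down by one more, removing redis-redis-cluster-4
kubectl patch sts redis-redis-cluster -p '{"spec": {"replicas": 4}}'

# In the destination cluster: start ordinals at 4 and run two replicas (4 and 5)
kubectl patch sts redis-redis-cluster -p '{"spec": {"ordinals": {"start": 4}, "replicas": 2}}'
```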

## What's Next?

This feature provides a building block for a StatefulSet to be split up across
clusters, but does not prescribe the mechanism as to how the StatefulSet should
be migrated. Migration requires coordination of StatefulSet replicas, along with
orchestration of the storage and network layer. This is dependent on the storage
and connectivity requirements of the application installed by the StatefulSet.
Additionally, many StatefulSets are managed by
[operators](/docs/concepts/extend-kubernetes/operator/), which adds another
layer of complexity to migration.

If you're interested in building enhancements to make these processes easier,
get involved with
[SIG Multicluster](https://github.com/kubernetes/community/blob/master/sig-multicluster)
to contribute!
