diff --git a/docs/addons/nats/README.md b/docs/addons/nats/README.md
new file mode 100644
index 00000000..3bf5c32f
--- /dev/null
+++ b/docs/addons/nats/README.md
@@ -0,0 +1,50 @@
+---
+title: NATS Addon Overview | Stash
+description: NATS Addon Overview | Stash
+menu:
+  docs_{{ .version }}:
+    identifier: stash-nats-readme
+    name: Readme
+    parent: stash-nats
+    weight: -1
+product_name: stash
+menu_name: docs_{{ .version }}
+section_menu_id: stash-addons
+url: /docs/{{ .version }}/addons/nats/
+aliases:
+  - /docs/{{ .version }}/addons/nats/README/
+---
+
+{{< notice type="warning" message="This is an Enterprise-only feature. Please install [Stash Enterprise Edition](/docs/setup/install/enterprise.md) to try this feature." >}}
+
+# Stash NATS Addon
+
+Stash `{{< param "info.version" >}}` supports extending its functionality through addons. The Stash NATS addon enables Stash to backup and restore NATS streams.
+
+This guide will give you an overview of which NATS versions are supported and how the docs are organized.
+
+## Supported NATS Versions
+
+Stash has the following addon versions for NATS:
+
+{{< versionlist "nats">}}
+
+Here, the addon follows the `M.M.P` versioning scheme, where `M.M.P` (Major.Minor.Patch) represents the respective NATS version.
+
+## Addon Version Compatibility
+
+An addon whose major version matches the major version of the NATS server should be able to take backup of that server's streams. For example, a NATS addon of version `2.x.x` should be able to take backup of any NATS server of the `2.x.x` series. However, this might not be true for some versions. In that case, we will provide a separate addon for that version. You can check which addon versions are installed in your cluster using the command shown at the end of this page.
+
+## Documentation Overview
+
+The Stash NATS documentation is organized as below:
+
+- [How does it work?](/docs/addons/nats/overview/index.md) gives an overview of how the backup and restore process for NATS works in Stash.
+- [Helm managed NATS](/docs/addons/nats/helm/index.md) shows how to backup and restore a Helm managed NATS.
+- **Different authentications:** shows how to backup and restore NATS using different authentication methods.
+  - [Basic Authentication](/docs/addons/nats/authentications/basic-auth/index.md)
+  - [Token Authentication](/docs/addons/nats/authentications/token-auth/index.md)
+  - [Nkey Authentication](/docs/addons/nats/authentications/nkey-auth/index.md)
+  - [JWT Authentication](/docs/addons/nats/authentications/jwt-auth/index.md)
+- [TLS secured NATS](/docs/addons/nats/tls/index.md) shows how to backup and restore a TLS secured NATS server.
+- [Customizing Backup & Restore Process](/docs/addons/nats/customization/index.md) shows how to customize the backup & restore process.
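+
+To check which NATS addon versions are installed in your cluster, you can list the Stash `Task` objects for NATS. The following is just a quick sketch; it assumes Stash Enterprise and its official addons are already installed (see the guides listed above):
+
+```bash
+$ kubectl get tasks.stash.appscode.com | grep nats
+nats-backup-2.6.1             24m
+nats-restore-2.6.1            24m
+```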
diff --git a/docs/addons/nats/_index.md b/docs/addons/nats/_index.md new file mode 100644 index 00000000..a5179d7b --- /dev/null +++ b/docs/addons/nats/_index.md @@ -0,0 +1,11 @@ +--- +title: Stash NATS Addon +menu: + docs_{{ .version }}: + identifier: stash-nats + name: NATS + parent: stash-addons + weight: 90 +menu_name: docs_{{ .version }} +--- + diff --git a/docs/addons/nats/authentications/_index.md b/docs/addons/nats/authentications/_index.md new file mode 100644 index 00000000..4d6748c1 --- /dev/null +++ b/docs/addons/nats/authentications/_index.md @@ -0,0 +1,13 @@ +--- +title: NATS with authentication +description: Backup and restore NATS streams using different authentication methods with Stash +menu: + docs_{{ .version }}: + identifier: stash-nats-auth + name: Authentication + parent: stash-nats + weight: 30 +product_name: stash +menu_name: docs_{{ .version }} +section_menu_id: stash-addons +--- diff --git a/docs/addons/nats/authentications/basic-auth/examples/appbinding.yaml b/docs/addons/nats/authentications/basic-auth/examples/appbinding.yaml new file mode 100644 index 00000000..7e4a9e8c --- /dev/null +++ b/docs/addons/nats/authentications/basic-auth/examples/appbinding.yaml @@ -0,0 +1,17 @@ +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats + namespace: demo +spec: + clientConfig: + service: + name: sample-nats + port: 4222 + scheme: nats + secret: + name: sample-nats-auth + type: nats.io/nats + version: 2.6.1 diff --git a/docs/addons/nats/authentications/basic-auth/examples/backupconfiguration.yaml b/docs/addons/nats/authentications/basic-auth/examples/backupconfiguration.yaml new file mode 100644 index 00000000..3e1fe9fa --- /dev/null +++ b/docs/addons/nats/authentications/basic-auth/examples/backupconfiguration.yaml @@ -0,0 +1,29 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: sample-nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/docs/addons/nats/authentications/basic-auth/examples/repository.yaml b/docs/addons/nats/authentications/basic-auth/examples/repository.yaml new file mode 100644 index 00000000..dae91419 --- /dev/null +++ b/docs/addons/nats/authentications/basic-auth/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/nats/sample-nats + storageSecretName: gcs-secret diff --git a/docs/addons/nats/authentications/basic-auth/examples/restoresession.yaml b/docs/addons/nats/authentications/basic-auth/examples/restoresession.yaml new file mode 100644 index 00000000..9c5f5242 --- /dev/null +++ b/docs/addons/nats/authentications/basic-auth/examples/restoresession.yaml @@ -0,0 +1,26 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 
+ kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] diff --git a/docs/addons/nats/authentications/basic-auth/examples/secret.yaml b/docs/addons/nats/authentications/basic-auth/examples/secret.yaml new file mode 100644 index 00000000..d83d02ad --- /dev/null +++ b/docs/addons/nats/authentications/basic-auth/examples/secret.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Secret +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats-auth + namespace: demo +data: + password: MjIy + username: dTI= diff --git a/docs/addons/nats/authentications/basic-auth/images/sample-nats-backup.png b/docs/addons/nats/authentications/basic-auth/images/sample-nats-backup.png new file mode 100644 index 00000000..b3f603de Binary files /dev/null and b/docs/addons/nats/authentications/basic-auth/images/sample-nats-backup.png differ diff --git a/docs/addons/nats/authentications/basic-auth/index.md b/docs/addons/nats/authentications/basic-auth/index.md new file mode 100644 index 00000000..594c8b20 --- /dev/null +++ b/docs/addons/nats/authentications/basic-auth/index.md @@ -0,0 +1,649 @@ +--- +title: NATS with Basic authentication +description: Backup NATS with Basic authentication using Stash +menu: + docs_{{ .version }}: + identifier: stash-nats-basic-auth + name: Basic Authentication + parent: stash-nats-auth + weight: 10 +product_name: stash +menu_name: docs_{{ .version }} +section_menu_id: stash-addons +--- + +# Backup NATS with Basic Authentication using Stash + +Stash `{{< param "info.version" >}}` supports backup and restoration of NATS streams. This guide will show you how you can backup & restore a NATS server with basic authentication using Stash. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. +- Install Stash Enterprise in your cluster following the steps [here](/docs/setup/install/enterprise.md). +- If you are not familiar with how Stash backup and restore NATS streams, please check the following guide [here](/docs/addons/nats/overview/index.md). + +You have to be familiar with following custom resources: + +- [AppBinding](/docs/concepts/crds/appbinding.md) +- [Function](/docs/concepts/crds/function.md) +- [Task](/docs/concepts/crds/task.md) +- [BackupConfiguration](/docs/concepts/crds/backupconfiguration.md) +- [BackupSession](/docs/concepts/crds/backupsession.md) +- [RestoreSession](/docs/concepts/crds/restoresession.md) + +To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created already. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored [here](https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/authentications/basic-auth/examples). + +## Prepare NATS + +In this section, we are going to deploy a NATS cluster with basic authentication enabled. Then, we are going to create a stream and publish some messages into it. + +### Deploy NATS Cluster + +At first, let's deploy a NATS cluster. Here, we are going to use [NATS](https://github.com/nats-io/k8s/tree/main/helm/charts/nats ) chart from [nats.io](https://nats.io/). 
+ +Let's deploy a NATS cluster named `sample-nats` using Helm as below, + +```bash +# Add nats chart registry +$ helm repo add nats https://nats-io.github.io/k8s/helm/charts/ +# Update helm registries +$ helm repo update +# Install nats/nats chart into demo namespace +$ helm install sample-nats nats/nats -n demo \ +--set nats.jetstream.enabled=true \ +--set nats.jetstream.fileStorage.enabled=true \ +--set cluster.enabled=true \ +--set cluster.replicas=3 \ +--set auth.enabled=true \ +--set-string auth.basic.users[0].user="sample-user",auth.basic.users[0].password="changeit" +``` + +This chart will create the necessary StatefulSet, Service, PVCs etc. for the NATS cluster. You can easily view all the resources created by chart using [ketall](https://github.com/corneliusweig/ketall) `kubectl` plugin as below, + +```bash +❯ kubectl get-all -n demo -l app.kubernetes.io/instance=sample-nats +NAME NAMESPACE AGE +configmap/sample-nats-config demo 11m +endpoints/sample-nats demo 11m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-0 demo 11m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-1 demo 10m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-2 demo 10m +pod/sample-nats-0 demo 11m +pod/sample-nats-1 demo 10m +pod/sample-nats-2 demo 10m +service/sample-nats demo 11m +controllerrevision.apps/sample-nats-775468b94f demo 11m +statefulset.apps/sample-nats demo 11m +endpointslice.discovery.k8s.io/sample-nats-7n7v6 demo 11m +``` + +Now, wait for the NATS server pods `sample-nats-0`, `sample-nats-1`, `sample-nats-2` to go into `Running` state, + +```bash +❯ kubectl get pod -n demo -l app.kubernetes.io/instance=sample-nats +NAME READY STATUS RESTARTS AGE +sample-nats-0 3/3 Running 0 9m58s +sample-nats-1 3/3 Running 0 9m35s +sample-nats-2 3/3 Running 0 9m12s +``` + +Once the pods are in `Running` state, verify that the NATS server is ready to accept the connections. + +```bash +❯ kubectl logs -n demo sample-nats-0 -c nats +[7] 2021/09/06 08:33:53.111508 [INF] Starting nats-server +[7] 2021/09/06 08:33:53.111560 [INF] Version: 2.6.1 +... +[7] 2021/09/06 08:33:53.116004 [INF] Server is ready +``` + +From the above log, we can see the NATS server is ready to accept connections. + +### Insert Sample Data + +The above Helm chart also deploy a pod with nats-box image which can be used to interact with the NATS server. Let's verify the nats-box pod has been created. + +```bash +❯ kubectl get pod -n demo -l app=sample-nats-box +NAME READY STATUS RESTARTS AGE +sample-nats-box-785f8458d7-wtnfx 1/1 Running 0 7m20s +``` + +Let's exec into the nats-box pod, + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... +# Let's export the username and password as environment variables to make further commands re-usable. 
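+# (The nats CLI picks up NATS_USER and NATS_PASSWORD from the environment, so we
+# won't have to pass --user/--password to each of the commands below.)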
+sample-nats-box-785f8458d7-wtnfx:~# export NATS_USER=sample-user +sample-nats-box-785f8458d7-wtnfx:~# export NATS_PASSWORD=changeit + +# Let's create a stream named "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats stream add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --max-msgs-per-subject=-1 --discard old --dupe-window="0s" --replicas 1 +Stream ORDERS was created + +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 0 + Bytes: 0 B + FirstSeq: 0 + LastSeq: 0 + Active Consumers: 0 + + +# Verify that the stream has been created successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +Streams: + + ORDERS + +# Lets add some messages to the stream "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats pub ORDERS.scratch hello +08:55:39 Published 5 bytes to "ORDERS.scratch" + +# Add another message +sample-nats-box-785f8458d7-wtnfx:~# nats pub ORDERS.scratch world +08:56:11 Published 5 bytes to "ORDERS.scratch" + +# Verify that the messages have been published to the stream successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream info ORDERS +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 2 + Bytes: 98 B + FirstSeq: 1 @ 2021-09-03T08:55:39 UTC + LastSeq: 2 @ 2021-09-03T08:56:11 UTC + Active Consumers: 0 + +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +We have successfully deployed a NATS cluster, created a stream and publish some messages into the stream. In the subsequent sections, we are going to backup this sample data using Stash. + +## Prepare for Backup + +In this section, we are going to prepare the necessary resources (i.e. connection information, backend information, etc.) before backup. + +### Ensure NATS Addon + +When you install Stash Enterprise version, it will automatically install all the official addons. Make sure that NATS addon has been installed properly using the following command. + +```bash +❯ kubectl get tasks.stash.appscode.com | grep nats +nats-backup-2.6.1 24m +nats-restore-2.6.1 24m +``` + +This addon should be able to take backup of the NATS streams with matching major versions as discussed in [Addon Version Compatibility](/docs/addons/nats/README.md#addon-version-compatibility). + +### Create Secret + + Lets create a secret with basic auth credentials. Below is the YAML of `Secret` object we are going to create. 
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  labels:
+    app.kubernetes.io/instance: sample-nats
+  name: sample-nats-auth
+  namespace: demo
+data:
+  password: Y2hhbmdlaXQ=
+  username: c2FtcGxlLXVzZXI=
+```
+
+Let's create the `Secret` we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/basic-auth/examples/secret.yaml
+secret/sample-nats-auth created
+```
+
+### Create AppBinding
+
+Stash needs to know how to connect with the NATS server. An `AppBinding` provides exactly this information. It holds the Service and Secret information of the NATS server. You have to point to the respective `AppBinding` as a target of backup instead of the NATS server itself.
+
+Here is the YAML of the `AppBinding` that we are going to create for the NATS server we have deployed earlier.
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  labels:
+    app.kubernetes.io/instance: sample-nats
+  name: sample-nats
+  namespace: demo
+spec:
+  clientConfig:
+    service:
+      name: sample-nats
+      port: 4222
+      scheme: nats
+  secret:
+    name: sample-nats-auth
+  type: nats.io/nats
+  version: 2.6.1
+```
+
+Here,
+
+- `.spec.clientConfig.service` specifies the Service information to use to connect with the NATS server.
+- `.spec.secret` specifies the name of the Secret that holds the necessary credentials to access the server.
+- `.spec.type` specifies the type of the target. This is particularly helpful in auto-backup where you want to use different path prefixes for different types of target.
+
+Let's create the `AppBinding` we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/basic-auth/examples/appbinding.yaml
+appbinding.appcatalog.appscode.com/sample-nats created
+```
+
+### Prepare Backend
+
+We are going to store our backed up data into a GCS bucket. So, we need to create a Secret with GCS credentials and a `Repository` object with the bucket information. If you want to use a different backend, please read the respective backend configuration doc from [here](/docs/guides/latest/backends/overview.md).
+
+**Create Storage Secret:**
+
+At first, let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket,
+
+```bash
+$ echo -n 'changeit' > RESTIC_PASSWORD
+$ echo -n '<your-project-id>' > GOOGLE_PROJECT_ID
+$ cat downloaded-sa-json.key > GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+$ kubectl create secret generic -n demo gcs-secret \
+    --from-file=./RESTIC_PASSWORD \
+    --from-file=./GOOGLE_PROJECT_ID \
+    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
+secret/gcs-secret created
+```
+
+**Create Repository:**
+
+Now, create a `Repository` object with the information of your desired bucket. Below is the YAML of the `Repository` object we are going to create,
+
+```yaml
+apiVersion: stash.appscode.com/v1alpha1
+kind: Repository
+metadata:
+  name: gcs-repo
+  namespace: demo
+spec:
+  backend:
+    gcs:
+      bucket: stash-testing
+      prefix: /demo/nats/sample-nats
+    storageSecretName: gcs-secret
+```
+
+Let's create the `Repository` we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/basic-auth/examples/repository.yaml
+repository.stash.appscode.com/gcs-repo created
+```
+
+Now, we are ready to backup our streams into our desired backend.
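+
+Before moving on, you can make sure the backend objects are in place. This is just a quick sanity check using the object names we created above:
+
+```bash
+$ kubectl get secret -n demo gcs-secret
+$ kubectl get repository -n demo gcs-repo
+```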
+ +### Backup + +To schedule a backup, we have to create a `BackupConfiguration` object targeting the respective `AppBinding` of our NATS server. Then, Stash will create a CronJob to periodically backup the streams. + +#### Create BackupConfiguration + +Below is the YAML for `BackupConfiguration` object that we are going to use to backup the streams of the NATS server we have created earlier, + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: sample-nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +Here, + +- `.spec.schedule` specifies that we want to backup the streams at 5 minutes intervals. +- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to backup NATS streams. +- `.spec.repository.name` specifies the Repository CR name we have created earlier with backend information. +- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted NATS server. +- `spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the dumped data temporarily before uploading it into the cloud bucket. +- `.spec.retentionPolicy` specifies a policy indicating how we want to cleanup the old backups. + +Let's create the `BackupConfiguration` object we have shown above, + +```bash +$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/basic-auth/examples/backupconfiguration.yaml +backupconfiguration.stash.appscode.com/sample-nats-backup created +``` + +#### Verify CronJob + +If everything goes well, Stash will create a CronJob with the schedule specified in `spec.schedule` field of `BackupConfiguration` object. + +Verify that the CronJob has been created using the following command, + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * False 0 14s +``` + +#### Wait for BackupSession + +The `sample-nats-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` object. + +Now, wait for a schedule to appear. Run the following command to watch for `BackupSession` object, + +```bash +❯ kubectl get backupsession -n demo -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +sample-nats-backup-8x8fp BackupConfiguration sample-nats-backup Succeeded 42s 8m28s +``` + +Here, the phase `Succeeded` means that the backup process has been completed successfully. + +#### Verify Backup + +Now, we are going to verify whether the backed up data is present in the backend or not. Once a backup is completed, Stash will update the respective `Repository` object to reflect the backup completion. 
Check that the repository `gcs-repo` has been updated by the following command, + +```bash +❯ kubectl get repository -n demo +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +gcs-repo true 1.382 KiB 1 9m4s 24m +``` + +Now, if we navigate to the GCS bucket, we will see the backed up data has been stored in `demo/nats/sample-nats` directory as specified by `.spec.backend.gcs.prefix` field of the `Repository` object. + +
+<figure align="center">
+  <img alt="Backup data in GCS Bucket" src="images/sample-nats-backup.png">
+  <figcaption align="center">Fig: Backup data in GCS Bucket</figcaption>
+</figure>
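+
+If you prefer the command line to the GCS web console, you can also list the uploaded objects with `gsutil` (assuming the tool is installed and has access to the bucket used in the `Repository`):
+
+```bash
+$ gsutil ls gs://stash-testing/demo/nats/sample-nats/
+```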
+ +> Note: Stash keeps all the backed up data encrypted. So, data in the backend will not make any sense until they are decrypted. + +## Restore + +If you have followed the previous sections properly, you should have a successful backup of your NATS streams. Now, we are going to show how you can restore the streams from the backed up data. + +### Restore Into the Same NATS Cluster + +You can restore your data into the same nats cluster you have backed up from or into a different NATS cluster in the same cluster or a different cluster. In this section, we are going to show you how to restore in the same nats cluster which may be necessary when you have accidentally lost any data. + +#### Temporarily Pause Backup + +At first, let's stop taking any further backup of the NATS streams so that no backup runs after we delete the sample data. We are going to pause the `BackupConfiguration` object. Stash will stop taking any further backup when the `BackupConfiguration` is paused. + +Let's pause the `sample-nats-backup` BackupConfiguration, + +```bash +$ kubectl patch backupconfiguration -n demo sample-nats-backup --type="merge" --patch='{"spec": {"paused": true}}' +backupconfiguration.stash.appscode.com/sample-nats-backup patched +``` + +Verify that the `BackupConfiguration` has been paused, + +```bash +❯ kubectl get backupconfiguration -n demo sample-nats-backup +NAME TASK SCHEDULE PAUSED AGE +sample-nats-backup nats-backup-2.6.1 */5 * * * * true 2d18h +``` + +Notice the `PAUSED` column. Value `true` for this field means that the `BackupConfiguration` has been paused. + +Stash will also suspend the respective CronJob. + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * True 0 56s 2d18h +``` + +#### Simulate Disaster + +Now, let's simulate a disaster scenario. Here, we are going to exec into the nats-box pod and delete the sample data we have inserted earlier. + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... +# Let's export the username and password as environment variables to make further commands re-usable. +sample-nats-box-785f8458d7-wtnfx:~# export NATS_USER=sample-user +sample-nats-box-785f8458d7-wtnfx:~# export NATS_PASSWORD=changeit + +# delete the stream "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats stream rm ORDERS -f + +# verify that the stream has been deleted +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +No Streams defined +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +#### Create RestoreSession + +To restore the streams, you have to create a `RestoreSession` object pointing to the `AppBinding` of the targeted NATS server. + +Here, is the YAML of the `RestoreSession` object that we are going to use for restoring the streams of the NATS server. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] +``` + +Here, + +- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to restore NATS streams. 
+- `.spec.repository.name` specifies the Repository object that holds the backend information where our backed up data has been stored.
+- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted NATS server.
+- `.spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the restored data temporarily before injecting it into the NATS server.
+- `.spec.rules` specifies that we are restoring data from the latest backup snapshot of the streams.
+
+Let's create the `RestoreSession` object we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/basic-auth/examples/restoresession.yaml
+restoresession.stash.appscode.com/sample-nats-restore created
+```
+
+Once you have created the `RestoreSession` object, Stash will create a restore Job. Run the following command to watch the phase of the `RestoreSession` object,
+
+```bash
+❯ kubectl get restoresession -n demo -w
+NAME                  REPOSITORY   PHASE       DURATION   AGE
+sample-nats-restore   gcs-repo     Succeeded   15s        55s
+```
+
+The `Succeeded` phase means that the restore process has been completed successfully.
+
+#### Verify Restored Data
+
+Now, let's exec into the nats-box pod and verify whether the actual data has been restored or not,
+
+```bash
+❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l
+...
+# Let's export the username and password as environment variables to make further commands re-usable.
+sample-nats-box-785f8458d7-wtnfx:~# export NATS_USER=sample-user
+sample-nats-box-785f8458d7-wtnfx:~# export NATS_PASSWORD=changeit
+
+# Verify that the stream has been restored successfully
+sample-nats-box-785f8458d7-wtnfx:~# nats stream ls
+Streams:
+
+	ORDERS
+
+# Verify that the messages have been restored successfully
+sample-nats-box-785f8458d7-wtnfx:~# nats stream info ORDERS
+Information for Stream ORDERS created 2021-09-03T07:12:07Z
+
+Configuration:
+
+             Subjects: ORDERS.*
+     Acknowledgements: true
+            Retention: File - Limits
+             Replicas: 1
+       Discard Policy: Old
+     Duplicate Window: 2m0s
+     Maximum Messages: unlimited
+        Maximum Bytes: unlimited
+          Maximum Age: 1y0d0h0m0s
+ Maximum Message Size: unlimited
+    Maximum Consumers: unlimited
+
+
+Cluster Information:
+
+                 Name: nats
+               Leader: nats-0
+
+State:
+
+             Messages: 2
+                Bytes: 98 B
+             FirstSeq: 1 @ 2021-09-03T08:55:39 UTC
+              LastSeq: 2 @ 2021-09-03T08:56:11 UTC
+     Active Consumers: 0
+
+sample-nats-box-785f8458d7-wtnfx:~# exit
+```
+
+Hence, we can see from the above output that the deleted data has been restored successfully from the backup.
+
+#### Resume Backup
+
+Since our data has been restored successfully, we can now resume our usual backup process. Resume the `BackupConfiguration` using the following command,
+
+```bash
+❯ kubectl patch backupconfiguration -n demo sample-nats-backup --type="merge" --patch='{"spec": {"paused": false}}'
+backupconfiguration.stash.appscode.com/sample-nats-backup patched
+```
+
+Verify that the `BackupConfiguration` has been resumed,
+
+```bash
+❯ kubectl get backupconfiguration -n demo sample-nats-backup
+NAME                 TASK                SCHEDULE      PAUSED   AGE
+sample-nats-backup   nats-backup-2.6.1   */5 * * * *   false    2d19h
+```
+
+Here, `false` in the `PAUSED` column means the backup has been resumed successfully. The CronJob should also be resumed now.
+ +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * False 0 3m24s 4h54m +``` + +Here, `False` in the `SUSPEND` column means the CronJob is no longer suspended and will trigger in the next schedule. + +## Cleanup + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete -n demo backupconfiguration sample-nats-backup +kubectl delete -n demo restoresession sample-nats-restore +kubectl delete -n demo repository gcs-repo +# delete the nats chart +helm delete sample-nats -n demo +``` diff --git a/docs/addons/nats/authentications/jwt-auth/examples/appbinding.yaml b/docs/addons/nats/authentications/jwt-auth/examples/appbinding.yaml new file mode 100644 index 00000000..7e4a9e8c --- /dev/null +++ b/docs/addons/nats/authentications/jwt-auth/examples/appbinding.yaml @@ -0,0 +1,17 @@ +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats + namespace: demo +spec: + clientConfig: + service: + name: sample-nats + port: 4222 + scheme: nats + secret: + name: sample-nats-auth + type: nats.io/nats + version: 2.6.1 diff --git a/docs/addons/nats/authentications/jwt-auth/examples/backupconfiguration.yaml b/docs/addons/nats/authentications/jwt-auth/examples/backupconfiguration.yaml new file mode 100644 index 00000000..70b58a66 --- /dev/null +++ b/docs/addons/nats/authentications/jwt-auth/examples/backupconfiguration.yaml @@ -0,0 +1,29 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: sample-nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/docs/addons/nats/authentications/jwt-auth/examples/repository.yaml b/docs/addons/nats/authentications/jwt-auth/examples/repository.yaml new file mode 100644 index 00000000..dae91419 --- /dev/null +++ b/docs/addons/nats/authentications/jwt-auth/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/nats/sample-nats + storageSecretName: gcs-secret diff --git a/docs/addons/nats/authentications/jwt-auth/examples/restoresession.yaml b/docs/addons/nats/authentications/jwt-auth/examples/restoresession.yaml new file mode 100644 index 00000000..9c5f5242 --- /dev/null +++ b/docs/addons/nats/authentications/jwt-auth/examples/restoresession.yaml @@ -0,0 +1,26 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] diff --git a/docs/addons/nats/authentications/jwt-auth/examples/secret.yaml 
b/docs/addons/nats/authentications/jwt-auth/examples/secret.yaml new file mode 100644 index 00000000..7e92b7e6 --- /dev/null +++ b/docs/addons/nats/authentications/jwt-auth/examples/secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats-auth + namespace: demo +data: + creds: LS0tLS1CRUdJTiBOQVRTIFVTRVIgSldULS0tLS0KZXlKMGVYQWlPaUpLVjFRaUxDSmhiR2NpT2lKbFpESTFOVEU1TFc1clpYa2lmUS5leUpxZEdraU9pSklRVXBLVDFJeVQwZFBXRVphVUZOQ1VVUk5OVTlaVnpKYVNVVlRURXcxTjFNMVJGVkZWVmhSVGxRMlNUVkZUMWxFVnpaQklpd2lhV0YwSWpveE5qSTVPRGt6TWpReExDSnBjM01pT2lKQlJFWkRNMWxNUVZVMU5rNHlOa2hIVGpkVVIxQlhSRmhSVGxwVFFsRkZXa3RSV0ZCUVVsQXlORWhMTkRaV1NsbFJXRFExVXpKVlNDSXNJbTVoYldVaU9pSjRJaXdpYzNWaUlqb2lWVUZZVEVnMFdUVlNOazVNUmxwT1RsaENVVTh5V1VaWk56SlNRbE5ETms4MFZFNUpRa0pOUzBOVlRGcExNell6TjFGQ05GTldNa1lpTENKdVlYUnpJanA3SW5CMVlpSTZlMzBzSW5OMVlpSTZlMzBzSW5OMVluTWlPaTB4TENKa1lYUmhJam90TVN3aWNHRjViRzloWkNJNkxURXNJblI1Y0dVaU9pSjFjMlZ5SWl3aWRtVnljMmx2YmlJNk1uMTkuMFFyeW11Mi1HdUVYV3hOaU5MNGRxc1JMdlJ4VGNXQ24zRHN6UTJIbUhHOElEbXhwUG9oeGRGMFU3aUQ5WGdQU2xSMVBOakJ6bXFxMHhFME1lWmRTRHcKLS0tLS0tRU5EIE5BVFMgVVNFUiBKV1QtLS0tLS0KCi0tLS0tQkVHSU4gVVNFUiBOS0VZIFNFRUQtLS0tLQpTVUFCUlc0Sjc2RlpCTjVTMlZOR0MzWVdLRlRXR1FVNTI3TzVSQkdPTVRQNkRYRUpDSUZSS0NKSUtVCi0tLS0tLUVORCBVU0VSIE5LRVkgU0VFRC0tLS0tLQo= diff --git a/docs/addons/nats/authentications/jwt-auth/images/sample-nats-backup.png b/docs/addons/nats/authentications/jwt-auth/images/sample-nats-backup.png new file mode 100644 index 00000000..b3f603de Binary files /dev/null and b/docs/addons/nats/authentications/jwt-auth/images/sample-nats-backup.png differ diff --git a/docs/addons/nats/authentications/jwt-auth/index.md b/docs/addons/nats/authentications/jwt-auth/index.md new file mode 100644 index 00000000..91fe1ff0 --- /dev/null +++ b/docs/addons/nats/authentications/jwt-auth/index.md @@ -0,0 +1,652 @@ +--- +title: NATS with JWT authentication +description: Backup NATS with JWT authentication using Stash +menu: + docs_{{ .version }}: + identifier: stash-nats-jwt-auth + name: JWT Authentication + parent: stash-nats-auth + weight: 25 +product_name: stash +menu_name: docs_{{ .version }} +section_menu_id: stash-addons +--- + +# Backup NATS with JWT Authentication using Stash + +Stash `{{< param "info.version" >}}` supports backup and restoration of NATS streams. This guide will show you how you can backup & restore a NATS server with JWT authentication using Stash. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. +- Install Stash Enterprise in your cluster following the steps [here](/docs/setup/install/enterprise.md). +- If you are not familiar with how Stash backup and restore NATS streams, please check the following guide [here](/docs/addons/nats/overview/index.md). + +You have to be familiar with following custom resources: + +- [AppBinding](/docs/concepts/crds/appbinding.md) +- [Function](/docs/concepts/crds/function.md) +- [Task](/docs/concepts/crds/task.md) +- [BackupConfiguration](/docs/concepts/crds/backupconfiguration.md) +- [BackupSession](/docs/concepts/crds/backupsession.md) +- [RestoreSession](/docs/concepts/crds/restoresession.md) + +To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created already. 
+ +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored [here](https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/authentications/jwt-auth/examples). + +## Prepare NATS + +In this section, we are going to deploy a NATS cluster with JWT authentication enabled. Then, we are going to create a stream and publish some messages into it. + +### Deploy NATS Cluster + +At first, let's deploy a NATS cluster. Here, we are going to use [NATS](https://github.com/nats-io/k8s/tree/main/helm/charts/nats ) chart from [nats.io](https://nats.io/). + +Let's deploy a NATS cluster named `sample-nats` using Helm as below, + +```bash +# Add nats chart registry +$ helm repo add nats https://nats-io.github.io/k8s/helm/charts/ +# Update helm registries +$ helm repo update +# Install nats/nats chart into demo namespace +$ helm install sample-nats nats/nats -n demo \ +--set nats.jetstream.enabled=true \ +--set nats.jetstream.fileStorage.enabled=true \ +--set cluster.enabled=true \ +--set cluster.replicas=3 \ +--set auth.enabled=true \ +--set auth.resolver.type=full \ +--set auth.resolver.operator=eyJ0eXAiOiJKV1QiLCJhbGciOiJlZDI1NTE5LW5rZXkifQ.eyJhdWQiOiJPQU5US0NDTkFQTUdCNE9YRE1YT1ZON01DQUVKNVZGV1ZXVEFVVlVXQllBUFhMWlpHU1NZVTRLVCIsImV4cCI6MTk0NTQyNjA0MSwianRpIjoiWktRTllaTlNRTUhNQzNHREdVRVpDUFlDT0RNSjIyMzRPM0pGTjUzWlZYWEZBUFU3Qlg2QSIsImlhdCI6MTYyOTg5MzI0MSwiaXNzIjoiT0FOVEtDQ05BUE1HQjRPWERNWE9WTjdNQ0FFSjVWRldWV1RBVVZVV0JZQVBYTFpaR1NTWVU0S1QiLCJuYW1lIjoiS08iLCJuYmYiOjE2Mjk4OTMyNDEsInN1YiI6Ik9BTlRLQ0NOQVBNR0I0T1hETVhPVk43TUNBRUo1VkZXVldUQVVWVVdCWUFQWExaWkdTU1lVNEtUIiwibmF0cyI6eyJzaWduaW5nX2tleXMiOlsiT0FOVEtDQ05BUE1HQjRPWERNWE9WTjdNQ0FFSjVWRldWV1RBVVZVV0JZQVBYTFpaR1NTWVU0S1QiXSwidHlwZSI6Im9wZXJhdG9yIiwidmVyc2lvbiI6Mn19.jxs4znpE50PzRFfKOjENlFQTfsRHH5VqIplnTgAziUJuYBSNmBQeYsBDJTOgLJyADgtqIWkAQF_G5K7xuVXpCg \ +--set auth.resolver.systemAccount=ABQBM7PTUNWRWQRWFFQGRCVRQ7ULYSZQLCMGDK62WYRHRO3NUN3SUONF \ +--set auth.resolver.resolverPreload.ABQBM7PTUNWRWQRWFFQGRCVRQ7ULYSZQLCMGDK62WYRHRO3NUN3SUONF=eyJ0eXAiOiJKV1QiLCJhbGciOiJlZDI1NTE5LW5rZXkifQ.eyJqdGkiOiJDRTZHVUdISEYzVzRGUU5VMjdVWks2SUtNSkpQWEEzUlEyNDVYTE5LWEVIS0xaQ1ZaT1FBIiwiaWF0IjoxNjI5ODkzMjQxLCJpc3MiOiJPQU5US0NDTkFQTUdCNE9YRE1YT1ZON01DQUVKNVZGV1ZXVEFVVlVXQllBUFhMWlpHU1NZVTRLVCIsIm5hbWUiOiJTWVMiLCJzdWIiOiJBQlFCTTdQVFVOV1JXUVJXRkZRR1JDVlJRN1VMWVNaUUxDTUdESzYyV1lSSFJPM05VTjNTVU9ORiIsIm5hdHMiOnsibGltaXRzIjp7InN1YnMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwid2lsZGNhcmRzIjp0cnVlLCJjb25uIjotMSwibGVhZiI6LTF9LCJkZWZhdWx0X3Blcm1pc3Npb25zIjp7InB1YiI6e30sInN1YiI6e319LCJ0eXBlIjoiYWNjb3VudCIsInZlcnNpb24iOjJ9fQ.DM8U4Ld4OWmd-hk9fobMI3fWMDZdLr-Q33Uq7h5eoM-RMVKN-5nuUlmaxPffYRwE1egVIn9mQuu7YYmwX31wDQ \ +--set 
auth.resolver.resolverPreload.ADFC3YLAU56N26HGN7TGPWDXQNZSBQEZKQXPPRP24HK46VJYQX45S2UH=eyJ0eXAiOiJKV1QiLCJhbGciOiJlZDI1NTE5LW5rZXkifQ.eyJqdGkiOiJEMlYzVE9NUTQ1WjRGUFNTM0hCMk5TR1dLV0xITDVNSUpVWENNSTRGVEtYNTIzNjdWRUhRIiwiaWF0IjoxNjI5ODkzMjQxLCJpc3MiOiJPQU5US0NDTkFQTUdCNE9YRE1YT1ZON01DQUVKNVZGV1ZXVEFVVlVXQllBUFhMWlpHU1NZVTRLVCIsIm5hbWUiOiJYIiwic3ViIjoiQURGQzNZTEFVNTZOMjZIR043VEdQV0RYUU5aU0JRRVpLUVhQUFJQMjRISzQ2VkpZUVg0NVMyVUgiLCJuYXRzIjp7ImV4cG9ydHMiOlt7Im5hbWUiOiJ4LkV2ZW50cyIsInN1YmplY3QiOiJ4LkV2ZW50cyIsInR5cGUiOiJzdHJlYW0ifSx7Im5hbWUiOiJ4Lk5vdGlmaWNhdGlvbnMiLCJzdWJqZWN0IjoieC5Ob3RpZmljYXRpb25zIiwidHlwZSI6InNlcnZpY2UiLCJyZXNwb25zZV90eXBlIjoiU3RyZWFtIn1dLCJsaW1pdHMiOnsic3VicyI6LTEsImRhdGEiOi0xLCJwYXlsb2FkIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJ3aWxkY2FyZHMiOnRydWUsImNvbm4iOi0xLCJsZWFmIjotMSwibWVtX3N0b3JhZ2UiOi0xLCJkaXNrX3N0b3JhZ2UiOi0xLCJzdHJlYW1zIjotMSwiY29uc3VtZXIiOi0xfSwiZGVmYXVsdF9wZXJtaXNzaW9ucyI6eyJwdWIiOnt9LCJzdWIiOnt9fSwidHlwZSI6ImFjY291bnQiLCJ2ZXJzaW9uIjoyfX0.oXatnt7Tqt1iHDpUAKGroac9Sv6G4kbAPIt75BrBRh6B9MOFa_y8QLsUnIffI4-aG31cVYjECs7QlsNTPJ-oCg +``` + +This chart will create the necessary StatefulSet, Service, PVCs etc. for the NATS cluster. You can easily view all the resources created by chart using [ketall](https://github.com/corneliusweig/ketall) `kubectl` plugin as below, + +```bash +❯ kubectl get-all -n demo -l app.kubernetes.io/instance=sample-nats +NAME NAMESPACE AGE +configmap/sample-nats-config demo 11m +endpoints/sample-nats demo 11m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-0 demo 11m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-1 demo 10m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-2 demo 10m +pod/sample-nats-0 demo 11m +pod/sample-nats-1 demo 10m +pod/sample-nats-2 demo 10m +service/sample-nats demo 11m +controllerrevision.apps/sample-nats-775468b94f demo 11m +statefulset.apps/sample-nats demo 11m +endpointslice.discovery.k8s.io/sample-nats-7n7v6 demo 11m +``` + +Now, wait for the NATS server pods `sample-nats-0`, `sample-nats-1`, `sample-nats-2` to go into `Running` state, + +```bash +❯ kubectl get pod -n demo -l app.kubernetes.io/instance=sample-nats +NAME READY STATUS RESTARTS AGE +sample-nats-0 3/3 Running 0 9m58s +sample-nats-1 3/3 Running 0 9m35s +sample-nats-2 3/3 Running 0 9m12s +``` + +Once the pods are in `Running` state, verify that the NATS server is ready to accept the connections. + +```bash +❯ kubectl logs -n demo sample-nats-0 -c nats +[7] 2021/09/06 08:33:53.111508 [INF] Starting nats-server +[7] 2021/09/06 08:33:53.111560 [INF] Version: 2.6.1 +... +[7] 2021/09/06 08:33:53.116004 [INF] Server is ready +``` + +From the above log, we can see the NATS server is ready to accept connections. + +### Insert Sample Data +The above Helm chart also deploy a pod with nats-box image which can be used to interact with the NATS server. Let's verify the nats-box pod has been created. + +```bash +❯ kubectl get pod -n demo -l app=sample-nats-box +NAME READY STATUS RESTARTS AGE +sample-nats-box-785f8458d7-wtnfx 1/1 Running 0 7m20s +``` + +Let's exec into the nats-box pod, + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... 
+# Let's create the creds file for our user +sample-nats-box-785f8458d7-wtnfx:~# echo LS0tLS1CRUdJTiBOQVRTIFVTRVIgSldULS0tLS0KZXlKMGVYQWlPaUpLVjFRaUxDSmhiR2NpT2lKbFpESTFOVEU1TFc1clpYa2lmUS5leUpxZEdraU9pSklRVXBLVDFJeVQwZFBXRVphVUZOQ1VVUk5OVTlaVnpKYVNVVlRURXcxTjFNMVJGVkZWVmhSVGxRMlNUVkZUMWxFVnpaQklpd2lhV0YwSWpveE5qSTVPRGt6TWpReExDSnBjM01pT2lKQlJFWkRNMWxNUVZVMU5rNHlOa2hIVGpkVVIxQlhSRmhSVGxwVFFsRkZXa3RSV0ZCUVVsQXlORWhMTkRaV1NsbFJXRFExVXpKVlNDSXNJbTVoYldVaU9pSjRJaXdpYzNWaUlqb2lWVUZZVEVnMFdUVlNOazVNUmxwT1RsaENVVTh5V1VaWk56SlNRbE5ETms4MFZFNUpRa0pOUzBOVlRGcExNell6TjFGQ05GTldNa1lpTENKdVlYUnpJanA3SW5CMVlpSTZlMzBzSW5OMVlpSTZlMzBzSW5OMVluTWlPaTB4TENKa1lYUmhJam90TVN3aWNHRjViRzloWkNJNkxURXNJblI1Y0dVaU9pSjFjMlZ5SWl3aWRtVnljMmx2YmlJNk1uMTkuMFFyeW11Mi1HdUVYV3hOaU5MNGRxc1JMdlJ4VGNXQ24zRHN6UTJIbUhHOElEbXhwUG9oeGRGMFU3aUQ5WGdQU2xSMVBOakJ6bXFxMHhFME1lWmRTRHcKLS0tLS0tRU5EIE5BVFMgVVNFUiBKV1QtLS0tLS0KCi0tLS0tQkVHSU4gVVNFUiBOS0VZIFNFRUQtLS0tLQpTVUFCUlc0Sjc2RlpCTjVTMlZOR0MzWVdLRlRXR1FVNTI3TzVSQkdPTVRQNkRYRUpDSUZSS0NKSUtVCi0tLS0tLUVORCBVU0VSIE5LRVkgU0VFRC0tLS0tLQo= | base64 -d > user.creds + +# Let's export the file path as environment variables to make further commands re-usable. +sample-nats-box-785f8458d7-wtnfx:~# export NATS_CREDS=/tmp/user.creds + +# Let's create a stream named "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats stream add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --max-msgs-per-subject=-1 --discard old --dupe-window="0s" --replicas 1 +Stream ORDERS was created + +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 0 + Bytes: 0 B + FirstSeq: 0 + LastSeq: 0 + Active Consumers: 0 + + +# Verify that the stream has been created successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +Streams: + + ORDERS + +# Lets add some messages to the stream "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats pub ORDERS.scratch hello +08:55:39 Published 5 bytes to "ORDERS.scratch" + +# Add another message +sample-nats-box-785f8458d7-wtnfx:~# nats pub ORDERS.scratch world +08:56:11 Published 5 bytes to "ORDERS.scratch" + +# Verify that the messages have been published to the stream successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream info ORDERS +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 2 + Bytes: 98 B + FirstSeq: 1 @ 2021-09-03T08:55:39 UTC + LastSeq: 2 @ 2021-09-03T08:56:11 UTC + Active Consumers: 0 + +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +We have successfully deployed a NATS cluster, created a stream and publish some messages into the stream. In the subsequent sections, we are going to backup this sample data using Stash. + +## Prepare for Backup + +In this section, we are going to prepare the necessary resources (i.e. 
connection information, backend information, etc.) before backup. + +### Ensure NATS Addon + +When you install Stash Enterprise version, it will automatically install all the official addons. Make sure that NATS addon has been installed properly using the following command. + +```bash +❯ kubectl get tasks.stash.appscode.com | grep nats +nats-backup-2.6.1 24m +nats-restore-2.6.1 24m +``` + +This addon should be able to take backup of the NATS streams with matching major versions as discussed in [Addon Version Compatibility](/docs/addons/nats/README.md#addon-version-compatibility). + +### Create Secret + + Lets create a secret with JWT auth credentials. Below is the YAML of `Secret` object we are going to create. + +```yaml +apiVersion: v1 +kind: Secret +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats-auth + namespace: demo +data: + creds: LS0tLS1CRUdJTiBOQVRTIFVTRVIgSldULS0tLS0KZXlKMGVYQWlPaUpLVjFRaUxDSmhiR2NpT2lKbFpESTFOVEU1TFc1clpYa2lmUS5leUpxZEdraU9pSklRVXBLVDFJeVQwZFBXRVphVUZOQ1VVUk5OVTlaVnpKYVNVVlRURXcxTjFNMVJGVkZWVmhSVGxRMlNUVkZUMWxFVnpaQklpd2lhV0YwSWpveE5qSTVPRGt6TWpReExDSnBjM01pT2lKQlJFWkRNMWxNUVZVMU5rNHlOa2hIVGpkVVIxQlhSRmhSVGxwVFFsRkZXa3RSV0ZCUVVsQXlORWhMTkRaV1NsbFJXRFExVXpKVlNDSXNJbTVoYldVaU9pSjRJaXdpYzNWaUlqb2lWVUZZVEVnMFdUVlNOazVNUmxwT1RsaENVVTh5V1VaWk56SlNRbE5ETms4MFZFNUpRa0pOUzBOVlRGcExNell6TjFGQ05GTldNa1lpTENKdVlYUnpJanA3SW5CMVlpSTZlMzBzSW5OMVlpSTZlMzBzSW5OMVluTWlPaTB4TENKa1lYUmhJam90TVN3aWNHRjViRzloWkNJNkxURXNJblI1Y0dVaU9pSjFjMlZ5SWl3aWRtVnljMmx2YmlJNk1uMTkuMFFyeW11Mi1HdUVYV3hOaU5MNGRxc1JMdlJ4VGNXQ24zRHN6UTJIbUhHOElEbXhwUG9oeGRGMFU3aUQ5WGdQU2xSMVBOakJ6bXFxMHhFME1lWmRTRHcKLS0tLS0tRU5EIE5BVFMgVVNFUiBKV1QtLS0tLS0KCi0tLS0tQkVHSU4gVVNFUiBOS0VZIFNFRUQtLS0tLQpTVUFCUlc0Sjc2RlpCTjVTMlZOR0MzWVdLRlRXR1FVNTI3TzVSQkdPTVRQNkRYRUpDSUZSS0NKSUtVCi0tLS0tLUVORCBVU0VSIE5LRVkgU0VFRC0tLS0tLQo= +``` + +Let's create the `Secret` we have shown above, +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/authentications/jwt-auth/examples/secret.yaml +secret/sample-nats-auth created +``` + + +### Create AppBinding + +Stash needs to know how to connect with the NATS server. An `AppBinding` exactly provides this information. It holds the Service and Secret information of the NATS server. You have to point to the respective `AppBinding` as a target of backup instead of the NATS server itself. + +Here, is the YAML of the `AppBinding` that we are going to create for the NATS server we have deployed earlier. + +```yaml +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats + namespace: demo +spec: + clientConfig: + service: + name: sample-nats + port: 4222 + scheme: nats + secret: + name: sample-nats-auth + type: nats.io/nats + version: 2.6.1 +``` + +Here, + +- `.spec.clientConfig.service` specifies the Service information to use to connects with the NATS server. +- `.spec.secret` specifies the name of the Secret that holds necessary credentials to access the server. +- `.spec.type` specifies the type of the target. This is particularly helpful in auto-backup where you want to use different path prefixes for different types of target. 
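+
+Before creating the AppBinding, you can optionally confirm that the `creds` key of the Secret decodes to a valid NATS credentials file. This is just a quick sanity check against the Secret we created above:
+
+```bash
+$ kubectl get secret -n demo sample-nats-auth -o jsonpath='{.data.creds}' | base64 -d | head -n 1
+-----BEGIN NATS USER JWT-----
+```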
+ +Let's create the `AppBinding` we have shown above, + +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/authentications/jwt-auth/examples/appbinding.yaml +appbinding.appcatalog.appscode.com/sample-nats created +``` + +### Prepare Backend + +We are going to store our backed up data into a GCS bucket. So, we need to create a Secret with GCS credentials and a `Repository` object with the bucket information. If you want to use a different backend, please read the respective backend configuration doc from [here](/docs/guides/latest/backends/overview.md). + +**Create Storage Secret:** + +At first, let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket, + +```bash +$ echo -n 'changeit' > RESTIC_PASSWORD +$ echo -n '' > GOOGLE_PROJECT_ID +$ cat downloaded-sa-json.key > GOOGLE_SERVICE_ACCOUNT_JSON_KEY +$ kubectl create secret generic -n demo gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +**Create Repository:** + +Now, create a `Repository` object with the information of your desired bucket. Below is the YAML of `Repository` object we are going to create, + +```yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/nats/sample-nats + storageSecretName: gcs-secret +``` + +Let's create the `Repository` we have shown above, + +```bash +$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/jwt-auth/examples/repository.yaml +repository.stash.appscode.com/gcs-repo created +``` + +Now, we are ready to backup our streams into our desired backend. + +### Backup + +To schedule a backup, we have to create a `BackupConfiguration` object targeting the respective `AppBinding` of our NATS server. Then, Stash will create a CronJob to periodically backup the streams. + +#### Create BackupConfiguration + +Below is the YAML for `BackupConfiguration` object that we are going to use to backup the streams of the NATS server we have created earlier, + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: sample-nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +Here, + +- `.spec.schedule` specifies that we want to backup the streams at 5 minutes intervals. +- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to backup NATS streams. +- `.spec.repository.name` specifies the Repository CR name we have created earlier with backend information. +- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted NATS server. +- `spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the dumped data temporarily before uploading it into the cloud bucket. 
+- `.spec.retentionPolicy` specifies a policy indicating how we want to cleanup the old backups. + +Let's create the `BackupConfiguration` object we have shown above, + +```bash +$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/jwt-auth/examples/backupconfiguration.yaml +backupconfiguration.stash.appscode.com/sample-nats-backup created +``` + +#### Verify CronJob + +If everything goes well, Stash will create a CronJob with the schedule specified in `spec.schedule` field of `BackupConfiguration` object. + +Verify that the CronJob has been created using the following command, + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * False 0 14s +``` + +#### Wait for BackupSession + +The `sample-nats-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` object. + +Now, wait for a schedule to appear. Run the following command to watch for `BackupSession` object, + +```bash +❯ kubectl get backupsession -n demo -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +sample-nats-backup-8x8fp BackupConfiguration sample-nats-backup Succeeded 42s 8m28s +``` + +Here, the phase `Succeeded` means that the backup process has been completed successfully. + +#### Verify Backup + +Now, we are going to verify whether the backed up data is present in the backend or not. Once a backup is completed, Stash will update the respective `Repository` object to reflect the backup completion. Check that the repository `gcs-repo` has been updated by the following command, + +```bash +❯ kubectl get repository -n demo +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +gcs-repo true 1.382 KiB 1 9m4s 24m +``` + +Now, if we navigate to the GCS bucket, we will see the backed up data has been stored in `demo/nats/sample-nats` directory as specified by `.spec.backend.gcs.prefix` field of the `Repository` object. + +
+<figure align="center">
+  <img alt="Backup data in GCS Bucket" src="images/sample-nats-backup.png">
+  <figcaption align="center">Fig: Backup data in GCS Bucket</figcaption>
+</figure>
+ + + + +> Note: Stash keeps all the backed up data encrypted. So, data in the backend will not make any sense until they are decrypted. + +## Restore + +If you have followed the previous sections properly, you should have a successful backup of your nats streams. Now, we are going to show how you can restore the streams from the backed up data. + +### Restore Into the Same NATS Cluster + +You can restore your data into the same NATS cluster you have backed up from or into a different NATS cluster in the same cluster or a different cluster. In this section, we are going to show you how to restore in the same NATS cluster which may be necessary when you have accidentally lost any data. + +#### Temporarily Pause Backup + +At first, let's stop taking any further backup of the NATS streams so that no backup runs after we delete the sample data. We are going to pause the `BackupConfiguration` object. Stash will stop taking any further backup when the `BackupConfiguration` is paused. + +Let's pause the `sample-nats-backup` BackupConfiguration, + +```bash +$ kubectl patch backupconfiguration -n demo sample-nats-backup --type="merge" --patch='{"spec": {"paused": true}}' +backupconfiguration.stash.appscode.com/sample-nats-backup patched +``` + +Verify that the `BackupConfiguration` has been paused, + +```bash +❯ kubectl get backupconfiguration -n demo sample-nats-backup +NAME TASK SCHEDULE PAUSED AGE +sample-nats-backup nats-backup-2.6.1 */5 * * * * true 2d18h +``` + +Notice the `PAUSED` column. Value `true` for this field means that the `BackupConfiguration` has been paused. + +Stash will also suspend the respective CronJob. + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * True 0 56s 2d18h +``` + +#### Simulate Disaster + +Now, let's simulate a disaster scenario. Here, we are going to exec into the nats-box pod and delete the sample data we have inserted earlier. + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... +# Let's export the file path of user.creds file as environment variable to make further commands re-usable. +sample-nats-box-785f8458d7-wtnfx:~# export NATS_CREDS=/tmp/user.creds + +# delete the stream "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats stream rm ORDERS -f + +# verify that the stream has been deleted +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +No Streams defined +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +#### Create RestoreSession + +To restore the streams, you have to create a `RestoreSession` object pointing to the `AppBinding` of the targeted NATS server. + +Here, is the YAML of the `RestoreSession` object that we are going to use for restoring the streams of the NATS server. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] +``` + +Here, + +- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to restore NATS streams. 
+- `.spec.repository.name` specifies the Repository object that holds the backend information where our backed up data has been stored. +- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted NATS server. +- `.spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the restored data temporarily before injecting into the NATS server. +- `.spec.rules` specifies that we are restoring data from the latest backup snapshot of the streams. + +Let's create the `RestoreSession` object object we have shown above, + +```bash +$ kubectl apply -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/jwt-auth/examples/restoresession.yaml +restoresession.stash.appscode.com/sample-nats-restore created +``` + +Once, you have created the `RestoreSession` object, Stash will create a restore Job. Run the following command to watch the phase of the `RestoreSession` object, + +```bash +❯ kubectl get restoresession -n demo -w +NAME REPOSITORY PHASE DURATION AGE +sample-nats-restore gcs-repo Succeeded 15s 55s +``` + +The `Succeeded` phase means that the restore process has been completed successfully. + +#### Verify Restored Data + +Now, let's exec into the nats-box pod and verify whether data actual data has been restored or not, + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... +# Let's export the file path of user.creds file as environment variable to make further commands re-usable. +sample-nats-box-785f8458d7-wtnfx:~# export NATS_CREDS=/tmp/user.creds + +# Verify that the stream has been restored successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +Streams: + + ORDERS + +# Verify that the messages have been restored successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream info ORDERS +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 2 + Bytes: 98 B + FirstSeq: 1 @ 2021-09-03T08:55:39 UTC + LastSeq: 2 @ 2021-09-03T08:56:11 UTC + Active Consumers: 0 + +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +Hence, we can see from the above output that the deleted data has been restored successfully from the backup. + +#### Resume Backup + +Since our data has been restored successfully we can now resume our usual backup process. Resume the `BackupConfiguration` using following command, + +```bash +❯ kubectl patch backupconfiguration -n demo sample-nats-backup --type="merge" --patch='{"spec": {"paused": false}}' +backupconfiguration.stash.appscode.com/sample-nats-backup patched +``` + +Verify that the `BackupConfiguration` has been resumed, +```bash +❯ kubectl get backupconfiguration -n demo sample-nats-backup +NAME TASK SCHEDULE PAUSED AGE +sample-nats-backup nats-backup-2.6.1 */5 * * * * false 2d19h +``` + +Here, `false` in the `PAUSED` column means the backup has been resumed successfully. The CronJob also should be resumed now. 
+ +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * False 0 3m24s 4h54m +``` + +Here, `False` in the `SUSPEND` column means the CronJob is no longer suspended and will trigger in the next schedule. + +## Cleanup + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete -n demo backupconfiguration sample-nats-backup +kubectl delete -n demo restoresession sample-nats-restore +kubectl delete -n demo repository gcs-repo +# delete the nats chart +helm delete sample-nats -n demo +``` diff --git a/docs/addons/nats/authentications/nkey-auth/examples/appbinding.yaml b/docs/addons/nats/authentications/nkey-auth/examples/appbinding.yaml new file mode 100644 index 00000000..7e4a9e8c --- /dev/null +++ b/docs/addons/nats/authentications/nkey-auth/examples/appbinding.yaml @@ -0,0 +1,17 @@ +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats + namespace: demo +spec: + clientConfig: + service: + name: sample-nats + port: 4222 + scheme: nats + secret: + name: sample-nats-auth + type: nats.io/nats + version: 2.6.1 diff --git a/docs/addons/nats/authentications/nkey-auth/examples/backupconfiguration.yaml b/docs/addons/nats/authentications/nkey-auth/examples/backupconfiguration.yaml new file mode 100644 index 00000000..70b58a66 --- /dev/null +++ b/docs/addons/nats/authentications/nkey-auth/examples/backupconfiguration.yaml @@ -0,0 +1,29 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: sample-nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/docs/addons/nats/authentications/nkey-auth/examples/repository.yaml b/docs/addons/nats/authentications/nkey-auth/examples/repository.yaml new file mode 100644 index 00000000..dae91419 --- /dev/null +++ b/docs/addons/nats/authentications/nkey-auth/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/nats/sample-nats + storageSecretName: gcs-secret diff --git a/docs/addons/nats/authentications/nkey-auth/examples/restoresession.yaml b/docs/addons/nats/authentications/nkey-auth/examples/restoresession.yaml new file mode 100644 index 00000000..9c5f5242 --- /dev/null +++ b/docs/addons/nats/authentications/nkey-auth/examples/restoresession.yaml @@ -0,0 +1,26 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] diff --git 
a/docs/addons/nats/authentications/nkey-auth/examples/secret.yaml b/docs/addons/nats/authentications/nkey-auth/examples/secret.yaml new file mode 100644 index 00000000..71bd6e09 --- /dev/null +++ b/docs/addons/nats/authentications/nkey-auth/examples/secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats-auth + namespace: demo +data: + nkey: U1VBRDJRWlBJQU9aRTdTQlZHUjJQS09YVkEyTDYzVUQ1UEVWNkVVUTZPTEdUS0ZJV0o0VTNaQ1NDQQpVQVdHR1ZFSFhJRU9XWVZCTjdSTzdJSUtJWEhJT0s2SldXVURKT1dJVVo2TDNYTUlXTTVJRkdQRAo= diff --git a/docs/addons/nats/authentications/nkey-auth/images/sample-nats-backup.png b/docs/addons/nats/authentications/nkey-auth/images/sample-nats-backup.png new file mode 100644 index 00000000..b3f603de Binary files /dev/null and b/docs/addons/nats/authentications/nkey-auth/images/sample-nats-backup.png differ diff --git a/docs/addons/nats/authentications/nkey-auth/index.md b/docs/addons/nats/authentications/nkey-auth/index.md new file mode 100644 index 00000000..34213e44 --- /dev/null +++ b/docs/addons/nats/authentications/nkey-auth/index.md @@ -0,0 +1,649 @@ +--- +title: NKey authentication +description: Backup NATS with Nkey authentication using Stash +menu: + docs_{{ .version }}: + identifier: stash-nats-nkey-auth + name: Nkey Authentication + parent: stash-nats-auth + weight: 20 +product_name: stash +menu_name: docs_{{ .version }} +section_menu_id: stash-addons +--- + +# Backup NATS with Nkey Authentication using Stash + +Stash `{{< param "info.version" >}}` supports backup and restoration of NATS streams. This guide will show you how you can backup & restore a NATS server with nkey authentication using Stash. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. +- Install Stash Enterprise in your cluster following the steps [here](/docs/setup/install/enterprise.md). +- If you are not familiar with how Stash backup and restore NATS streams, please check the following guide [here](/docs/addons/nats/overview/index.md). + +You have to be familiar with following custom resources: + +- [AppBinding](/docs/concepts/crds/appbinding.md) +- [Function](/docs/concepts/crds/function.md) +- [Task](/docs/concepts/crds/task.md) +- [BackupConfiguration](/docs/concepts/crds/backupconfiguration.md) +- [BackupSession](/docs/concepts/crds/backupsession.md) +- [RestoreSession](/docs/concepts/crds/restoresession.md) + +To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created already. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored [here](https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/authentications/nkey-auth/examples). + +## Prepare NATS + +In this section, we are going to deploy a NATS cluster with nkey authentication enabled. Then, we are going to create a stream and publish some messages into it. + +### Deploy NATS Cluster + +At first, let's deploy a NATS cluster. Here, we are going to use [NATS]( https://github.com/nats-io/k8s/tree/main/helm/charts/nats) chart from [nats.io](https://nats.io/). 
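+
+If you want to generate your own nkey pair rather than reuse the sample key shown in the next command, the upstream NATS `nk` tool can create one; this is an optional aside based on the general NATS tooling, not a step this guide depends on. The printed public key is the string starting with `U` (used in the chart values below), and the seed is the string starting with `S` (kept by the client for authentication).
+
+```bash
+# Generate a new user nkey pair; prints the seed (S...) and, with -pubout, the public key (U...)
+nk -gen user -pubout
+```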
+ +Let's deploy a NATS cluster named `sample-nats` using Helm as below, + +```bash +# Add nats chart registry +$ helm repo add nats https://nats-io.github.io/k8s/helm/charts/ +# Update helm registries +$ helm repo update +# Install nats/nats chart into demo namespace +$ helm install sample-nats nats/nats -n demo \ +--set nats.jetstream.enabled=true \ +--set nats.jetstream.fileStorage.enabled=true \ +--set cluster.enabled=true \ +--set cluster.replicas=3 \ +--set auth.enabled=true \ +--set auth.nkeys.users[0].nkey="UAWGGVEHXIEOWYVBN7RO7IIKIXHIOK6JWWUDJOWIUZ6L3XMIWM5IFGPD" + +``` + +This chart will create the necessary StatefulSet, Service, PVCs etc. for the NATS cluster. You can easily view all the resources created by chart using [ketall](https://github.com/corneliusweig/ketall) `kubectl` plugin as below, + +```bash +❯ kubectl get-all -n demo -l app.kubernetes.io/instance=sample-nats +NAME NAMESPACE AGE +configmap/sample-nats-config demo 11m +endpoints/sample-nats demo 11m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-0 demo 11m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-1 demo 10m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-2 demo 10m +pod/sample-nats-0 demo 11m +pod/sample-nats-1 demo 10m +pod/sample-nats-2 demo 10m +service/sample-nats demo 11m +controllerrevision.apps/sample-nats-775468b94f demo 11m +statefulset.apps/sample-nats demo 11m +endpointslice.discovery.k8s.io/sample-nats-7n7v6 demo 11m +``` + +Now, wait for the NATS server pods `sample-nats-0`, `sample-nats-1`, `sample-nats-2` to go into `Running` state, + +```bash +❯ kubectl get pod -n demo -l app.kubernetes.io/instance=sample-nats +NAME READY STATUS RESTARTS AGE +sample-nats-0 3/3 Running 0 9m58s +sample-nats-1 3/3 Running 0 9m35s +sample-nats-2 3/3 Running 0 9m12s +``` + +Once the pods are in `Running` state, verify that the NATS server is ready to accept the connections. + +```bash +❯ kubectl logs -n demo sample-nats-0 -c nats +[7] 2021/09/06 08:33:53.111508 [INF] Starting nats-server +[7] 2021/09/06 08:33:53.111560 [INF] Version: 2.6.1 +... +[7] 2021/09/06 08:33:53.116004 [INF] Server is ready +``` + +From the above log, we can see the NATS server is ready to accept connections. + +### Insert Sample Data +The above Helm chart also deploy a pod with nats-box image which can be used to interact with the NATS server. Let's verify the nats-box pod has been created. + +```bash +❯ kubectl get pod -n demo -l app=sample-nats-box +NAME READY STATUS RESTARTS AGE +sample-nats-box-785f8458d7-wtnfx 1/1 Running 0 7m20s +``` + +Let's exec into the nats-box pod, + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... +# Let's create the nkey file for our user +sample-nats-box-785f8458d7-wtnfx:~# echo U1VBRDJRWlBJQU9aRTdTQlZHUjJQS09YVkEyTDYzVUQ1UEVWNkVVUTZPTEdUS0ZJV0o0VTNaQ1NDQQpVQVdHR1ZFSFhJRU9XWVZCTjdSTzdJSUtJWEhJT0s2SldXVURKT1dJVVo2TDNYTUlXTTVJRkdQRA== | base64 -d > user.nk + +# Let's export the file path as environment variables to make further commands re-usable. 
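+# Note: the export below assumes the decoded nkey file is at /tmp/user.nk; if the redirect above wrote it somewhere else (e.g. ./user.nk), point NATS_NKEY at that path instead.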
+sample-nats-box-785f8458d7-wtnfx:~# export NATS_NKEY=/tmp/user.nk + +# Let's create a stream named "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats stream add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --max-msgs-per-subject=-1 --discard old --dupe-window="0s" --replicas 1 +Stream ORDERS was created + +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 0 + Bytes: 0 B + FirstSeq: 0 + LastSeq: 0 + Active Consumers: 0 + + +# Verify that the stream has been created successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +Streams: + + ORDERS + +# Lets add some messages to the stream "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats pub ORDERS.scratch hello +08:55:39 Published 5 bytes to "ORDERS.scratch" + +# Add another message +sample-nats-box-785f8458d7-wtnfx:~# nats pub ORDERS.scratch world +08:56:11 Published 5 bytes to "ORDERS.scratch" + +# Verify that the messages have been published to the stream successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream info ORDERS +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 2 + Bytes: 98 B + FirstSeq: 1 @ 2021-09-03T08:55:39 UTC + LastSeq: 2 @ 2021-09-03T08:56:11 UTC + Active Consumers: 0 + +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +We have successfully deployed a NATS cluster, created a stream and publish some messages into the stream. In the subsequent sections, we are going to backup this sample data using Stash. + +## Prepare for Backup + +In this section, we are going to prepare the necessary resources (i.e. connection information, backend information, etc.) before backup. + +### Ensure NATS Addon + +When you install Stash Enterprise version, it will automatically install all the official addons. Make sure that NATS addon has been installed properly using the following command. + +```bash +❯ kubectl get tasks.stash.appscode.com | grep nats +nats-backup-2.6.1 24m +nats-restore-2.6.1 24m +``` + +This addon should be able to take backup of the NATS streams with matching major versions as discussed in [Addon Version Compatibility](/docs/addons/nats/README.md#addon-version-compatibility). + +### Create Secret + + Lets create a secret with nkey credentials. Below is the YAML of `Secret` object we are going to create. 
+ +```yaml +apiVersion: v1 +kind: Secret +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats-auth + namespace: demo +data: + nkey: U1VBRDJRWlBJQU9aRTdTQlZHUjJQS09YVkEyTDYzVUQ1UEVWNkVVUTZPTEdUS0ZJV0o0VTNaQ1NDQQpVQVdHR1ZFSFhJRU9XWVZCTjdSTzdJSUtJWEhJT0s2SldXVURKT1dJVVo2TDNYTUlXTTVJRkdQRAo= +``` + +Let's create the `Secret` we have shown above, +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/authentications/nkey-auth/examples/secret.yaml +secret/sample-nats-auth created +``` + + +### Create AppBinding + +Stash needs to know how to connect with the NATS server. An `AppBinding` exactly provides this information. It holds the Service and Secret information of the NATS server. You have to point to the respective `AppBinding` as a target of backup instead of the NATS server itself. + +Here, is the YAML of the `AppBinding` that we are going to create for the NATS server we have deployed earlier. + +```yaml +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats + namespace: demo +spec: + clientConfig: + service: + name: sample-nats + port: 4222 + scheme: nats + secret: + name: sample-nats-auth + type: nats.io/nats + version: 2.6.1 +``` + +Here, + +- `.spec.clientConfig.service` specifies the Service information to use to connects with the NATS server. +- `.spec.secret` specifies the name of the Secret that holds necessary credentials to access the server. +- `.spec.type` specifies the type of the target. This is particularly helpful in auto-backup where you want to use different path prefixes for different types of target. + +Let's create the `AppBinding` we have shown above, + +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/authentications/nkey-auth/examples/appbinding.yaml +appbinding.appcatalog.appscode.com/sample-nats created +``` + +### Prepare Backend + +We are going to store our backed up data into a GCS bucket. So, we need to create a Secret with GCS credentials and a `Repository` object with the bucket information. If you want to use a different backend, please read the respective backend configuration doc from [here](/docs/guides/latest/backends/overview.md). + +**Create Storage Secret:** + +At first, let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket, + +```bash +$ echo -n 'changeit' > RESTIC_PASSWORD +$ echo -n '' > GOOGLE_PROJECT_ID +$ cat downloaded-sa-json.key > GOOGLE_SERVICE_ACCOUNT_JSON_KEY +$ kubectl create secret generic -n demo gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +**Create Repository:** + +Now, create a `Repository` object with the information of your desired bucket. 
Below is the YAML of `Repository` object we are going to create, + +```yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/nats/sample-nats + storageSecretName: gcs-secret +``` + +Let's create the `Repository` we have shown above, + +```bash +$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/nkey-auth/examples/repository.yaml +repository.stash.appscode.com/gcs-repo created +``` + +Now, we are ready to backup our streams into our desired backend. + +### Backup + +To schedule a backup, we have to create a `BackupConfiguration` object targeting the respective `AppBinding` of our NATS server. Then, Stash will create a CronJob to periodically backup the streams. + +#### Create BackupConfiguration + +Below is the YAML for `BackupConfiguration` object that we are going to use to backup the streams of the NATS server we have created earlier, + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: sample-nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +Here, + +- `.spec.schedule` specifies that we want to backup the streams at 5 minutes intervals. +- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to backup NATS streams. +- `.spec.repository.name` specifies the Repository CR name we have created earlier with backend information. +- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted NATS server. +- `spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the dumped data temporarily before uploading it into the cloud bucket. +- `.spec.retentionPolicy` specifies a policy indicating how we want to cleanup the old backups. + +Let's create the `BackupConfiguration` object we have shown above, + +```bash +$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/nkey-auth/examples/backupconfiguration.yaml +backupconfiguration.stash.appscode.com/sample-nats-backup created +``` + +#### Verify CronJob + +If everything goes well, Stash will create a CronJob with the schedule specified in `spec.schedule` field of `BackupConfiguration` object. + +Verify that the CronJob has been created using the following command, + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * False 0 14s +``` + +#### Wait for BackupSession + +The `sample-nats-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` object. + +Now, wait for a schedule to appear. 
Run the following command to watch for `BackupSession` object, + +```bash +❯ kubectl get backupsession -n demo -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +sample-nats-backup-8x8fp BackupConfiguration sample-nats-backup Succeeded 42s 8m28s +``` + +Here, the phase `Succeeded` means that the backup process has been completed successfully. + +#### Verify Backup + +Now, we are going to verify whether the backed up data is present in the backend or not. Once a backup is completed, Stash will update the respective `Repository` object to reflect the backup completion. Check that the repository `gcs-repo` has been updated by the following command, + +```bash +❯ kubectl get repository -n demo +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +gcs-repo true 1.382 KiB 1 9m4s 24m +``` + +Now, if we navigate to the GCS bucket, we will see the backed up data has been stored in `demo/nats/sample-nats` directory as specified by `.spec.backend.gcs.prefix` field of the `Repository` object. + +
+![Backup data in GCS Bucket](images/sample-nats-backup.png)
+
+Fig: Backup data in GCS Bucket
+ + + + +> Note: Stash keeps all the backed up data encrypted. So, data in the backend will not make any sense until they are decrypted. + +## Restore + +If you have followed the previous sections properly, you should have a successful backup of your nats streams. Now, we are going to show how you can restore the streams from the backed up data. + +### Restore Into the Same NATS Cluster + +You can restore your data into the same NATS cluster you have backed up from or into a different NATS cluster in the same cluster or a different cluster. In this section, we are going to show you how to restore in the same NATS cluster which may be necessary when you have accidentally lost any data. + +#### Temporarily Pause Backup + +At first, let's stop taking any further backup of the NATS streams so that no backup runs after we delete the sample data. We are going to pause the `BackupConfiguration` object. Stash will stop taking any further backup when the `BackupConfiguration` is paused. + +Let's pause the `sample-nats-backup` BackupConfiguration, + +```bash +$ kubectl patch backupconfiguration -n demo sample-nats-backup --type="merge" --patch='{"spec": {"paused": true}}' +backupconfiguration.stash.appscode.com/sample-nats-backup patched +``` + +Verify that the `BackupConfiguration` has been paused, + +```bash +❯ kubectl get backupconfiguration -n demo sample-nats-backup +NAME TASK SCHEDULE PAUSED AGE +sample-nats-backup nats-backup-2.6.1 */5 * * * * true 2d18h +``` + +Notice the `PAUSED` column. Value `true` for this field means that the `BackupConfiguration` has been paused. + +Stash will also suspend the respective CronJob. + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * True 0 56s 2d18h +``` + +#### Simulate Disaster + +Now, let's simulate a disaster scenario. Here, we are going to exec into the nats-box pod and delete the sample data we have inserted earlier. + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... +# Let's export the file path of user.nk file as environment variable to make further commands re-usable. +sample-nats-box-785f8458d7-wtnfx:~# export NATS_NKEY=/tmp/user.nk + +# delete the stream "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats stream rm ORDERS -f + +# verify that the stream has been deleted +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +No Streams defined +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +#### Create RestoreSession + +To restore the streams, you have to create a `RestoreSession` object pointing to the `AppBinding` of the targeted NATS server. + +Here, is the YAML of the `RestoreSession` object that we are going to use for restoring the streams of the NATS server. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] +``` + +Here, + +- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to restore NATS streams. 
+- `.spec.repository.name` specifies the Repository object that holds the backend information where our backed up data has been stored. +- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted NATS server. +- `.spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the restored data temporarily before injecting into the NATS server. +- `.spec.rules` specifies that we are restoring data from the latest backup snapshot of the streams. + +Let's create the `RestoreSession` object object we have shown above, + +```bash +$ kubectl apply -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/nkey-auth/examples/restoresession.yaml +restoresession.stash.appscode.com/sample-nats-restore created +``` + +Once, you have created the `RestoreSession` object, Stash will create a restore Job. Run the following command to watch the phase of the `RestoreSession` object, + +```bash +❯ kubectl get restoresession -n demo -w +NAME REPOSITORY PHASE DURATION AGE +sample-nats-restore gcs-repo Succeeded 15s 55s +``` + +The `Succeeded` phase means that the restore process has been completed successfully. + +#### Verify Restored Data + +Now, let's exec into the nats-box pod and verify whether data actual data has been restored or not, + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... +# Let's export the file path of user.nk file as environment variable to make further commands re-usable. +sample-nats-box-785f8458d7-wtnfx:~# export NATS_NKEY=/tmp/user.nk + +# Verify that the stream has been restored successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +Streams: + + ORDERS + +# Verify that the messages have been restored successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream info ORDERS +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 2 + Bytes: 98 B + FirstSeq: 1 @ 2021-09-03T08:55:39 UTC + LastSeq: 2 @ 2021-09-03T08:56:11 UTC + Active Consumers: 0 + +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +Hence, we can see from the above output that the deleted data has been restored successfully from the backup. + +#### Resume Backup + +Since our data has been restored successfully we can now resume our usual backup process. Resume the `BackupConfiguration` using following command, + +```bash +❯ kubectl patch backupconfiguration -n demo sample-nats-backup --type="merge" --patch='{"spec": {"paused": false}}' +backupconfiguration.stash.appscode.com/sample-nats-backup patched +``` + +Verify that the `BackupConfiguration` has been resumed, +```bash +❯ kubectl get backupconfiguration -n demo sample-nats-backup +NAME TASK SCHEDULE PAUSED AGE +sample-nats-backup nats-backup-2.6.1 */5 * * * * false 2d19h +``` + +Here, `false` in the `PAUSED` column means the backup has been resumed successfully. The CronJob also should be resumed now. 
+ +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * False 0 3m24s 4h54m +``` + +Here, `False` in the `SUSPEND` column means the CronJob is no longer suspended and will trigger in the next schedule. + +## Cleanup + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete -n demo backupconfiguration sample-nats-backup +kubectl delete -n demo restoresession sample-nats-restore +kubectl delete -n demo repository gcs-repo +# delete the nats chart +helm delete sample-nats -n demo +``` diff --git a/docs/addons/nats/authentications/token-auth/examples/appbinding.yaml b/docs/addons/nats/authentications/token-auth/examples/appbinding.yaml new file mode 100644 index 00000000..7e4a9e8c --- /dev/null +++ b/docs/addons/nats/authentications/token-auth/examples/appbinding.yaml @@ -0,0 +1,17 @@ +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats + namespace: demo +spec: + clientConfig: + service: + name: sample-nats + port: 4222 + scheme: nats + secret: + name: sample-nats-auth + type: nats.io/nats + version: 2.6.1 diff --git a/docs/addons/nats/authentications/token-auth/examples/backupconfiguration.yaml b/docs/addons/nats/authentications/token-auth/examples/backupconfiguration.yaml new file mode 100644 index 00000000..70b58a66 --- /dev/null +++ b/docs/addons/nats/authentications/token-auth/examples/backupconfiguration.yaml @@ -0,0 +1,29 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: sample-nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/docs/addons/nats/authentications/token-auth/examples/repository.yaml b/docs/addons/nats/authentications/token-auth/examples/repository.yaml new file mode 100644 index 00000000..dae91419 --- /dev/null +++ b/docs/addons/nats/authentications/token-auth/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/nats/sample-nats + storageSecretName: gcs-secret diff --git a/docs/addons/nats/authentications/token-auth/examples/restoresession.yaml b/docs/addons/nats/authentications/token-auth/examples/restoresession.yaml new file mode 100644 index 00000000..9c5f5242 --- /dev/null +++ b/docs/addons/nats/authentications/token-auth/examples/restoresession.yaml @@ -0,0 +1,26 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] diff --git 
a/docs/addons/nats/authentications/token-auth/examples/secret.yaml b/docs/addons/nats/authentications/token-auth/examples/secret.yaml new file mode 100644 index 00000000..37266613 --- /dev/null +++ b/docs/addons/nats/authentications/token-auth/examples/secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats-auth + namespace: demo +data: + token: c2VjcmV0 diff --git a/docs/addons/nats/authentications/token-auth/images/sample-nats-backup.png b/docs/addons/nats/authentications/token-auth/images/sample-nats-backup.png new file mode 100644 index 00000000..b3f603de Binary files /dev/null and b/docs/addons/nats/authentications/token-auth/images/sample-nats-backup.png differ diff --git a/docs/addons/nats/authentications/token-auth/index.md b/docs/addons/nats/authentications/token-auth/index.md new file mode 100644 index 00000000..8643756f --- /dev/null +++ b/docs/addons/nats/authentications/token-auth/index.md @@ -0,0 +1,645 @@ +--- +title: NATS with Token authentication +description: Backup NATS with Token authetication using Stash +menu: + docs_{{ .version }}: + identifier: stash-nats-token-auth + name: Token Authentication + parent: stash-nats-auth + weight: 15 +product_name: stash +menu_name: docs_{{ .version }} +section_menu_id: stash-addons +--- + +# Backup NATS with Token Authentication using Stash + +Stash `{{< param "info.version" >}}` supports backup and restoration of NATS streams. This guide will show you how you can backup & restore a NATS server with token authentication using Stash. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. +- Install Stash Enterprise in your cluster following the steps [here](/docs/setup/install/enterprise.md). +- If you are not familiar with how Stash backup and restore NATS streams, please check the following guide [here](/docs/addons/nats/overview/index.md). + +You have to be familiar with following custom resources: + +- [AppBinding](/docs/concepts/crds/appbinding.md) +- [Function](/docs/concepts/crds/function.md) +- [Task](/docs/concepts/crds/task.md) +- [BackupConfiguration](/docs/concepts/crds/backupconfiguration.md) +- [BackupSession](/docs/concepts/crds/backupsession.md) +- [RestoreSession](/docs/concepts/crds/restoresession.md) + +To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created already. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored [here](https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/authentications/token-auth/examples). + +## Prepare NATS + +In this section, we are going to deploy a NATS cluster with token authentication enabled. Then, we are going to create a stream and publish some messages into it. + +### Deploy NATS Cluster + +At first, let's deploy a NATS cluster. Here, we are going to use [NATS](https://github.com/nats-io/k8s/tree/main/helm/charts/nats ) chart from [nats.io](https://nats.io/). 
+ +Let's deploy a NATS cluster named `sample-nats` using Helm as below, + +```bash +# Add nats chart registry +$ helm repo add nats https://nats-io.github.io/k8s/helm/charts/ +# Update helm registries +$ helm repo update +# Install nats/nats chart into demo namespace +$ helm install sample-nats nats/nats -n demo \ +--set nats.jetstream.enabled=true \ +--set nats.jetstream.fileStorage.enabled=true \ +--set cluster.enabled=true \ +--set cluster.replicas=3 \ +--set auth.enabled=true \ +--set auth.token="secret" +``` + +This chart will create the necessary StatefulSet, Service, PVCs etc. for the NATS cluster. You can easily view all the resources created by chart using [ketall](https://github.com/corneliusweig/ketall) `kubectl` plugin as below, + +```bash +❯ kubectl get-all -n demo -l app.kubernetes.io/instance=sample-nats +NAME NAMESPACE AGE +configmap/sample-nats-config demo 11m +endpoints/sample-nats demo 11m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-0 demo 11m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-1 demo 10m +persistentvolumeclaim/sample-nats-js-pvc-sample-nats-2 demo 10m +pod/sample-nats-0 demo 11m +pod/sample-nats-1 demo 10m +pod/sample-nats-2 demo 10m +service/sample-nats demo 11m +controllerrevision.apps/sample-nats-775468b94f demo 11m +statefulset.apps/sample-nats demo 11m +endpointslice.discovery.k8s.io/sample-nats-7n7v6 demo 11m +``` + +Now, wait for the NATS server pods `sample-nats-0`, `sample-nats-1`, `sample-nats-2` to go into `Running` state, + +```bash +❯ kubectl get pod -n demo -l app.kubernetes.io/instance=sample-nats +NAME READY STATUS RESTARTS AGE +sample-nats-0 3/3 Running 0 9m58s +sample-nats-1 3/3 Running 0 9m35s +sample-nats-2 3/3 Running 0 9m12s +``` + +Once the pods are in `Running` state, verify that the NATS server is ready to accept the connections. + +```bash +❯ kubectl logs -n demo sample-nats-0 -c nats +[7] 2021/09/06 08:33:53.111508 [INF] Starting nats-server +[7] 2021/09/06 08:33:53.111560 [INF] Version: 2.6.1 +... +[7] 2021/09/06 08:33:53.116004 [INF] Server is ready +``` + +From the above log, we can see the NATS server is ready to accept connections. + +### Insert Sample Data +The above Helm chart also deploy a pod with nats-box image which can be used to interact with the NATS server. Let's verify the nats-box pod has been created. + +```bash +❯ kubectl get pod -n demo -l app=sample-nats-box +NAME READY STATUS RESTARTS AGE +sample-nats-box-785f8458d7-wtnfx 1/1 Running 0 7m20s +``` + +Let's exec into the nats-box pod, + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... +# Let's export the token as environment variables to make further commands re-usable. 
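+# Note: the value exported below is the shared token that was set via --set auth.token during the Helm install above.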
+sample-nats-box-785f8458d7-wtnfx:~# export NATS_USER=secret + +# Let's create a stream named "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats stream add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --max-msgs-per-subject=-1 --discard old --dupe-window="0s" --replicas 1 +Stream ORDERS was created + +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: truee + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 0 + Bytes: 0 B + FirstSeq: 0 + LastSeq: 0 + Active Consumers: 0 + + +# Verify that the stream has been created successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +Streams: + + ORDERS + +# Lets add some messages to the stream "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats pub ORDERS.scratch hello +08:55:39 Published 5 bytes to "ORDERS.scratch" + +# Add another message +sample-nats-box-785f8458d7-wtnfx:~# nats pub ORDERS.scratch world +08:56:11 Published 5 bytes to "ORDERS.scratch" + +# Verify that the messages have been published to the stream successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream info ORDERS +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 2 + Bytes: 98 B + FirstSeq: 1 @ 2021-09-03T08:55:39 UTC + LastSeq: 2 @ 2021-09-03T08:56:11 UTC + Active Consumers: 0 + +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +We have successfully deployed a NATS cluster, created a stream and publish some messages into the stream. In the subsequent sections, we are going to backup this sample data using Stash. + +## Prepare for Backup + +In this section, we are going to prepare the necessary resources (i.e. connection information, backend information, etc.) before backup. + +### Ensure NATS Addon + +When you install Stash Enterprise version, it will automatically install all the official addons. Make sure that NATS addon has been installed properly using the following command. + +```bash +❯ kubectl get tasks.stash.appscode.com | grep nats +nats-backup-2.6.1 24m +nats-restore-2.6.1 24m +``` + +This addon should be able to take backup of the NATS streams with matching major versions as discussed in [Addon Version Compatibility](/docs/addons/nats/README.md#addon-version-compatibility). + +### Create Secret + + Lets create a secret with token auth credentials. Below is the YAML of `Secret` object we are going to create. 
+ +```yaml +apiVersion: v1 +kind: Secret +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats-auth + namespace: demo +data: + token: c2VjcmV0 +``` + +Let's create the `Secret` we have shown above, +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/authentications/token-auth/examples/secret.yaml +secret/sample-nats-auth created +``` + + +### Create AppBinding + +Stash needs to know how to connect with the NATS server. An `AppBinding` exactly provides this information. It holds the Service and Secret information of the NATS server. You have to point to the respective `AppBinding` as a target of backup instead of the NATS server itself. + +Here, is the YAML of the `AppBinding` that we are going to create for the NATS server we have deployed earlier. + +```yaml +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats + namespace: demo +spec: + clientConfig: + service: + name: sample-nats + port: 4222 + scheme: nats + secret: + name: sample-nats-auth + type: nats.io/nats + version: 2.6.1 +``` + +Here, + +- `.spec.clientConfig.service` specifies the Service information to use to connects with the NATS server. +- `.spec.secret` specifies the name of the Secret that holds necessary credentials to access the server. +- `.spec.type` specifies the type of the target. This is particularly helpful in auto-backup where you want to use different path prefixes for different types of target. + +Let's create the `AppBinding` we have shown above, + +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/authentications/token-auth/examples/appbinding.yaml +appbinding.appcatalog.appscode.com/sample-nats created +``` + +### Prepare Backend + +We are going to store our backed up data into a GCS bucket. So, we need to create a Secret with GCS credentials and a `Repository` object with the bucket information. If you want to use a different backend, please read the respective backend configuration doc from [here](/docs/guides/latest/backends/overview.md). + +**Create Storage Secret:** + +At first, let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket, + +```bash +$ echo -n 'changeit' > RESTIC_PASSWORD +$ echo -n '' > GOOGLE_PROJECT_ID +$ cat downloaded-sa-json.key > GOOGLE_SERVICE_ACCOUNT_JSON_KEY +$ kubectl create secret generic -n demo gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +**Create Repository:** + +Now, create a `Repository` object with the information of your desired bucket. Below is the YAML of `Repository` object we are going to create, + +```yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/nats/sample-nats + storageSecretName: gcs-secret +``` + +Let's create the `Repository` we have shown above, + +```bash +$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/token-auth/examples/repository.yaml +repository.stash.appscode.com/gcs-repo created +``` + +Now, we are ready to backup our streams into our desired backend. 
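+
+Before moving on, you can optionally confirm that the `Repository` object exists in the `demo` namespace. The command below is a simple sanity check; fields such as the size and snapshot count typically stay empty until the first backup completes.
+
+```bash
+# Optional check: make sure the Repository object has been created
+kubectl get repository -n demo gcs-repo
+```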
+ +### Backup + +To schedule a backup, we have to create a `BackupConfiguration` object targeting the respective `AppBinding` of our NATS server. Then, Stash will create a CronJob to periodically backup the streams. + +#### Create BackupConfiguration + +Below is the YAML for `BackupConfiguration` object that we are going to use to backup the streams of the NATS server we have created earlier, + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: sample-nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +Here, + +- `.spec.schedule` specifies that we want to backup the streams at 5 minutes intervals. +- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to backup NATS streams. +- `.spec.repository.name` specifies the Repository CR name we have created earlier with backend information. +- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted NATS server. +- `spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the dumped data temporarily before uploading it into the cloud bucket. +- `.spec.retentionPolicy` specifies a policy indicating how we want to cleanup the old backups. + +Let's create the `BackupConfiguration` object we have shown above, + +```bash +$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/token-auth/examples/backupconfiguration.yaml +backupconfiguration.stash.appscode.com/sample-nats-backup created +``` + +#### Verify CronJob + +If everything goes well, Stash will create a CronJob with the schedule specified in `spec.schedule` field of `BackupConfiguration` object. + +Verify that the CronJob has been created using the following command, + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * False 0 14s +``` + +#### Wait for BackupSession + +The `sample-nats-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` object. + +Now, wait for a schedule to appear. Run the following command to watch for `BackupSession` object, + +```bash +❯ kubectl get backupsession -n demo -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +sample-nats-backup-8x8fp BackupConfiguration sample-nats-backup Succeeded 42s 8m28s +``` + +Here, the phase `Succeeded` means that the backup process has been completed successfully. + +#### Verify Backup + +Now, we are going to verify whether the backed up data is present in the backend or not. Once a backup is completed, Stash will update the respective `Repository` object to reflect the backup completion. 
Check that the repository `gcs-repo` has been updated by the following command, + +```bash +❯ kubectl get repository -n demo +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +gcs-repo true 1.382 KiB 1 9m4s 24m +``` + +Now, if we navigate to the GCS bucket, we will see the backed up data has been stored in `demo/nats/sample-nats` directory as specified by `.spec.backend.gcs.prefix` field of the `Repository` object. + +
+![Backup data in GCS Bucket](images/sample-nats-backup.png)
+
+Fig: Backup data in GCS Bucket
+ + + + +> Note: Stash keeps all the backed up data encrypted. So, data in the backend will not make any sense until they are decrypted. + +## Restore + +If you have followed the previous sections properly, you should have a successful backup of your nats streams. Now, we are going to show how you can restore the streams from the backed up data. + +### Restore Into the Same NATS Cluster + +You can restore your data into the same NATS cluster you have backed up from or into a different NATS cluster in the same cluster or a different cluster. In this section, we are going to show you how to restore in the same NATS cluster which may be necessary when you have accidentally lost any data. + +#### Temporarily Pause Backup + +At first, let's stop taking any further backup of the NATS streams so that no backup runs after we delete the sample data. We are going to pause the `BackupConfiguration` object. Stash will stop taking any further backup when the `BackupConfiguration` is paused. + +Let's pause the `sample-nats-backup` BackupConfiguration, + +```bash +$ kubectl patch backupconfiguration -n demo sample-nats-backup --type="merge" --patch='{"spec": {"paused": true}}' +backupconfiguration.stash.appscode.com/sample-nats-backup patched +``` + +Verify that the `BackupConfiguration` has been paused, + +```bash +❯ kubectl get backupconfiguration -n demo sample-nats-backup +NAME TASK SCHEDULE PAUSED AGE +sample-nats-backup nats-backup-2.6.1 */5 * * * * true 2d18h +``` + +Notice the `PAUSED` column. Value `true` for this field means that the `BackupConfiguration` has been paused. + +Stash will also suspend the respective CronJob. + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * True 0 56s 2d18h +``` + +#### Simulate Disaster + +Now, let's simulate a disaster scenario. Here, we are going to exec into the nats-box pod and delete the sample data we have inserted earlier. + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... +# Let's export the token as environment variables to make further commands re-usable. +sample-nats-box-785f8458d7-wtnfx:~# export NATS_USER=secret + +# delete the stream "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats stream rm ORDERS -f + +# verify that the stream has been deleted +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +No Streams defined +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +#### Create RestoreSession + +To restore the streams, you have to create a `RestoreSession` object pointing to the `AppBinding` of the targeted NATS server. + +Here, is the YAML of the `RestoreSession` object that we are going to use for restoring the streams of the NATS server. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] +``` + +Here, + +- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to restore NATS streams. 
+- `.spec.repository.name` specifies the Repository object that holds the backend information where our backed up data has been stored. +- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted NATS server. +- `.spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the restored data temporarily before injecting into the NATS server. +- `.spec.rules` specifies that we are restoring data from the latest backup snapshot of the streams. + +Let's create the `RestoreSession` object object we have shown above, + +```bash +$ kubectl apply -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/authentications/token-auth/examples/restoresession.yaml +restoresession.stash.appscode.com/sample-nats-restore created +``` + +Once, you have created the `RestoreSession` object, Stash will create a restore Job. Run the following command to watch the phase of the `RestoreSession` object, + +```bash +❯ kubectl get restoresession -n demo -w +NAME REPOSITORY PHASE DURATION AGE +sample-nats-restore gcs-repo Succeeded 15s 55s +``` + +The `Succeeded` phase means that the restore process has been completed successfully. + +#### Verify Restored Data + +Now, let's exec into the nats-box pod and verify whether data actual data has been restored or not, + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... +# Let's export the token as environment variables to make further commands re-usable. +sample-nats-box-785f8458d7-wtnfx:~# export NATS_USER=secret + +# Verify that the stream has been restored successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +Streams: + + ORDERS + +# Verify that the messages have been restored successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream info ORDERS +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 2 + Bytes: 98 B + FirstSeq: 1 @ 2021-09-03T08:55:39 UTC + LastSeq: 2 @ 2021-09-03T08:56:11 UTC + Active Consumers: 0 + +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +Hence, we can see from the above output that the deleted data has been restored successfully from the backup. + +#### Resume Backup + +Since our data has been restored successfully we can now resume our usual backup process. Resume the `BackupConfiguration` using following command, + +```bash +❯ kubectl patch backupconfiguration -n demo sample-nats-backup --type="merge" --patch='{"spec": {"paused": false}}' +backupconfiguration.stash.appscode.com/sample-nats-backup patched +``` + +Verify that the `BackupConfiguration` has been resumed, +```bash +❯ kubectl get backupconfiguration -n demo sample-nats-backup +NAME TASK SCHEDULE PAUSED AGE +sample-nats-backup nats-backup-2.6.1 */5 * * * * false 2d19h +``` + +Here, `false` in the `PAUSED` column means the backup has been resumed successfully. The CronJob also should be resumed now. 
+ +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * False 0 3m24s 4h54m +``` + +Here, `False` in the `SUSPEND` column means the CronJob is no longer suspended and will trigger in the next schedule. + +## Cleanup + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete -n demo backupconfiguration sample-nats-backup +kubectl delete -n demo restoresession sample-nats-restore +kubectl delete -n demo repository gcs-repo +# delete the nats chart +helm delete sample-nats -n demo +``` diff --git a/docs/addons/nats/customization/examples/backup/multi-retention-policy.yaml b/docs/addons/nats/customization/examples/backup/multi-retention-policy.yaml new file mode 100644 index 00000000..0e8dc272 --- /dev/null +++ b/docs/addons/nats/customization/examples/backup/multi-retention-policy.yaml @@ -0,0 +1,24 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nat + retentionPolicy: + name: sample-nats-retention + keepLast: 5 + keepDaily: 10 + keepWeekly: 20 + keepMonthly: 50 + keepYearly: 100 + prune: true diff --git a/docs/addons/nats/customization/examples/backup/passing-args.yaml b/docs/addons/nats/customization/examples/backup/passing-args.yaml new file mode 100644 index 00000000..549b4628 --- /dev/null +++ b/docs/addons/nats/customization/examples/backup/passing-args.yaml @@ -0,0 +1,23 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + task: + name: nats-backup-2.6.1 + params: + - name: args + value: --check + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/docs/addons/nats/customization/examples/backup/passing-streams.yaml b/docs/addons/nats/customization/examples/backup/passing-streams.yaml new file mode 100644 index 00000000..9057e8a0 --- /dev/null +++ b/docs/addons/nats/customization/examples/backup/passing-streams.yaml @@ -0,0 +1,23 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + task: + name: nats-backup-2.6.1 + params: + - name: streams + value: "str1,str2" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/docs/addons/nats/customization/examples/backup/resource-limit.yaml b/docs/addons/nats/customization/examples/backup/resource-limit.yaml new file mode 100644 index 00000000..a9f4fc51 --- /dev/null +++ b/docs/addons/nats/customization/examples/backup/resource-limit.yaml @@ -0,0 +1,29 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + runtimeSettings: + container: + resources: + requests: + cpu: 
"200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/docs/addons/nats/customization/examples/backup/specific-user.yaml b/docs/addons/nats/customization/examples/backup/specific-user.yaml new file mode 100644 index 00000000..4e4774b9 --- /dev/null +++ b/docs/addons/nats/customization/examples/backup/specific-user.yaml @@ -0,0 +1,25 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/docs/addons/nats/customization/examples/restore/passing-args.yaml b/docs/addons/nats/customization/examples/restore/passing-args.yaml new file mode 100644 index 00000000..c637f1b9 --- /dev/null +++ b/docs/addons/nats/customization/examples/restore/passing-args.yaml @@ -0,0 +1,20 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + params: + - name: args + value: --no-progress + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + rules: + - snapshots: [latest] diff --git a/docs/addons/nats/customization/examples/restore/passing-streams.yaml b/docs/addons/nats/customization/examples/restore/passing-streams.yaml new file mode 100644 index 00000000..3bb16bb0 --- /dev/null +++ b/docs/addons/nats/customization/examples/restore/passing-streams.yaml @@ -0,0 +1,20 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + params: + - name: streams + value: "str1,str2" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + rules: + - snapshots: [latest] diff --git a/docs/addons/nats/customization/examples/restore/resource-limit.yaml b/docs/addons/nats/customization/examples/restore/resource-limit.yaml new file mode 100644 index 00000000..86a71af7 --- /dev/null +++ b/docs/addons/nats/customization/examples/restore/resource-limit.yaml @@ -0,0 +1,26 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] diff --git a/docs/addons/nats/customization/examples/restore/specific-snapshot.yaml b/docs/addons/nats/customization/examples/restore/specific-snapshot.yaml new file mode 100644 index 00000000..32ed0fa0 --- /dev/null +++ b/docs/addons/nats/customization/examples/restore/specific-snapshot.yaml @@ -0,0 +1,17 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: 
appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + rules: + - snapshots: [4bc21d6f] diff --git a/docs/addons/nats/customization/examples/restore/specific-user.yaml b/docs/addons/nats/customization/examples/restore/specific-user.yaml new file mode 100644 index 00000000..7ffa6182 --- /dev/null +++ b/docs/addons/nats/customization/examples/restore/specific-user.yaml @@ -0,0 +1,22 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + rules: + - snapshots: [latest] diff --git a/docs/addons/nats/customization/index.md b/docs/addons/nats/customization/index.md new file mode 100644 index 00000000..ba778f67 --- /dev/null +++ b/docs/addons/nats/customization/index.md @@ -0,0 +1,380 @@ +--- +title: NATS Backup Customization +description: Customizing NATS Backup and Restore process with Stash +menu: + docs_{{ .version }}: + identifier: stash-nats-customization + name: Customizing Backup & Restore Process + parent: stash-nats + weight: 50 +product_name: stash +menu_name: docs_{{ .version }} +section_menu_id: stash-addons +--- + +# Customizing Backup and Restore Process + +Stash provides rich customization supports for the backup and restore process to meet the requirements of various cluster configurations. This guide will show you some examples of these customizations. + +> Note: YAML files used in this tutorial are stored [here](https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/customization/examples). + +## Customizing Backup Process + +In this section, we are going to show you how to customize the backup process. Here, we are going to show some examples of providing arguments to the backup process, running the backup process as a specific user, taking backup of specific streams, etc. + +### Passing arguments to the backup process +Stash NATS addon uses NATS CLI for backup. You can pass arguments to the backup command of the NATS CLI through `args` param under `task.params` section. + +The below example shows how you can pass the `--check` to check a stream for health prior to backup. + + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + task: + name: nats-backup-2.6.1 + params: + - name: args + value: --check + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +### Backup specific streams + +Stash takes backup of all the streams by default. If you want to take backup of specific streams, you can pass a list of streams through `streams` param under `task.params` section. + +The below example shows how you can pass the `"str1,str2"` to take backup of the streams `str1` and ` str2`. 
+ +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + task: + name: nats-backup-2.6.1 + params: + - name: streams + value: "str1,str2" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +### Running backup job as a specific user + +If your cluster requires running the backup job as a specific user, you can provide `securityContext` under `runtimeSettings.pod` section. The below example shows how you can run the backup job as the root user. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +### Specifying Memory/CPU limit/request for the backup job + +If you want to specify the Memory/CPU limit/request for your backup job, you can specify `resources` field under `runtimeSettings.container` section. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +### Using multiple retention policies + +You can also specify multiple retention policies for your backed up data. For example, you may want to keep few daily snapshots, few weekly snapshots, and few monthly snapshots, etc. You just need to pass the desired number with the respective key under the `retentionPolicy` section. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + schedule: "*/5 * * * *" + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nat + retentionPolicy: + name: sample-nats-retention + keepLast: 5 + keepDaily: 10 + keepWeekly: 20 + keepMonthly: 50 + keepYearly: 100 + prune: true +``` + +To know more about the available options for retention policies, please visit [here](/docs/concepts/crds/backupconfiguration.md#specretentionpolicy). + +## Customizing Restore Process + +In this section, we are going to show how you can overwrite existing streams, restore a specific snapshot, run restore job as a specific user, etc. + +### Passing arguments to the restore process +Stash NATS addon uses NATS CLI for restore. You can pass arguments to the restore command of NATS CLI through `args` param under `task.params` section. + +The below example shows how you can pass the `--no-progress` to disable the progress using the terminal bar. It will then issue log lines instead. 
+ + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name:nats-backup-2.6.1 + params: + - name: args + value: --no-progress + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + rules: + - snapshots: [latest] +``` + +### Restore specific streams + +Stash restores all the streams by default. If you want to restore specific streams, you can pass a list of streams through `streams` param under `task.params` section. + +The below example shows how you can pass the `"str1,str2"` to restore the streams `str1` and ` str2`. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + params: + - name: streams + value: "str1,str2" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + rules: + - snapshots: [latest] +``` + +### Overwrite existing streams + +Stash doesn't overwrite any existing stream by default during the restore process. If you want to overwrite the existing streams, you can pass `true` to the `overwrite` params under `task.params` section. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name:nats-backup-2.6.1 + params: + - name: overwrite + value: "true" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + rules: + - snapshots: [latest] +``` + +### Restore specific snapshot + +You can also restore a specific snapshot. At first, list the available snapshot as bellow, + +```bash +❯ kubectl get snapshots -n demo +NAME REPOSITORY HOSTNAME CREATED AT +gcs-repo-4bc21d6f gcs-repo host-0 2021-02-12T14:54:27Z +gcs-repo-f0ac7cbd gcs-repo host-0 2021-02-12T14:56:26Z +gcs-repo-9210ebb6 gcs-repo host-0 2021-02-12T14:58:27Z +gcs-repo-0aff8890 gcs-repo host-0 2021-02-12T15:00:28Z +``` + +>You can also filter the snapshots as shown in the guide [here](https://stash.run/docs/latest/concepts/crds/snapshot/#working-with-snapshot). + +Stash adds the Repository name as a prefix of the Snapshot. You have to remove the repository prefix and use only the last 8 characters as the snapshot name during restore. + +The below example shows how you can pass a specific snapshot name through the `snapshots` filed of `rules` section. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + rules: + - snapshots: [4bc21d6f] +``` + +>Please, do not specify multiple snapshots here. Each snapshot represents a complete backup of your database. Multiple snapshots are only usable during file/directory restore. + +### Running restore job as a specific user + +You can provide `securityContext` under `runtimeSettings.pod` section to run the restore job as a specific user. 
+ +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + runtimeSettings: + pod: + securityContext: + runAsUser: 0 + runAsGroup: 0 + rules: + - snapshots: [latest] +``` + +### Specifying Memory/CPU limit/request for the restore job + +Similar to the backup process, you can also provide `resources` field under the `runtimeSettings.container` section to limit the Memory/CPU for your restore job. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + runtimeSettings: + container: + resources: + requests: + cpu: "200m" + memory: "1Gi" + limits: + cpu: "200m" + memory: "1Gi" + rules: + - snapshots: [latest] +``` diff --git a/docs/addons/nats/helm/examples/appbinding.yaml b/docs/addons/nats/helm/examples/appbinding.yaml new file mode 100644 index 00000000..0c418424 --- /dev/null +++ b/docs/addons/nats/helm/examples/appbinding.yaml @@ -0,0 +1,15 @@ +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats + namespace: demo +spec: + clientConfig: + service: + name: sample-nats + port: 4222 + scheme: nats + type: nats.io/nats + version: 2.6.1 diff --git a/docs/addons/nats/helm/examples/backupconfiguration.yaml b/docs/addons/nats/helm/examples/backupconfiguration.yaml new file mode 100644 index 00000000..3e1fe9fa --- /dev/null +++ b/docs/addons/nats/helm/examples/backupconfiguration.yaml @@ -0,0 +1,29 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: sample-nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/docs/addons/nats/helm/examples/repository.yaml b/docs/addons/nats/helm/examples/repository.yaml new file mode 100644 index 00000000..dae91419 --- /dev/null +++ b/docs/addons/nats/helm/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/nats/sample-nats + storageSecretName: gcs-secret diff --git a/docs/addons/nats/helm/examples/restoresession.yaml b/docs/addons/nats/helm/examples/restoresession.yaml new file mode 100644 index 00000000..9c5f5242 --- /dev/null +++ b/docs/addons/nats/helm/examples/restoresession.yaml @@ -0,0 +1,26 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: 
nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] diff --git a/docs/addons/nats/helm/images/sample-nats-backup.png b/docs/addons/nats/helm/images/sample-nats-backup.png new file mode 100644 index 00000000..b3f603de Binary files /dev/null and b/docs/addons/nats/helm/images/sample-nats-backup.png differ diff --git a/docs/addons/nats/helm/index.md b/docs/addons/nats/helm/index.md new file mode 100644 index 00000000..e239b1b0 --- /dev/null +++ b/docs/addons/nats/helm/index.md @@ -0,0 +1,620 @@ +--- +title: Helm managed NATS +description: Backup Helm managed NATS using Stash +menu: + docs_{{ .version }}: + identifier: stash-nats-helm + name: Helm managed NATS + parent: stash-nats + weight: 20 +product_name: stash +menu_name: docs_{{ .version }} +section_menu_id: stash-addons +--- + +# Backup Helm managed NATS using Stash + +Stash `{{< param "info.version" >}}` supports backup and restoration of NATS streams. This guide will show you how you can backup & restore a Helm managed NATS server using Stash. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. +- Install Stash Enterprise in your cluster following the steps [here](/docs/setup/install/enterprise.md). +- If you are not familiar with how Stash backup and restore NATS streams, please check the following guide [here](/docs/addons/nats/overview/index.md). + +You have to be familiar with following custom resources: + +- [AppBinding](/docs/concepts/crds/appbinding.md) +- [Function](/docs/concepts/crds/function.md) +- [Task](/docs/concepts/crds/task.md) +- [BackupConfiguration](/docs/concepts/crds/backupconfiguration.md) +- [BackupSession](/docs/concepts/crds/backupsession.md) +- [RestoreSession](/docs/concepts/crds/restoresession.md) + +To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created already. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored [here](https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/helm/examples). + +## Prepare NATS + +In this section, we are going to deploy a NATS cluster. Then, we are going to insert some sample data into it. + +### Deploy NATS Cluster + +At first, let's deploy a NATS cluster. Here, we are going to use [NATS]( https://github.com/nats-io/k8s/tree/main/helm/charts/nats) chart from [nats.io](https://nats.io/). + +Let's deploy a nats cluster named `sample-nats` using Helm as below, + +```bash +# Add nats chart registry +$ helm repo add nats https://nats-io.github.io/k8s/helm/charts/ +# Update helm registries +$ helm repo update +# Install nats/nats chart into demo namespace +$ helm install sample-nats nats/nats -n demo \ +--set nats.jetstream.enabled=true \ +--set nats.jetstream.fileStorage.enabled=true \ +--set cluster.enabled=true \ +--set cluster.replicas=3 +``` + +This chart will create the necessary StatefulSet, Service, PVCs etc. for the NATS cluster. 
You can easily view all the resources created by the chart using the [ketall](https://github.com/corneliusweig/ketall) `kubectl` plugin as below,
+
+```bash
+❯ kubectl get-all -n demo -l app.kubernetes.io/instance=sample-nats
+NAME                                                 NAMESPACE   AGE
+configmap/sample-nats-config                         demo        11m
+endpoints/sample-nats                                demo        11m
+persistentvolumeclaim/sample-nats-js-pvc-sample-nats-0   demo    11m
+persistentvolumeclaim/sample-nats-js-pvc-sample-nats-1   demo    10m
+persistentvolumeclaim/sample-nats-js-pvc-sample-nats-2   demo    10m
+pod/sample-nats-0                                    demo        11m
+pod/sample-nats-1                                    demo        10m
+pod/sample-nats-2                                    demo        10m
+service/sample-nats                                  demo        11m
+controllerrevision.apps/sample-nats-775468b94f       demo        11m
+statefulset.apps/sample-nats                         demo        11m
+endpointslice.discovery.k8s.io/sample-nats-7n7v6     demo        11m
+```
+
+Now, wait for the NATS server pods `sample-nats-0`, `sample-nats-1`, `sample-nats-2` to go into `Running` state,
+
+```bash
+❯ kubectl get pod -n demo -l app.kubernetes.io/instance=sample-nats
+NAME            READY   STATUS    RESTARTS   AGE
+sample-nats-0   3/3     Running   0          9m58s
+sample-nats-1   3/3     Running   0          9m35s
+sample-nats-2   3/3     Running   0          9m12s
+```
+
+Once the NATS server pods are in `Running` state, verify that the NATS server is ready to accept connections.
+
+```bash
+❯ kubectl logs -n demo sample-nats-0 -c nats
+[7] 2021/09/06 08:33:53.111508 [INF] Starting nats-server
+[7] 2021/09/06 08:33:53.111560 [INF]   Version:  2.6.1
+...
+[7] 2021/09/06 08:33:53.116004 [INF] Server is ready
+```
+
+From the above log, we can see that the NATS server is ready to accept connections.
+
+### Insert Sample Data
+
+The above Helm chart also deploys a pod with the nats-box image, which can be used to interact with the NATS server. Let's verify that the nats-box pod has been created.
+
+```bash
+❯ kubectl get pod -n demo -l app=sample-nats-box
+NAME                               READY   STATUS    RESTARTS   AGE
+sample-nats-box-785f8458d7-wtnfx   1/1     Running   0          7m20s
+```
+
+Now, let's exec into the nats-box pod and insert some sample data,
+
+```
+❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l
+...
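+# Note: we assume the nats CLI inside the nats-box pod already points at the sample-nats service
+# (e.g. via the NATS_URL environment variable set by the chart); if not, add -s nats://sample-nats:4222 to the commands below.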
+# Let's create a stream named "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats stream add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --max-msgs-per-subject=-1 --discard old --dupe-window="0s" --replicas 1 +Stream ORDERS was created + +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 0 + Bytes: 0 B + FirstSeq: 0 + LastSeq: 0 + Active Consumers: 0 + + +# Verify that the stream has been created successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +Streams: + + ORDERS + +# Lets add some messages to the stream "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats pub ORDERS.scratch hello +08:55:39 Published 5 bytes to "ORDERS.scratch" + +# Add another message +sample-nats-box-785f8458d7-wtnfx:~# nats pub ORDERS.scratch world +08:56:11 Published 5 bytes to "ORDERS.scratch" + +# Verify that the messages have been published to the stream successfully +sample-nats-box-785f8458d7-wtnfx:~# nats stream info ORDERS +Information for Stream ORDERS created 2021-09-03T07:12:07Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: nats-0 + +State: + + Messages: 2 + Bytes: 98 B + FirstSeq: 1 @ 2021-09-03T08:55:39 UTC + LastSeq: 2 @ 2021-09-03T08:56:11 UTC + Active Consumers: 0 + +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +We have successfully deployed a NATS cluster, created a stream and publish some messages into the stream. In the subsequent sections, we are going to backup this sample data using Stash. + +## Prepare for Backup + +In this section, we are going to prepare the necessary resources (i.e. connection information, backend information, etc.) before backup. + +### Ensure NATS Addon + +When you install Stash Enterprise version, it will automatically install all the official addons. Make sure that NATS addon has been installed properly using the following command. + +```bash +❯ kubectl get tasks.stash.appscode.com | grep nats +nats-backup-2.6.1 24m +nats-restore-2.6.1 24m +``` + +This addon should be able to take backup of the NATS streams with matching major versions as discussed in [Addon Version Compatibility](/docs/addons/nats/README.md#addon-version-compatibility). + +### Create AppBinding + +Stash needs to know how to connect with the NATS server. An `AppBinding` exactly provides this information. It holds the Service and Secret information of the NATS server. You have to point to the respective `AppBinding` as a target of backup instead of the NATS server itself. + +Here, is the YAML of the `AppBinding` that we are going to create for the NATS server we have deployed earlier. 
+ +```yaml +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/instance: sample-nats + name: sample-nats + namespace: demo +spec: + clientConfig: + service: + name: sample-nats + port: 4222 + scheme: nats + type: nats.io/nats + version: 2.6.1 +``` + +Here, + +- **.spec.clientConfig.service** specifies the Service information to use to connects with the NATS server. +- `spec.type` specifies the type of the target. This is particularly helpful in auto-backup where you want to use different path prefixes for different types of target. + +Let's create the `AppBinding` we have shown above, + +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/helm/examples/appbinding.yaml +appbinding.appcatalog.appscode.com/sample-nats created +``` + +### Prepare Backend + +We are going to store our backed up data into a GCS bucket. So, we need to create a Secret with GCS credentials and a `Repository` object with the bucket information. If you want to use a different backend, please read the respective backend configuration doc from [here](/docs/guides/latest/backends/overview.md). + +**Create Storage Secret:** + +At first, let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket, + +```bash +$ echo -n 'changeit' > RESTIC_PASSWORD +$ echo -n '' > GOOGLE_PROJECT_ID +$ cat downloaded-sa-json.key > GOOGLE_SERVICE_ACCOUNT_JSON_KEY +$ kubectl create secret generic -n demo gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +**Create Repository:** + +Now, create a `Repository` object with the information of your desired bucket. Below is the YAML of `Repository` object we are going to create, + +```yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/nats/sample-nats + storageSecretName: gcs-secret +``` + +Let's create the `Repository` we have shown above, + +```bash +$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/helm/examples/repository.yaml +repository.stash.appscode.com/gcs-repo created +``` + +Now, we are ready to backup our streams into our desired backend. + +### Backup + +To schedule a backup, we have to create a `BackupConfiguration` object targeting the respective `AppBinding` of our NATS server. Then, Stash will create a CronJob to periodically backup the streams. + +#### Create BackupConfiguration + +Below is the YAML for `BackupConfiguration` object that we are going to use to backup the streams of the NATS server we have created earlier, + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: sample-nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +Here, + +- `.spec.schedule` specifies that we want to backup the streams at 5 minutes intervals. 
+- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to backup NATS streams. +- `.spec.repository.name` specifies the Repository CR name we have created earlier with backend information. +- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted NATS server. +- `spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the dumped data temporarily before uploading it into the cloud bucket. +- `.spec.retentionPolicy` specifies a policy indicating how we want to cleanup the old backups. + +Let's create the `BackupConfiguration` object we have shown above, + +```bash +$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/helm/examples/backupconfiguration.yaml +backupconfiguration.stash.appscode.com/sample-nats-backup created +``` + +#### Verify CronJob + +If everything goes well, Stash will create a CronJob with the schedule specified in `spec.schedule` field of `BackupConfiguration` object. + +Verify that the CronJob has been created using the following command, + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * False 0 14s +``` + +#### Wait for BackupSession + +The `sample-nats-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` object. + +Now, wait for a schedule to appear. Run the following command to watch for `BackupSession` object, + +```bash +❯ kubectl get backupsession -n demo -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +sample-nats-backup-8x8fp BackupConfiguration sample-nats-backup Succeeded 42s 8m28s +``` + +Here, the phase `Succeeded` means that the backup process has been completed successfully. + +#### Verify Backup + +Now, we are going to verify whether the backed up data is present in the backend or not. Once a backup is completed, Stash will update the respective `Repository` object to reflect the backup completion. Check that the repository `gcs-repo` has been updated by the following command, + +```bash +❯ kubectl get repository -n demo +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +gcs-repo true 1.382 KiB 1 9m4s 24m +``` + +Now, if we navigate to the GCS bucket, we will see the backed up data has been stored in `demo/nats/sample-nats` directory as specified by `.spec.backend.gcs.prefix` field of the `Repository` object. + +
+ Backup data in GCS Bucket +
Fig: Backup data in GCS Bucket
+
+ + +> Note: Stash keeps all the backed up data encrypted. So, data in the backend will not make any sense until they are decrypted. + +## Restore + +If you have followed the previous sections properly, you should have a successful backup of your NATS streams. Now, we are going to show how you can restore the streams from the backed up data. + +### Restore Into the Same NATS Cluster + +You can restore your data into the same NATS cluster you have backed up from or into a different NATS cluster in the same cluster or a different cluster. In this section, we are going to show you how to restore in the same NATS cluster which may be necessary when you have accidentally lost any data. + +#### Temporarily Pause Backup + +At first, let's stop taking any further backup of the NATS streams so that no backup runs after we delete the sample data. We are going to pause the `BackupConfiguration` object. Stash will stop taking any further backup when the `BackupConfiguration` is paused. + +Let's pause the `sample-nats-backup` BackupConfiguration, + +```bash +$ kubectl patch backupconfiguration -n demo sample-nats-backup --type="merge" --patch='{"spec": {"paused": true}}' +backupconfiguration.stash.appscode.com/sample-nats-backup patched +``` + +Verify that the `BackupConfiguration` has been paused, + +```bash +❯ kubectl get backupconfiguration -n demo sample-nats-backup +NAME TASK SCHEDULE PAUSED AGE +sample-nats-backup nats-backup-2.6.1 */5 * * * * true 2d18h +``` + +Notice the `PAUSED` column. Value `true` for this field means that the `BackupConfiguration` has been paused. + +Stash will also suspend the respective CronJob. + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup */5 * * * * True 0 56s 2d18h +``` + +#### Simulate Disaster + +Now, let's simulate a disaster scenario. Here, we are going to exec into the nats-box pod and delete the sample data we have inserted earlier. + +```bash +❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l +... +# delete the stream "ORDERS" +sample-nats-box-785f8458d7-wtnfx:~# nats stream rm ORDERS -f + +# verify that the stream has been deleted +sample-nats-box-785f8458d7-wtnfx:~# nats stream ls +No Streams defined + +sample-nats-box-785f8458d7-wtnfx:~# exit +``` + +#### Create RestoreSession + +To restore the streams, you have to create a `RestoreSession` object pointing to the `AppBinding` of the targeted NATS server. + +Here, is the YAML of the `RestoreSession` object that we are going to use for restoring the streams of the NATS server. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats + interimVolumeTemplate: + metadata: + name: nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] +``` + +Here, + +- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to restore NATS streams. +- `.spec.repository.name` specifies the Repository object that holds the backend information where our backed up data has been stored. +- `.spec.target.ref` refers to the respective AppBinding of the `sample-nats` cluster. 
+- `.spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the restored data temporarily before injecting it into the NATS server.
+- `.spec.rules` specifies that we are restoring data from the latest backup snapshot of the streams.
+
+Let's create the `RestoreSession` object we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/helm/examples/restoresession.yaml
+restoresession.stash.appscode.com/sample-nats-restore created
+```
+
+Once you have created the `RestoreSession` object, Stash will create a restore Job. Run the following command to watch the phase of the `RestoreSession` object,
+
+```bash
+❯ kubectl get restoresession -n demo -w
+NAME                  REPOSITORY   PHASE       DURATION   AGE
+sample-nats-restore   gcs-repo     Succeeded   15s        55s
+```
+
+The `Succeeded` phase means that the restore process has been completed successfully.
+
+#### Verify Restored Data
+
+Now, let's exec into the nats-box pod and verify whether the actual data has been restored or not,
+
+```bash
+❯ kubectl exec -n demo -it sample-nats-box-785f8458d7-wtnfx -- sh -l
+...
+# Verify that the stream has been restored successfully
+sample-nats-box-785f8458d7-wtnfx:~# nats stream ls
+Streams:
+
+	ORDERS
+
+# Verify that the messages have been restored successfully
+sample-nats-box-785f8458d7-wtnfx:~# nats stream info ORDERS
+Information for Stream ORDERS created 2021-09-03T07:12:07Z
+
+Configuration:
+
+             Subjects: ORDERS.*
+     Acknowledgements: true
+            Retention: File - Limits
+             Replicas: 1
+       Discard Policy: Old
+     Duplicate Window: 2m0s
+     Maximum Messages: unlimited
+        Maximum Bytes: unlimited
+          Maximum Age: 1y0d0h0m0s
+ Maximum Message Size: unlimited
+    Maximum Consumers: unlimited
+
+
+Cluster Information:
+
+                 Name: nats
+               Leader: nats-0
+
+State:
+
+             Messages: 2
+                Bytes: 98 B
+             FirstSeq: 1 @ 2021-09-03T08:55:39 UTC
+              LastSeq: 2 @ 2021-09-03T08:56:11 UTC
+     Active Consumers: 0
+
+sample-nats-box-785f8458d7-wtnfx:~# exit
+```
+
+Hence, we can see from the above output that the deleted data has been restored successfully from the backup.
+
+#### Resume Backup
+
+Since our data has been restored successfully, we can now resume our usual backup process. Resume the `BackupConfiguration` using the following command,
+
+```bash
+❯ kubectl patch backupconfiguration -n demo sample-nats-backup --type="merge" --patch='{"spec": {"paused": false}}'
+backupconfiguration.stash.appscode.com/sample-nats-backup patched
+```
+
+Verify that the `BackupConfiguration` has been resumed,
+
+```bash
+❯ kubectl get backupconfiguration -n demo sample-nats-backup
+NAME                 TASK                SCHEDULE      PAUSED   AGE
+sample-nats-backup   nats-backup-2.6.1   */5 * * * *   false    2d19h
+```
+
+Here, `false` in the `PAUSED` column means the backup has been resumed successfully. The CronJob should also be resumed now.
+
+```bash
+❯ kubectl get cronjob -n demo
+NAME                              SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-nats-backup   */5 * * * *   False     0        3m24s           4h54m
+```
+
+Here, `False` in the `SUSPEND` column means the CronJob is no longer suspended and will trigger in the next schedule.
+
+### Restore Into Different NATS Cluster of the Same Namespace
+
+If you want to restore the backed up data into a different NATS cluster of the same namespace, you have to create another `AppBinding` pointing to the desired NATS cluster. Then, you have to create the `RestoreSession` pointing to the new `AppBinding`, as sketched below.
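+
+For example, a minimal `AppBinding` for a hypothetical second NATS cluster named `other-nats` in the `demo` namespace might look like the following. The service name, port, and version used here are assumptions for illustration; adjust them to match your cluster, and add the `secret` section if that cluster requires authentication.
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: other-nats
+  namespace: demo
+spec:
+  clientConfig:
+    service:
+      name: other-nats # hypothetical Service of the second NATS cluster
+      port: 4222
+      scheme: nats
+  type: nats.io/nats
+  version: 2.6.1
+```
+
+Then, point the `RestoreSession` at it by setting `spec.target.ref.name` to `other-nats` instead of `sample-nats`.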
+ +### Restore Into Different Namespace + +If you want to restore into a different namespace of the same cluster, you have to create the Repository, backend Secret, AppBinding, in the desired namespace. You can use [Stash kubectl plugin](https://stash.run/docs/latest/guides/latest/cli/cli/) to easily copy the resources into a new namespace. Then, you have to create the `RestoreSession` object in the desired namespace pointing to the Repository, AppBinding of that namespace. + +### Restore Into Different Cluster + +If you want to restore into a different cluster, you have to install Stash in the desired cluster. Then, you have to create the Repository, backend Secret, AppBinding, in the desired cluster. Finally, you have to create the `RestoreSession` object in the desired cluster pointing to the Repository, AppBinding of that cluster. + +## Cleanup + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete -n demo backupconfiguration sample-nats-backup +kubectl delete -n demo restoresession sample-nats-restore +kubectl delete -n demo repository gcs-repo +# delete the nats chart +helm delete sample-nats -n demo +``` diff --git a/docs/addons/nats/overview/images/backup_overview.svg b/docs/addons/nats/overview/images/backup_overview.svg new file mode 100644 index 00000000..6d54b9c3 --- /dev/null +++ b/docs/addons/nats/overview/images/backup_overview.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/addons/nats/overview/images/restore_overview.svg b/docs/addons/nats/overview/images/restore_overview.svg new file mode 100644 index 00000000..e76fdc8c --- /dev/null +++ b/docs/addons/nats/overview/images/restore_overview.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/addons/nats/overview/index.md b/docs/addons/nats/overview/index.md new file mode 100644 index 00000000..59f25265 --- /dev/null +++ b/docs/addons/nats/overview/index.md @@ -0,0 +1,83 @@ +--- +title: NATS Backup & Restore Overview | Stash +description: How NATS Backup & Restore Works in Stash +menu: + docs_{{ .version }}: + identifier: stash-nats-overview + name: How does it works? + parent: stash-nats + weight: 10 +product_name: stash +menu_name: docs_{{ .version }} +section_menu_id: stash-addons +--- + +{{< notice type="warning" message="This is an Enterprise-only feature. Please install [Stash Enterprise Edition](/docs/setup/install/enterprise.md) to try this feature." >}} + +# How Stash Backups & Restores NATS Streams + +Stash `{{< param "info.version" >}}` supports backup and restore operation of NATS streams. This guide will give you an overview of how NATS stream backup and restore process works in Stash. + +## How Backup Works + +The following diagram shows how Stash takes a backup of NATS streams. Open the image in a new tab to see the enlarged version. + +
+ NATS Backup Overview +
Fig: NATS Backup Overview
+
+ +The backup process consists of the following steps: + +1. At first, a user creates a secret with access credentials of the backend where the backed up data will be stored. + +2. Then, she creates a `Repository` crd that specifies the backend information along with the secret that holds the credentials to access the backend. + +3. Then, she creates a `BackupConfiguration` crd targeting the [AppBinding](/docs/concepts/crds/appbinding.md) crd of the respective NATS server. The `BackupConfiguration` object also specifies the `Task` to use to backup the NATS streams. + +4. Stash operator watches for `BackupConfiguration` crd. + +5. Once Stash operator finds a `BackupConfiguration` crd, it creates a CronJob with the schedule specified in `BackupConfiguration` object to trigger backup periodically. + +6. On the next scheduled slot, the CronJob triggers a backup by creating a `BackupSession` crd. + +7. Stash operator also watches for `BackupSession` crd. + +8. When it finds a `BackupSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition to backup. + +9. Then, it creates the Job to backup the targeted NATS server. + +10. The backup Job reads necessary information to connect with the NATS server from the `AppBinding` crd. It also reads backend information and access credentials from `Repository` crd and Storage Secret respectively. + +11. Then, the Job dumps the targeted streams and uploads the output to the backend. Stash stores the dumped files temporarily before uploading into the backend. Hence, you should provide a PVC template using `spec.interimVolumeTemplate` field of `BackupConfiguration` crd to use to store those dumped files temporarily. Make sure that the provided PVC size is capable of storing all (or, specified) the NATS streams. + +12. Finally, when the backup is completed, the Job sends Prometheus metrics to the Pushgateway running inside Stash operator pod. It also updates the `BackupSession` and `Repository` status to reflect the backup procedure. + +## How Restore Process Works + +The following diagram shows how Stash restores backed up data into a NATS streaming server. Open the image in a new tab to see the enlarged version. + +
+ NATS Restore Overview +
Fig: NATS Restore Process
+
+ +The restore process consists of the following steps: + +1. At first, a user creates a `RestoreSession` crd targeting the `AppBinding` of the desired NATS server where the backed up data will be restored. It also specifies the `Repository` crd which holds the backend information and the `Task` to use to restore the target. + +2. Stash operator watches for `RestoreSession` object. + +3. Once it finds a `RestoreSession` object, it resolves the respective `Task` and `Function` and prepares a Job definition to restore. + +4. Then, it creates the Job to restore the target. + +5. The Job reads necessary information to connect with the NATS server from respective `AppBinding` crd. It also reads backend information and access credentials from `Repository` crd and Storage Secret respectively. + +6. Then, the job downloads the backed up data from the backend and restore the streams. Stash stores the downloaded files temporarily before inserting into the targeted NATS server. Hence, you should provide a PVC template using `spec.interimVolumeTemplate` field of `RestoreSession` crd to use to store those restored files temporarily. Make sure that the provided PVC size is capable of storing all the backed up NATS streams. + +7. Finally, when the restore process is completed, the Job sends Prometheus metrics to the Pushgateway and update the `RestoreSession` status to reflect restore completion. + +## Next Steps + +- Backup your NATS using Stash following the guide from [here](/docs/addons/nats/helm/index.md). diff --git a/docs/addons/nats/tls/examples/appbinding.yaml b/docs/addons/nats/tls/examples/appbinding.yaml new file mode 100644 index 00000000..71b54b30 --- /dev/null +++ b/docs/addons/nats/tls/examples/appbinding.yaml @@ -0,0 +1,18 @@ +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/instance: sample-nats-tls + name: sample-nats-tls + namespace: demo +spec: + clientConfig: + caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4RENDQWRpZ0F3SUJBZ0lRUDZ1UXIxQVlFWnJzREF6ZHBRR09HekFOQmdrcWhraUc5dzBCQVFzRkFEQVMKTVJBd0RnWURWUVFERXdkdVlYUnpMV05oTUI0WERUSXhNRGt5TnpBMU5EWTBOVm9YRFRJeU1Ea3lOakExTkRZMApOVm93RWpFUU1BNEdBMVVFQXhNSGJtRjBjeTFqWVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDCkFRb0NnZ0VCQUovelJtSE1CUVdGMTNXZlJubzFsV0ZJajZmNmhyRGRVR0RTMXBrY0hmMDlqNS90bEpYSHpCbVMKZSs2YS9Qb1MrdkMyWEtyeVp3UVB0NW5BaUxXR1NxM3VBRnJ3TUJncVBBQktOa1hHL1hjamNvbU5lVTFaQlNYYgo1WmlJa2F6TUZPOGFqRWxYb3RmYnQ2cVc5MGNCTVduRW9pcnUyWkFyam50WjJpMmpPeGRodUJpRTkxamRsZWMyCkdZWGFKVlJ5RkF1eVdXanVEV3o0NjFKdXBMdXcxVWJyVHpmMExUenQxdk9ONnZNU1RQT0Z0S0tnd0RGenB5ZkgKVjFFQlZ5aG1KSk42QW13SkErZGEvMmsxMUJCeHFEWldsOEZMWE1TWUcvU0hKak5sQ3VsQTFvVVJWbFI3MVF6KwpTanB2bkxKVm9nL01sYVcvTzB5N0lRcTVQNUZGeDBNQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trCk1BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNcm5ZN0Izek5VY1AvN3hHTzhkTFIwZVcxUnIKTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDQVZPQUZhTjRVOUQ0U1cxcWJUTDhkcWhFbklXTFd3YVBJdXJGSAo1MVVRMEUxenFTOWcvQ1gwUElOUmJ1bFpseHVKRGFBZEwweVYwYmZYZExLQnJacDNwS001eGRyaEoxQ3luNjV5CkRML0RTd3hTOHlxT3NwTXF2SkoyUTBhQ0JQTXhDRFZoOGVFZ2krOG9ISmdobkZzaTkvanNoZ0dUS09QbVVWdHcKTyszS1B0MFBiNVRDSVpJdlA1cXBybkU0U2hDWnRRZ0UyY0dJTEJPZEt5VEl6QlpuM3ZNZjc2Zjd4NU4rWEtINgpQN3Q4Yks0SUFSbzR1WUN0cDQ0K0dkY2FlcjlDL2RVNlpaMSs1Nm4xcUo3a3FTV3cwNFZqbi9CVWt5WnhIdFZPCkFLcUNCRWtnK3NBQytYUmNiOFdxTHkreEEzdmU0TmxqalE3T2MrVXVzanNrSndOVQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== + service: + name: sample-nats-tls + port: 4222 + scheme: nats + secret: + name: nats-client-tls + type: nats.io/nats + version: 2.6.1 diff --git 
a/docs/addons/nats/tls/examples/backupconfiguration.yaml b/docs/addons/nats/tls/examples/backupconfiguration.yaml new file mode 100644 index 00000000..ab6e7a70 --- /dev/null +++ b/docs/addons/nats/tls/examples/backupconfiguration.yaml @@ -0,0 +1,29 @@ +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup-tls + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats-tls + interimVolumeTemplate: + metadata: + name: nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true diff --git a/docs/addons/nats/tls/examples/ca.yaml b/docs/addons/nats/tls/examples/ca.yaml new file mode 100644 index 00000000..3ec8c1ef --- /dev/null +++ b/docs/addons/nats/tls/examples/ca.yaml @@ -0,0 +1,14 @@ +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: nats-ca + namespace: demo +spec: + secretName: nats-ca + duration: 8736h # 1 year + renewBefore: 240h # 10 days + issuerRef: + name: selfsigning + kind: ClusterIssuer + commonName: nats-ca + isCA: true diff --git a/docs/addons/nats/tls/examples/cert.yaml b/docs/addons/nats/tls/examples/cert.yaml new file mode 100644 index 00000000..45639bbc --- /dev/null +++ b/docs/addons/nats/tls/examples/cert.yaml @@ -0,0 +1,29 @@ +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: nats-server-tls + namespace: demo +spec: + secretName: nats-server-tls + duration: 2160h # 90 days + renewBefore: 240h # 10 days + issuerRef: + name: nats-ca + kind: Issuer + commonName: sample-nats-server + dnsNames: + - sample-nats-tls +--- +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: nats-client-tls + namespace: demo +spec: + secretName: nats-client-tls + duration: 2160h # 90 days + renewBefore: 240h # 10 days + issuerRef: + name: nats-ca + kind: Issuer + commonName: sample-nats-client diff --git a/docs/addons/nats/tls/examples/clusterissuer.yaml b/docs/addons/nats/tls/examples/clusterissuer.yaml new file mode 100644 index 00000000..4c39e255 --- /dev/null +++ b/docs/addons/nats/tls/examples/clusterissuer.yaml @@ -0,0 +1,7 @@ +apiVersion: cert-manager.io/v1 +kind: ClusterIssuer +metadata: + name: selfsigning + namespace: demo +spec: + selfSigned: {} diff --git a/docs/addons/nats/tls/examples/issuer.yaml b/docs/addons/nats/tls/examples/issuer.yaml new file mode 100644 index 00000000..fea695a4 --- /dev/null +++ b/docs/addons/nats/tls/examples/issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: nats-ca + namespace: demo +spec: + ca: + secretName: nats-ca diff --git a/docs/addons/nats/tls/examples/repository.yaml b/docs/addons/nats/tls/examples/repository.yaml new file mode 100644 index 00000000..c1a9d7d7 --- /dev/null +++ b/docs/addons/nats/tls/examples/repository.yaml @@ -0,0 +1,11 @@ +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/nats/sample-nats-tls + storageSecretName: gcs-secret diff --git a/docs/addons/nats/tls/examples/restoresession.yaml b/docs/addons/nats/tls/examples/restoresession.yaml new file mode 100644 index 00000000..243d012d --- /dev/null +++ b/docs/addons/nats/tls/examples/restoresession.yaml @@ -0,0 +1,26 @@ 
+apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore-tls + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats-tls + interimVolumeTemplate: + metadata: + name: nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] diff --git a/docs/addons/nats/tls/images/sample-nats-backup.png b/docs/addons/nats/tls/images/sample-nats-backup.png new file mode 100644 index 00000000..adc85980 Binary files /dev/null and b/docs/addons/nats/tls/images/sample-nats-backup.png differ diff --git a/docs/addons/nats/tls/index.md b/docs/addons/nats/tls/index.md new file mode 100644 index 00000000..284adc63 --- /dev/null +++ b/docs/addons/nats/tls/index.md @@ -0,0 +1,792 @@ +--- +title: TLS secured NATS +description: Backup TLS secured NATS using Stash +menu: + docs_{{ .version }}: + identifier: stash-nats-tls-auth + name: TLS secured NATS + parent: stash-nats + weight: 40 +product_name: stash +menu_name: docs_{{ .version }} +section_menu_id: stash-addons +--- + +# Backup TLS secured NATS using Stash + +Stash `{{< param "info.version" >}}` supports backup and restoration of NATS streams. This guide will show you how you can backup & restore a TLS secured NATS server using Stash. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. +- Install Stash Enterprise in your cluster following the steps [here](/docs/setup/install/enterprise.md). +- Install cert-manager in your cluster following the instruction [here](https://cert-manager.io/docs/installation/). +- If you are not familiar with how Stash backup and restore NATS streams, please check the following guide [here](/docs/addons/nats/overview/index.md). + +You have to be familiar with following custom resources: + +- [AppBinding](/docs/concepts/crds/appbinding.md) +- [Function](/docs/concepts/crds/function.md) +- [Task](/docs/concepts/crds/task.md) +- [BackupConfiguration](/docs/concepts/crds/backupconfiguration.md) +- [BackupSession](/docs/concepts/crds/backupsession.md) +- [RestoreSession](/docs/concepts/crds/restoresession.md) + +To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created already. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored [here](https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/tls/examples). + +## Prepare NATS + +In this section, we are going to deploy a TLS secured NATS cluster. Then, we are going to create a stream and publish some messages into it. + + +### Create Certificate +At first, let's create a ` ClusterIssuer` that we will be using to issue our CA certificates. Below is the YAML of `ClusterIssuer` object we are going to create. 
+```yaml +apiVersion: cert-manager.io/v1 +kind: ClusterIssuer +metadata: + name: selfsigning + namespace: demo +spec: + selfSigned: {} +``` +Let's create the `ClusterIssuer` we have shown above, +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/tls/examples/clusterissuer.yaml +clusterissuer.cert-manager.io/selfsigning created +``` + +Now, let's issue the CA certificate using the `ClusterIssuer` we have created above. Below is the YAML of `Certificate` object we are going to create. +```yaml +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: nats-ca + namespace: demo +spec: + secretName: nats-ca + duration: 8736h # 1 year + renewBefore: 240h # 10 days + issuerRef: + name: selfsigning + kind: ClusterIssuer + commonName: nats-ca + isCA: true +``` +Let's create the `Certificate` we have shown above, +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/tls/examples/ca.yaml +certificate.cert-manager.io/nats-ca created +``` + +Cert-manager will automatically create a Secret named specified by `spec.secretName` field with the desired CA certificate. Let's verify that the Secret has been created successfully, + +```bash +❯ kubectl get secret -n demo nats-ca +NAME TYPE DATA AGE +nats-ca kubernetes.io/tls 3 24h +``` + +Now, we are going create a `Issuer` with the above CA Secret. We are going to use this `Issuer` to issue server and client certificates for our NATS server. Below is the YAML of `Issuer` object we are going to create. + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: nats-ca +spec: + ca: + secretName: nats-ca +``` +Let's create the `Issuer` we have shown above, +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/tls/examples/issuer.yaml +issuer.cert-manager.io/nats-ca created +``` + +Now, lets create the server and client certificates. Below is the YAML of `Certificate` objects we are going to create. + +```yaml +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: nats-server-tls + namespace: demo +spec: + secretName: nats-server-tls + duration: 2160h # 90 days + renewBefore: 240h # 10 days + issuerRef: + name: nats-ca + kind: Issuer + commonName: sample-nats-server + dnsNames: + - sample-nats-tls +--- +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: nats-client-tls + namespace: demo +spec: + secretName: nats-client-tls + duration: 2160h # 90 days + renewBefore: 240h # 10 days + issuerRef: + name: nats-ca + kind: Issuer + commonName: sample-nats-client +``` +Let's create the `Certificates` we have shown above, +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/tls/examples/cert.yaml +certificate.cert-manager.io/nats-server-tls created +certificate.cert-manager.io/nats-client-tls created +``` + +Cert-manager will automatically create `nats-server-tls` and `nats-client-tls` Secrets with the desired certificates. Let's verify the Secrets have been created successfully, + +```bash +❯ kubectl get secrets -n demo nats-client-tls nats-server-tls +NAME TYPE DATA AGE +nats-client-tls kubernetes.io/tls 3 24h +nats-server-tls kubernetes.io/tls 3 24h + +``` + +### Deploy NATS Cluster + +Now, let's deploy a NATS cluster. Here, we are going to use [NATS]( https://nats-io.github.io/k8s/helm/charts/) chart from [nats.io](https://nats.io/). 
+ +Let's deploy a NATS cluster named `sample-nats` using Helm as below, + +```bash +# Add nats chart registry +$ helm repo add nats https://nats-io.github.io/k8s/helm/charts/ +# Update helm registries +$ helm repo update +# Install nats/nats chart into demo namespace +$ helm install sample-nats-tls nats/nats -n demo \ +--set nats.jetstream.enabled=true \ +--set nats.jetstream.fileStorage.enabled=true \ +--set nats.tls.secret.name=nats-server-tls \ +--set nats.tls.ca="ca.crt" \ +--set nats.tls.cert="tls.crt" \ +--set nats.tls.key="tls.key" \ +--set nats.tls.verify=true \ +--set cluster.enabled=true \ +--set cluster.replicas=3 + +``` + +This chart will create the necessary StatefulSet, Service, PVCs etc. for the NATS cluster. You can easily view all the resources created by chart using [ketall](https://github.com/corneliusweig/ketall) `kubectl` plugin as below, + +```bash +❯ kubectl get-all -n demo -l app.kubernetes.io/instance=sample-nats-tls +NAME NAMESPACE AGE +configmap/sample-nats-tls-config demo 9m40s +endpoints/sample-nats-tls demo 9m40s +persistentvolumeclaim/sample-nats-tls-js-pvc-sample-nats-tls-0 demo 9m40s +persistentvolumeclaim/sample-nats-tls-js-pvc-sample-nats-tls-1 demo 9m17s +persistentvolumeclaim/sample-nats-tls-js-pvc-sample-nats-tls-2 demo 8m54s +pod/sample-nats-tls-0 demo 9m40s +pod/sample-nats-tls-1 demo 9m17s +pod/sample-nats-tls-2 demo 8m54s +service/sample-nats-tls demo 9m40s +controllerrevision.apps/sample-nats-tls-76dfb9c75 demo 9m40s +statefulset.apps/sample-nats-tls demo 9m40s +endpointslice.discovery.k8s.io/sample-nats-tls-6lxps demo 9m40s + +``` + +Now, wait for the NATS server pods `sample-nats-tls-0`, `sample-nats-tls-1`, `sample-nats-tls-2` to go into `Running` state, + +```bash +❯ kubectl get pod -n demo -l app.kubernetes.io/instance=sample-nats +NAME READY STATUS RESTARTS AGE +sample-nats-tls-0 3/3 Running 0 11m +sample-nats-tls-1 3/3 Running 0 11m +sample-nats-tls-2 3/3 Running 0 11m +``` + +Once the pods are in `Running` state, verify that the NATS server is ready to accept the connections. + +```bash +❯ kubectl logs -n demo sample-nats-tls-0 -c nats +[7] 2021/09/06 08:33:53.111508 [INF] Starting nats-server +[7] 2021/09/06 08:33:53.111560 [INF] Version: 2.6.1 +... +[7] 2021/09/06 08:33:53.116004 [INF] Server is ready +``` + +From the above log, we can see the NATS server is ready to accept connections. + +### Insert Sample Data +The above Helm chart also deploy a pod with nats-box image which can be used to interact with the NATS server. Let's verify the nats-box pod has been created. + +```bash +❯ kubectl get pod -n demo -l app=sample-nats-tls-box +NAME READY STATUS RESTARTS AGE +sample-nats-tls-box-67fb4fb4f9-gtt9z 1/1 Running 0 13m +``` + +Now, we are going to exec into the nats-box pod and create some sample data, We are going to use the client certificates created in `nats-client-tls` Secret to connect with the NATS server. So, let's create the certificates files inside the nats-box pod. 
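+
+> Tip: The next few steps extract the certificate and key manually from the Secret YAML. If you prefer a shortcut, you could pull them out directly with `kubectl` and a `jsonpath` expression instead; the sketch below is optional and assumes a bash shell.
+
+```bash
+# Dots in the Secret data keys are escaped with a backslash in the jsonpath expression
+$ kubectl get secret -n demo nats-client-tls -o jsonpath='{.data.tls\.crt}' | base64 -d > /tmp/tls.crt
+$ kubectl get secret -n demo nats-client-tls -o jsonpath='{.data.tls\.key}' | base64 -d > /tmp/tls.key
+```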
+ +At first, let's get the certificates from the `nats-client-tls` Secret, + +```bash +❯ kubectl get secret -n demo nats-client-tls -o yaml +apiVersion: v1 +data: + ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4RENDQWRpZ0F3SUJBZ0lRUDZ1UXIxQVlFWnJzREF6ZHBRR09HekFOQmdrcWhraUc5dzBCQVFzRkFEQVMKTVJBd0RnWURWUVFERXdkdVlYUnpMV05oTUI0WERUSXhNRGt5TnpBMU5EWTBOVm9YRFRJeU1Ea3lOakExTkRZMApOVm93RWpFUU1BNEdBMVVFQXhNSGJtRjBjeTFqWVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDCkFRb0NnZ0VCQUovelJtSE1CUVdGMTNXZlJubzFsV0ZJajZmNmhyRGRVR0RTMXBrY0hmMDlqNS90bEpYSHpCbVMKZSs2YS9Qb1MrdkMyWEtyeVp3UVB0NW5BaUxXR1NxM3VBRnJ3TUJncVBBQktOa1hHL1hjamNvbU5lVTFaQlNYYgo1WmlJa2F6TUZPOGFqRWxYb3RmYnQ2cVc5MGNCTVduRW9pcnUyWkFyam50WjJpMmpPeGRodUJpRTkxamRsZWMyCkdZWGFKVlJ5RkF1eVdXanVEV3o0NjFKdXBMdXcxVWJyVHpmMExUenQxdk9ONnZNU1RQT0Z0S0tnd0RGenB5ZkgKVjFFQlZ5aG1KSk42QW13SkErZGEvMmsxMUJCeHFEWldsOEZMWE1TWUcvU0hKak5sQ3VsQTFvVVJWbFI3MVF6KwpTanB2bkxKVm9nL01sYVcvTzB5N0lRcTVQNUZGeDBNQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trCk1BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNcm5ZN0Izek5VY1AvN3hHTzhkTFIwZVcxUnIKTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDQVZPQUZhTjRVOUQ0U1cxcWJUTDhkcWhFbklXTFd3YVBJdXJGSAo1MVVRMEUxenFTOWcvQ1gwUElOUmJ1bFpseHVKRGFBZEwweVYwYmZYZExLQnJacDNwS001eGRyaEoxQ3luNjV5CkRML0RTd3hTOHlxT3NwTXF2SkoyUTBhQ0JQTXhDRFZoOGVFZ2krOG9ISmdobkZzaTkvanNoZ0dUS09QbVVWdHcKTyszS1B0MFBiNVRDSVpJdlA1cXBybkU0U2hDWnRRZ0UyY0dJTEJPZEt5VEl6QlpuM3ZNZjc2Zjd4NU4rWEtINgpQN3Q4Yks0SUFSbzR1WUN0cDQ0K0dkY2FlcjlDL2RVNlpaMSs1Nm4xcUo3a3FTV3cwNFZqbi9CVWt5WnhIdFZPCkFLcUNCRWtnK3NBQytYUmNiOFdxTHkreEEzdmU0TmxqalE3T2MrVXVzanNrSndOVQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== + tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMrekNDQWVPZ0F3SUJBZ0lSQUlnc1laWkI2RU1MQjR6NGRnZTI2ZUV3RFFZSktvWklodmNOQVFFTEJRQXcKRWpFUU1BNEdBMVVFQXhNSGJtRjBjeTFqWVRBZUZ3MHlNVEE1TWpjd05UUTJORGxhRncweU1URXlNall3TlRRMgpORGxhTUIweEd6QVpCZ05WQkFNVEVuTmhiWEJzWlMxdVlYUnpMV05zYVdWdWREQ0NBU0l3RFFZSktvWklodmNOCkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFOZEdselNWRTBGS1cyY1M5V3VqZmk1Wk5oY3I3dXJsUjZZSGR1dkwKQ2ZDRGdEZTNBUVBpWG92YzI4elgxY3d2cVVQU3l2WXpGUFN4dUtVRXpoLzBGZE5kMGk2SkVRQUp4dVgwU2JSdwpjYXZXMVR5MkZFWExtYTNiMnBWVWJ6dUE1VVdGQzFwd3hZVCsvcERHNmI4YnFuQVJiaFNvdUowQUoxTGNGT08zCjg1V1RtWUFDRHY4dmFyRFQvM0xmYWtndXJqYWc4SWdMMURyd2hxNFNjRllveElYbXJJZjhMTVVERkN4Y251aE0KNnIraUl4OXFhWkJjMys2eU4yNXNvc2J6ZDlXbXRlT3J3Z2pKUzRLdU9ZaWl2VzBsSDNzQTlOSG9HU3cwSDVtWgpjcndsMHZxV0JvaWFoMXdWSjM5S1NrWlRvLzhUaGpMQUV5QUZKdzRuNzk1eHFwOENBd0VBQWFOQk1EOHdEZ1lEClZSMFBBUUgvQkFRREFnV2dNQXdHQTFVZEV3RUIvd1FDTUFBd0h3WURWUjBqQkJnd0ZvQVV5dWRqc0hmTTFSdy8KL3ZFWTd4MHRIUjViVkdzd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGRTE0OTJkdS9HSVpKS3BuMEdRNE1STApHbDI5bXc1Wi9nUWxwTTN3NGdYU1hxbmNGczREcHJFd2g5R25PTkEzcXpta1NIakFwWmwwQzdWZi84Y2RnNS8zCk90UVZSSGkxQVJFcGFHMlVUMnFJSXp1SUVLN0tRZE5maXpVYVVaMFgzb041Kyt4YWU4WSsxa3dZOXZxaXdWRlcKbS96T1JzSmRtcnRqNGZRSTVaVGRzVG1jRkxqUXBOcktOSWFVU2pHOFM1Q3pOMlJBekZHTTBCZWYzSWFzWTF2WQpDTWpOZlBxaUNtZFNlOFFxRE1UMURwRExjaFltQlQ3UjdyR0JBaEFXWEpEZlVMRXlXUk9XdmRrWlhnTk5ZMnlJCkhmbktUTy9TL1FpUUs4N0Y2SWtEM2tKalZPVDhUNVBmYjBwYTJnaDlnSkZmZUJPVW9FUkYrdG4zYWI3Z3BkQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= + tls.key: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBMTBhWE5KVVRRVXBiWnhMMWE2TitMbGsyRnl2dTZ1VkhwZ2QyNjhzSjhJT0FON2NCCkErSmVpOXpiek5mVnpDK3BROUxLOWpNVTlMRzRwUVRPSC9RVjAxM1NMb2tSQUFuRzVmUkp0SEJ4cTliVlBMWVUKUmN1WnJkdmFsVlJ2TzREbFJZVUxXbkRGaFA3K2tNYnB2eHVxY0JGdUZLaTRuUUFuVXR3VTQ3ZnpsWk9aZ0FJTwoveTlxc05QL2N0OXFTQzZ1TnFEd2lBdlVPdkNHcmhKd1ZpakVoZWFzaC93c3hRTVVMRnllNkV6cXY2SWpIMnBwCmtGemY3ckkzYm15aXh2TjMxYWExNDZ2Q0NNbExncTQ1aUtLOWJTVWZld0QwMGVnWkxEUWZtWmx5dkNYUytwWUcKaUpxSFhCVW5mMHBLUmxPai94T0dNc0FUSUFVbkRpZnYzbkdxbndJREFRQUJBb0lCQUZWZ0NvRnhDY1RmLzJYZQpiL1J6VDR5RUZ0NlRydG43ZWpIUFRndHZaNDY2S0RSd1lIZXc0L3dsNkFuU0kxa3FJYi9qTGxqN296ano3cDJMClRWQUExbE1RSjFZTFIvR3k3dTJ0dHpsWFNzMXlrdmpUNFRCWThhYXd4WHhwa3YrUE85NFpTSXBpcFFMOHVlcWkKNkhyQk54UGc1YjVOdDRHVVdRUVVnamhaY01JRm9DUUwrRmNkZGs4RkQ1UFkxNnprWDNReUpITlVkZXRoYmJKdQoveVAvMTk3b2l6aVdzbWJSQ3Z6Z3Q2bEtOUk51VjZJRitMdnM4RWxGUlAyeStLcmI2SXRjT3lwRjY5TGE1N1pZCjY0enJWSVJnc2FYZVBTTWpGVkN1ZHFDY2QrL0FEeFU3YVc2TzhFaXJBQ0pqcXFYcnBYamd3MVNUY21WR2ZYK1gKQithRjIza0NnWUVBNXFsYnJGaHhWQlZBVTlMR3JnVGNLZXRGUEJveTQyOWxVTnJXMU5xSUxEaExZVjgxRGZIQQpXQVdYK1l0NGJUTXZqTUd2TDhDM0pXL0NiNDNHSDhmU2lwUTZDUlR2dDBYNXlyNytWL0tVdWdpKzV4ZmVoMlIzCldiRUNBVWNKM0UxVmtvRkN2ZEFtbHZBMlNROGlRVHNkV3JuclJxWFhEVkFGQzlCNEtZQ0JiWk1DZ1lFQTd1eU4KNVVGMkg0dmZmSWtZQUZzS0k0YlM4blQ5UGw4dmNPeUNoTUFLNUlnSHQyQUk0RTRVa20zeHFiblV0cDdjdHoySgplbUxJaTJ3M2pVMXdjWittc0pIYnRyMmxDdGVMNjJjMENLYXVsaDA1YWhLZ0VZUjhlVzcyL1F6Skg1WDhPTkZsCkx4eW9vRUo0Vmo2T1VwcTZKZmJTUW03YUFKMWE1dVBHUzhZWmxrVUNnWUJhcm5oSThGaFZpeWxJQ3hScTg2UXUKb3IwTVhPeG10N09vTHZESXE4VmZSUjUxZ0gyV0p0WE1oUjV6VDk2Zlo4RW80RGhrV0twb0FHRDdoRXhBMEVrNAppLytvOUY4dHVVZno2bFNKOU9kOW45U1ZlNi9Ub0s2L1J6U1hsZnNOYmlYWFBCUW1GWUFtVlBleWowMlRRWTlQCnpNbnZjMkZ4YldVZWVPM1V1eDJuR3dLQmdCQkFreDVuSjR2WnplZ0F3MXN5MWl1NGZoejBERTN6MTV4TTJrd0IKYkR4RGJKTHl1MmZXcDl1V0V2eENvYytTV3QwMEdHZjAxRU4zcHdlN25zeDcyYkRsR3hjQkszcmpVcWMrcS9GeQp0U21NNzF6aHkzV2xsM29ETEZYbVNzQVZTY1RycVlCYzZMT09FZlY3NTk2Q20rcjlNU3hIc2hpY201UmRKaDM5CmFid3BBb0dBQmZKeC91cXk3RGpubnZPLzUxMW1CSEh5bS9pdG9TNzNVL2FOd1pqWmhyWlpzZGVxUUZNcm5xSTQKQU83S05ldE9oa3NmQnVndTg5Q3dXTjRYMkpYWjd1aFFlVWlKQWNhNFc4ZmJLdTNjYytsZUVMTisrVEZ5Um91MgphRlpRYnZSazdYU01nM3d0ekk5SUtoNzIyZXRPVXJPb0FaNWthRlcrRWt2WjBoNmlTTzg9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== +kind: Secret +metadata: + annotations: + cert-manager.io/alt-names: "" + cert-manager.io/certificate-name: nats-client-tls + cert-manager.io/common-name: sample-nats-client + cert-manager.io/ip-sans: "" + cert-manager.io/issuer-group: "" + cert-manager.io/issuer-kind: Issuer + cert-manager.io/issuer-name: nats-ca + cert-manager.io/uri-sans: "" + creationTimestamp: "2021-09-27T05:46:49Z" + name: nats-client-tls + namespace: demo + resourceVersion: "386072" + uid: 89931f0b-aa0d-499c-b7f9-d3b6ada4ab08 +type: kubernetes.io/tls +``` + +Now, let's create `tls.crt` and `tls.key` files in the local machine, + +```bash +❯ echo 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMrekNDQWVPZ0F3SUJBZ0lSQUlnc1laWkI2RU1MQjR6NGRnZTI2ZUV3RFFZSktvWklodmNOQVFFTEJRQXcKRWpFUU1BNEdBMVVFQXhNSGJtRjBjeTFqWVRBZUZ3MHlNVEE1TWpjd05UUTJORGxhRncweU1URXlNall3TlRRMgpORGxhTUIweEd6QVpCZ05WQkFNVEVuTmhiWEJzWlMxdVlYUnpMV05zYVdWdWREQ0NBU0l3RFFZSktvWklodmNOCkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFOZEdselNWRTBGS1cyY1M5V3VqZmk1Wk5oY3I3dXJsUjZZSGR1dkwKQ2ZDRGdEZTNBUVBpWG92YzI4elgxY3d2cVVQU3l2WXpGUFN4dUtVRXpoLzBGZE5kMGk2SkVRQUp4dVgwU2JSdwpjYXZXMVR5MkZFWExtYTNiMnBWVWJ6dUE1VVdGQzFwd3hZVCsvcERHNmI4YnFuQVJiaFNvdUowQUoxTGNGT08zCjg1V1RtWUFDRHY4dmFyRFQvM0xmYWtndXJqYWc4SWdMMURyd2hxNFNjRllveElYbXJJZjhMTVVERkN4Y251aE0KNnIraUl4OXFhWkJjMys2eU4yNXNvc2J6ZDlXbXRlT3J3Z2pKUzRLdU9ZaWl2VzBsSDNzQTlOSG9HU3cwSDVtWgpjcndsMHZxV0JvaWFoMXdWSjM5S1NrWlRvLzhUaGpMQUV5QUZKdzRuNzk1eHFwOENBd0VBQWFOQk1EOHdEZ1lEClZSMFBBUUgvQkFRREFnV2dNQXdHQTFVZEV3RUIvd1FDTUFBd0h3WURWUjBqQkJnd0ZvQVV5dWRqc0hmTTFSdy8KL3ZFWTd4MHRIUjViVkdzd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGRTE0OTJkdS9HSVpKS3BuMEdRNE1STApHbDI5bXc1Wi9nUWxwTTN3NGdYU1hxbmNGczREcHJFd2g5R25PTkEzcXpta1NIakFwWmwwQzdWZi84Y2RnNS8zCk90UVZSSGkxQVJFcGFHMlVUMnFJSXp1SUVLN0tRZE5maXpVYVVaMFgzb041Kyt4YWU4WSsxa3dZOXZxaXdWRlcKbS96T1JzSmRtcnRqNGZRSTVaVGRzVG1jRkxqUXBOcktOSWFVU2pHOFM1Q3pOMlJBekZHTTBCZWYzSWFzWTF2WQpDTWpOZlBxaUNtZFNlOFFxRE1UMURwRExjaFltQlQ3UjdyR0JBaEFXWEpEZlVMRXlXUk9XdmRrWlhnTk5ZMnlJCkhmbktUTy9TL1FpUUs4N0Y2SWtEM2tKalZPVDhUNVBmYjBwYTJnaDlnSkZmZUJPVW9FUkYrdG4zYWI3Z3BkQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= | base64 -d > /tmp/tls.crt +❯ echo LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBMTBhWE5KVVRRVXBiWnhMMWE2TitMbGsyRnl2dTZ1VkhwZ2QyNjhzSjhJT0FON2NCCkErSmVpOXpiek5mVnpDK3BROUxLOWpNVTlMRzRwUVRPSC9RVjAxM1NMb2tSQUFuRzVmUkp0SEJ4cTliVlBMWVUKUmN1WnJkdmFsVlJ2TzREbFJZVUxXbkRGaFA3K2tNYnB2eHVxY0JGdUZLaTRuUUFuVXR3VTQ3ZnpsWk9aZ0FJTwoveTlxc05QL2N0OXFTQzZ1TnFEd2lBdlVPdkNHcmhKd1ZpakVoZWFzaC93c3hRTVVMRnllNkV6cXY2SWpIMnBwCmtGemY3ckkzYm15aXh2TjMxYWExNDZ2Q0NNbExncTQ1aUtLOWJTVWZld0QwMGVnWkxEUWZtWmx5dkNYUytwWUcKaUpxSFhCVW5mMHBLUmxPai94T0dNc0FUSUFVbkRpZnYzbkdxbndJREFRQUJBb0lCQUZWZ0NvRnhDY1RmLzJYZQpiL1J6VDR5RUZ0NlRydG43ZWpIUFRndHZaNDY2S0RSd1lIZXc0L3dsNkFuU0kxa3FJYi9qTGxqN296ano3cDJMClRWQUExbE1RSjFZTFIvR3k3dTJ0dHpsWFNzMXlrdmpUNFRCWThhYXd4WHhwa3YrUE85NFpTSXBpcFFMOHVlcWkKNkhyQk54UGc1YjVOdDRHVVdRUVVnamhaY01JRm9DUUwrRmNkZGs4RkQ1UFkxNnprWDNReUpITlVkZXRoYmJKdQoveVAvMTk3b2l6aVdzbWJSQ3Z6Z3Q2bEtOUk51VjZJRitMdnM4RWxGUlAyeStLcmI2SXRjT3lwRjY5TGE1N1pZCjY0enJWSVJnc2FYZVBTTWpGVkN1ZHFDY2QrL0FEeFU3YVc2TzhFaXJBQ0pqcXFYcnBYamd3MVNUY21WR2ZYK1gKQithRjIza0NnWUVBNXFsYnJGaHhWQlZBVTlMR3JnVGNLZXRGUEJveTQyOWxVTnJXMU5xSUxEaExZVjgxRGZIQQpXQVdYK1l0NGJUTXZqTUd2TDhDM0pXL0NiNDNHSDhmU2lwUTZDUlR2dDBYNXlyNytWL0tVdWdpKzV4ZmVoMlIzCldiRUNBVWNKM0UxVmtvRkN2ZEFtbHZBMlNROGlRVHNkV3JuclJxWFhEVkFGQzlCNEtZQ0JiWk1DZ1lFQTd1eU4KNVVGMkg0dmZmSWtZQUZzS0k0YlM4blQ5UGw4dmNPeUNoTUFLNUlnSHQyQUk0RTRVa20zeHFiblV0cDdjdHoySgplbUxJaTJ3M2pVMXdjWittc0pIYnRyMmxDdGVMNjJjMENLYXVsaDA1YWhLZ0VZUjhlVzcyL1F6Skg1WDhPTkZsCkx4eW9vRUo0Vmo2T1VwcTZKZmJTUW03YUFKMWE1dVBHUzhZWmxrVUNnWUJhcm5oSThGaFZpeWxJQ3hScTg2UXUKb3IwTVhPeG10N09vTHZESXE4VmZSUjUxZ0gyV0p0WE1oUjV6VDk2Zlo4RW80RGhrV0twb0FHRDdoRXhBMEVrNAppLytvOUY4dHVVZno2bFNKOU9kOW45U1ZlNi9Ub0s2L1J6U1hsZnNOYmlYWFBCUW1GWUFtVlBleWowMlRRWTlQCnpNbnZjMkZ4YldVZWVPM1V1eDJuR3dLQmdCQkFreDVuSjR2WnplZ0F3MXN5MWl1NGZoejBERTN6MTV4TTJrd0IKYkR4RGJKTHl1MmZXcDl1V0V2eENvYytTV3QwMEdHZjAxRU4zcHdlN25zeDcyYkRsR3hjQkszcmpVcWMrcS9GeQp0U21NNzF6aHkzV2xsM29ETEZYbVNzQVZTY1RycVlCYzZMT09FZlY3NTk2Q20rcjlNU3hIc2hpY201UmRKaDM5CmFid3BBb0dBQmZKeC91cXk3RGpubnZPLzUxMW1CSEh5bS9pdG9TNzNVL2FOd1pqWmhyWlpzZGVxUUZNcm5xSTQKQU83S05ldE9oa3NmQnVndTg
5Q3dXTjRYMkpYWjd1aFFlVWlKQWNhNFc4ZmJLdTNjYytsZUVMTisrVEZ5Um91MgphRlpRYnZSazdYU01nM3d0ekk5SUtoNzIyZXRPVXJPb0FaNWthRlcrRWt2WjBoNmlTTzg9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== | base64 -d > /tmp/tls.key +``` + +Then, let's copy these files from local machine to nats-box pod, + +```bash +❯ kubectl cp -n demo /tmp/tls.crt sample-nats-box-785f8458d7-wtnfx:/tmp/tls.crt +❯ kubectl cp -n demo /tmp/tls.key sample-nats-box-785f8458d7-wtnfx:/tmp/tls.key +``` + +Finally, Let's exec into the nats-box pod, + +```bash +❯ kubectl exec -n demo sample-nats-tls-box-67fb4fb4f9-gtt9z -it -- sh -l +... +# Let's export the tls.crt and tls.key file paths as environment variables to make further commands re-usable. +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# export NATS_CERT=/tmp/tls.crt +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# export NATS_KEY=/tmp/tls.key + +# Let's create a stream named "ORDERS" +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# nats stream add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --max-msgs-per-subject=-1 --discard old --dupe-window="0s" --replicas 1 +Stream ORDERS was created + +Information for Stream ORDERS created 2021-09-27T06:27:30Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: sample-nats-tls-2 + +State: + + Messages: 0 + Bytes: 0 B + FirstSeq: 0 + LastSeq: 0 + Active Consumers: 0 + +# Verify that the stream has been created successfully +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# nats stream ls +Streams: + + ORDERS + +# Lets add some messages to the stream "ORDERS" +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# nats pub ORDERS.scratch hello +06:29:18 Published 5 bytes to "ORDERS.scratch" + +# Add another message +ample-nats-tls-box-67fb4fb4f9-gtt9z:~# nats pub ORDERS.scratch world +06:29:41 Published 5 bytes to "ORDERS.scratch" + +# Verify that the messages have been published to the stream successfully +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# nats stream info ORDERS +Information for Stream ORDERS created 2021-09-27T06:27:30Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: sample-nats-tls-2 + +State: + + Messages: 2 + Bytes: 98 B + FirstSeq: 1 @ 2021-09-27T06:29:18 UTC + LastSeq: 2 @ 2021-09-27T06:29:41 UTC + Active Consumers: 0 + +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# exit +``` + +We have successfully deployed a NATS cluster, created a stream and publish some messages into the stream. In the subsequent sections, we are going to backup this sample data using Stash. + +## Prepare for Backup + +In this section, we are going to prepare the necessary resources (i.e. connection information, backend information, etc.) before backup. + +### Ensure NATS Addon + +When you install Stash Enterprise version, it will automatically install all the official addons. Make sure that NATS addon was installed properly using the following command. 
+ +```bash +❯ kubectl get tasks.stash.appscode.com | grep nats +nats-backup-2.6.1 24m +nats-restore-2.6.1 24m +``` + +This addon should be able to take backup of the NATS streams with matching major versions as discussed in [Addon Version Compatibility](/docs/addons/nats/README.md#addon-version-compatibility). + + +### Create AppBinding + +Stash needs to know how to connect with the NATS server. An `AppBinding` exactly provides this information. It holds the Service and Secret information of the NATS server. You have to point to the respective `AppBinding` as a target of backup instead of the NATS server itself. + +Here, is the YAML of the `AppBinding` that we are going to create for the NATS server we have deployed earlier. + +```yaml +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + labels: + app.kubernetes.io/instance: sample-nats-tls + name: sample-nats-tls + namespace: demo +spec: + clientConfig: + caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4RENDQWRpZ0F3SUJBZ0lRUDZ1UXIxQVlFWnJzREF6ZHBRR09HekFOQmdrcWhraUc5dzBCQVFzRkFEQVMKTVJBd0RnWURWUVFERXdkdVlYUnpMV05oTUI0WERUSXhNRGt5TnpBMU5EWTBOVm9YRFRJeU1Ea3lOakExTkRZMApOVm93RWpFUU1BNEdBMVVFQXhNSGJtRjBjeTFqWVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDCkFRb0NnZ0VCQUovelJtSE1CUVdGMTNXZlJubzFsV0ZJajZmNmhyRGRVR0RTMXBrY0hmMDlqNS90bEpYSHpCbVMKZSs2YS9Qb1MrdkMyWEtyeVp3UVB0NW5BaUxXR1NxM3VBRnJ3TUJncVBBQktOa1hHL1hjamNvbU5lVTFaQlNYYgo1WmlJa2F6TUZPOGFqRWxYb3RmYnQ2cVc5MGNCTVduRW9pcnUyWkFyam50WjJpMmpPeGRodUJpRTkxamRsZWMyCkdZWGFKVlJ5RkF1eVdXanVEV3o0NjFKdXBMdXcxVWJyVHpmMExUenQxdk9ONnZNU1RQT0Z0S0tnd0RGenB5ZkgKVjFFQlZ5aG1KSk42QW13SkErZGEvMmsxMUJCeHFEWldsOEZMWE1TWUcvU0hKak5sQ3VsQTFvVVJWbFI3MVF6KwpTanB2bkxKVm9nL01sYVcvTzB5N0lRcTVQNUZGeDBNQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trCk1BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNcm5ZN0Izek5VY1AvN3hHTzhkTFIwZVcxUnIKTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDQVZPQUZhTjRVOUQ0U1cxcWJUTDhkcWhFbklXTFd3YVBJdXJGSAo1MVVRMEUxenFTOWcvQ1gwUElOUmJ1bFpseHVKRGFBZEwweVYwYmZYZExLQnJacDNwS001eGRyaEoxQ3luNjV5CkRML0RTd3hTOHlxT3NwTXF2SkoyUTBhQ0JQTXhDRFZoOGVFZ2krOG9ISmdobkZzaTkvanNoZ0dUS09QbVVWdHcKTyszS1B0MFBiNVRDSVpJdlA1cXBybkU0U2hDWnRRZ0UyY0dJTEJPZEt5VEl6QlpuM3ZNZjc2Zjd4NU4rWEtINgpQN3Q4Yks0SUFSbzR1WUN0cDQ0K0dkY2FlcjlDL2RVNlpaMSs1Nm4xcUo3a3FTV3cwNFZqbi9CVWt5WnhIdFZPCkFLcUNCRWtnK3NBQytYUmNiOFdxTHkreEEzdmU0TmxqalE3T2MrVXVzanNrSndOVQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== + service: + name: sample-nats-tls + port: 4222 + scheme: nats + secret: + name: nats-client-tls + type: nats.io/nats + version: 2.6.1 +``` + +Here, + +- `.spec.clientConfig.caBundle` specifies a PEM encoded CA bundle which will be used to validate the serving certificate of the NATS server. +- `.spec.clientConfig.service` specifies the Service information to use to connects with the NATS server. +- `.spec.secret` specifies the name of the Secret that holds necessary credentials to access the server. +- `.spec.type` specifies the type of the target. This is particularly helpful in auto-backup where you want to use different path prefixes for different types of target. + +Let's create the `AppBinding` we have shown above, + +```bash +$ kubectl apply -f https://github.com/stashed/docs/tree/{{< param "info.version" >}}/docs/addons/nats/tls/examples/appbinding.yaml +appbinding.appcatalog.appscode.com/sample-nats-tls created +``` + +### Prepare Backend + +We are going to store our backed up data into a GCS bucket. So, we need to create a Secret with GCS credentials and a `Repository` object with the bucket information. 
If you want to use a different backend, please read the respective backend configuration doc from [here](/docs/guides/latest/backends/overview.md). + +**Create Storage Secret:** + +At first, let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket, + +```bash +$ echo -n 'changeit' > RESTIC_PASSWORD +$ echo -n '' > GOOGLE_PROJECT_ID +$ cat downloaded-sa-json.key > GOOGLE_SERVICE_ACCOUNT_JSON_KEY +$ kubectl create secret generic -n demo gcs-secret \ + --from-file=./RESTIC_PASSWORD \ + --from-file=./GOOGLE_PROJECT_ID \ + --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY +secret/gcs-secret created +``` + +**Create Repository:** + +Now, create a `Repository` object with the information of your desired bucket. Below is the YAML of `Repository` object we are going to create, + +```yaml +apiVersion: stash.appscode.com/v1alpha1 +kind: Repository +metadata: + name: gcs-repo + namespace: demo +spec: + backend: + gcs: + bucket: stash-testing + prefix: /demo/nats/sample-nats-tls + storageSecretName: gcs-secret +``` + +Let's create the `Repository` we have shown above, + +```bash +$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/tls/examples/repository.yaml +repository.stash.appscode.com/gcs-repo created +``` + +Now, we are ready to backup our streams into our desired backend. + +### Backup + +To schedule a backup, we have to create a `BackupConfiguration` object targeting the respective `AppBinding` of our NATS server. Then, Stash will create a CronJob to periodically backup the streams. + +#### Create BackupConfiguration + +Below is the YAML for `BackupConfiguration` object that we are going to use to backup the streams of the NATS server we have created earlier, + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: BackupConfiguration +metadata: + name: sample-nats-backup-tls + namespace: demo +spec: + task: + name: nats-backup-2.6.1 + schedule: "*/5 * * * *" + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats-tls + interimVolumeTemplate: + metadata: + name: nats-backup-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + retentionPolicy: + name: keep-last-5 + keepLast: 5 + prune: true +``` + +Here, + +- `.spec.schedule` specifies that we want to backup the streams at 5 minutes intervals. +- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to backup NATS streams. +- `.spec.repository.name` specifies the Repository CR name we have created earlier with backend information. +- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted NATS server. +- `spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the dumped data temporarily before uploading it into the cloud bucket. +- `.spec.retentionPolicy` specifies a policy indicating how we want to cleanup the old backups. 
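+
+If you have saved the manifest above locally (for example, as `backupconfiguration.yaml`), you can optionally ask the API server to validate it with a dry run before creating anything. This is just an optional sanity check:
+
+```bash
+# Server-side dry run: the API server validates the object against the
+# BackupConfiguration CRD schema without persisting it
+$ kubectl apply --dry-run=server -f backupconfiguration.yaml
+```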
+ +Let's create the `BackupConfiguration` object we have shown above, + +```bash +$ kubectl create -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/tls/examples/backupconfiguration.yaml +backupconfiguration.stash.appscode.com/sample-nats-backup-tls created +``` + +#### Verify CronJob + +If everything goes well, Stash will create a CronJob with the schedule specified in `spec.schedule` field of `BackupConfiguration` object. + +Verify that the CronJob has been created using the following command, + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup-tls */5 * * * * False 0 14s +``` + +#### Wait for BackupSession + +The `stash-sample-nats-backup` CronJob will trigger a backup on each scheduled slot by creating a `BackupSession` object. + +Now, wait for a schedule to appear. Run the following command to watch for `BackupSession` object, + +```bash +❯ kubectl get backupsession -n demo -w +NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE +sample-nats-backup-tls-prszs BackupConfiguration sample-nats-backup-tls Succeeded 35s 84s +``` + +Here, the phase `Succeeded` means that the backup process has been completed successfully. + +#### Verify Backup + +Now, we are going to verify whether the backed up data is present in the backend or not. Once a backup is completed, Stash will update the respective `Repository` object to reflect the backup completion. Check that the repository `gcs-repo` has been updated by the following command, + +```bash +❯ kubectl get repository -n demo +NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE +gcs-repo true 4.156 KiB 3 2m2s 97m +``` + +Now, if we navigate to the GCS bucket, we will see the backed up data has been stored in `demo/nats/sample-nats-tls` directory as specified by `.spec.backend.gcs.prefix` field of the `Repository` object. + +
+<figure align="center">
+  <img alt="Backup data in GCS Bucket" src="/docs/addons/nats/tls/images/sample-nats-backup.png">
+  <figcaption align="center">Fig: Backup data in GCS Bucket</figcaption>
+</figure>
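+
+If you prefer the command line over the GCS web console, one way to confirm that Stash has written data under the configured prefix is to list it with `gsutil` (this assumes the Google Cloud SDK is installed and authorized for the `stash-testing` bucket):
+
+```bash
+# List the repository objects Stash (restic) created under the Repository prefix
+$ gsutil ls gs://stash-testing/demo/nats/sample-nats-tls/
+```
+
+You should see restic's repository layout (e.g. `config`, `data/`, `index/`, `keys/`, `snapshots/`) rather than the raw stream data.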
+ + + +> Note: Stash keeps all the backed up data encrypted. So, data in the backend will not make any sense until they are decrypted. + +## Restore + +If you have followed the previous sections properly, you should have a successful backup of your nats streams. Now, we are going to show how you can restore the streams from the backed up data. + +### Restore Into the Same NATS Cluster + +You can restore your data into the same NATS cluster you have backed up from or into a different NATS cluster in the same cluster or a different cluster. In this section, we are going to show you how to restore in the same NATS cluster which may be necessary when you have accidentally lost any data. + +#### Temporarily Pause Backup + +At first, let's stop taking any further backup of the NATS streams so that no backup runs after we delete the sample data. We are going to pause the `BackupConfiguration` object. Stash will stop taking any further backup when the `BackupConfiguration` is paused. + +Let's pause the `sample-nats-backup` BackupConfiguration, + +```bash +$ kubectl patch backupconfiguration -n demo sample-nats-backup-tls --type="merge" --patch='{"spec": {"paused": true}}' +``` + +Verify that the `BackupConfiguration` has been paused, + +```bash +❯ kubectl get backupconfiguration -n demo sample-nats-backup-tls +NAME TASK SCHEDULE PAUSED AGE +sample-nats-backup-tls nats-backup-2.6.1 */5 * * * * true 4m26s +``` + +Notice the `PAUSED` column. Value `true` for this field means that the `BackupConfiguration` has been paused. + +Stash will also suspend the respective CronJob. + +```bash +❯ kubectl get cronjob -n demo +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +stash-backup-sample-nats-backup-tls */5 * * * * True 0 2m12s 5m4s +``` + +#### Simulate Disaster + +Now, let's simulate a disaster scenario. Here, we are going to exec into the nats-box pod and delete the sample data we have inserted earlier. + +```bash +❯ kubectl exec -n demo sample-nats-tls-box-67fb4fb4f9-gtt9z -it -- sh -l +... +# Let's export the tls.crt and tls.key file paths as environment variables to make further commands re-usable. +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# export NATS_CERT=/tmp/tls.crt +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# export NATS_KEY=/tmp/tls.key + +# delete the stream "ORDERS" +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# nats stream rm ORDERS -f + +# verify that the stream has been deleted +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# nats stream ls +No Streams defined + +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# exit +``` + +#### Create RestoreSession + +To restore the streams, you have to create a `RestoreSession` object pointing to the `AppBinding` of the targeted NATS server. + +Here, is the YAML of the `RestoreSession` object that we are going to use for restoring the streams of the NATS server. + +```yaml +apiVersion: stash.appscode.com/v1beta1 +kind: RestoreSession +metadata: + name: sample-nats-restore-tls + namespace: demo +spec: + task: + name: nats-restore-2.6.1 + repository: + name: gcs-repo + target: + ref: + apiVersion: appcatalog.appscode.com/v1alpha1 + kind: AppBinding + name: sample-nats-tls + interimVolumeTemplate: + metadata: + name: nats-restore-tmp-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "standard" + resources: + requests: + storage: 1Gi + rules: + - snapshots: [latest] +``` + +Here, + +- `.spec.task.name` specifies the name of the Task object that specifies the necessary Functions and their execution order to restore NATS streams. 
+- `.spec.repository.name` specifies the Repository object that holds the backend information where our backed up data has been stored. +- `.spec.target.ref` refers to the AppBinding object that holds the connection information of our targeted NATS server. +- `.spec.interimVolumeTemplate` specifies a PVC template that will be used by Stash to hold the restored data temporarily before injecting into the NATS server. +- `.spec.rules` specifies that we are restoring data from the latest backup snapshot of the streams. + +Let's create the `RestoreSession` object object we have shown above, + +```bash +$ kubectl apply -f https://github.com/stashed/docs/raw/{{< param "info.version" >}}/docs/addons/nats/tls/examples/restoresession.yaml +restoresession.stash.appscode.com/sample-nats-restore-tls created +``` + +Once, you have created the `RestoreSession` object, Stash will create a restore Job. Run the following command to watch the phase of the `RestoreSession` object, + +```bash +❯ kubectl get restoresession -n demo -w +NAME REPOSITORY PHASE DURATION AGE +sample-nats-restore-tls gcs-repo Succeeded 15s 55s +``` + +The `Succeeded` phase means that the restore process has been completed successfully. + +#### Verify Restored Data + +Now, let's exec into the nats-box pod and verify whether data actual data has been restored or not, + +```bash +❯ kubectl exec -n demo sample-nats-tls-box-67fb4fb4f9-gtt9z -it -- sh -l +... +# Let's export the tls.crt and tls.key file paths as environment variables to make further commands re-usable. +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# export NATS_CERT=/tmp/tls.crt +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# export NATS_KEY=/tmp/tls.key + +# Verify that the stream has been restored successfully +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# nats str ls +Streams: + + ORDERS + +# Verify that the messages have been restored successfully +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# nats stream info ORDERS +Information for Stream ORDERS created 2021-09-27T08:23:58Z + +Configuration: + + Subjects: ORDERS.* + Acknowledgements: true + Retention: File - Limits + Replicas: 1 + Discard Policy: Old + Duplicate Window: 2m0s + Maximum Messages: unlimited + Maximum Bytes: unlimited + Maximum Age: 1y0d0h0m0s + Maximum Message Size: unlimited + Maximum Consumers: unlimited + + +Cluster Information: + + Name: nats + Leader: sample-nats-tls-2 + +State: + + Messages: 2 + Bytes: 98 B + FirstSeq: 1 @ 2021-09-27T06:29:18 UTC + LastSeq: 2 @ 2021-09-27T06:29:41 UTC + Active Consumers: 0 + +sample-nats-tls-box-67fb4fb4f9-gtt9z:~# exit +``` + +Hence, we can see from the above output that the deleted data has been restored successfully from the backup. + +#### Resume Backup + +Since our data has been restored successfully we can now resume our usual backup process. Resume the `BackupConfiguration` using following command, + +```bash +❯ kubectl patch backupconfiguration -n demo sample-nats-backup-tls --type="merge" --patch='{"spec": {"paused": false}}' +backupconfiguration.stash.appscode.com/sample-nats-backup-tls patched +``` + +Verify that the `BackupConfiguration` has been resumed, +```bash +❯ kubectl get backupconfiguration -n demo sample-nats-backup-tls +NAME TASK SCHEDULE PAUSED AGE +sample-nats-backup-tls nats-backup-2.6.1 */5 * * * * false 16m +``` + +Here, `false` in the `PAUSED` column means the backup has been resumed successfully. The CronJob also should be resumed now. 
+
+```bash
+❯ kubectl get cronjob -n demo
+NAME                                  SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
+stash-backup-sample-nats-backup-tls   */5 * * * *   False     0        23s             17m
+```
+
+Here, `False` in the `SUSPEND` column means the CronJob is no longer suspended and will trigger backups on the next schedule.
+
+## Cleanup
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete -n demo backupconfiguration sample-nats-backup-tls
+kubectl delete -n demo restoresession sample-nats-restore-tls
+kubectl delete -n demo repository gcs-repo
+# uninstall the NATS helm release
+helm delete sample-nats-tls -n demo
+```
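+
+If you also want to remove the cert-manager objects and the `demo` namespace created for this guide, something like the following should work (assuming everything else was created in the `demo` namespace as shown above):
+
+```bash
+# The ClusterIssuer is cluster-scoped, so it has to be deleted explicitly
+kubectl delete clusterissuer selfsigning
+# Deleting the namespace removes the remaining namespaced objects (Certificates, Issuer, Secrets, PVCs, etc.)
+kubectl delete ns demo
+```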