Documentation updates
balaramesh authored Jun 21, 2021
1 parent 6a37931 commit bed7323
Showing 8 changed files with 57 additions and 55 deletions.

50 changes: 25 additions & 25 deletions docs/dag/kubernetes/backup_disaster_recovery.rst
All Kubernetes objects are stored in the cluster's etcd. Periodically backing up
data is important to recover Kubernetes clusters under disaster scenarios.

This example provides a sample workflow to create etcd snapshots on a Kubernetes cluster using
``etcdctl``.

etcdctl snapshot backup
-----------------------

The command ``etcdctl snapshot save`` enables us to take a point-in-time snapshot of the etcd cluster.

.. code-block:: console

   sudo docker run --rm -v /backup:/backup \
     --network host \
Before you initialize the Kubernetes cluster, copy all the necessary certificates.
Create the cluster with the ``--ignore-preflight-errors=DirAvailable--var-lib-etcd`` flag.
After the cluster comes up, make sure that the kube-system pods have started. Use the ``kubectl get crd``
command to verify that the custom resources created by Trident are present, and retrieve Trident objects
to make sure that all the data is available.
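
The restore sequence described above can be sketched as follows; the snapshot path and data directory are assumptions, not values from this guide:

.. code-block:: console

   # Restore the etcd data directory from a previously saved snapshot
   sudo etcdctl snapshot restore /backup/etcd-snapshot.db \
     --data-dir /var/lib/etcd

   # Initialize the cluster, tolerating the pre-populated etcd directory
   sudo kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd

   # Verify that the Trident custom resources are present
   kubectl get crd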


ONTAP Snapshots
===============

The following sections describe in general how ONTAP Snapshot technology can be used to take backups of a volume and how those snapshots can be restored. Snapshots play an important role by providing point-in-time recovery options for application data. However, snapshots are not backups in themselves; they will not protect against storage system failure or other catastrophes. They are, however, a convenient, quick, and easy way to recover data in most scenarios.

Using ONTAP snapshots with containers
-------------------------------------
The snapshot directory is hidden by default. This helps facilitate maximum compatibility.
Accessing the snapshot directory
--------------------------------

Enable the ``.snapshot`` directory when using the ``ontap-nas`` and ``ontap-nas-economy`` drivers to allow applications to recover data from snapshots directly.
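
As a sketch, the snapshot directory can be exposed from the ONTAP CLI with the ``-snapdir-access`` volume option; the SVM and volume names below are illustrative:

.. code-block:: console

   cluster1::> volume modify -vserver vs0 -volume trident_vol -snapdir-access true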

Restoring the snapshots
-----------------------

Restore a volume to a state recorded in a prior snapshot using the ``volume snapshot restore`` ONTAP CLI command. When you restore a Snapshot copy, the restore operation overwrites the existing volume configuration. Any changes made to the data in the volume after the Snapshot copy was created are lost.

.. code-block:: console

   cluster1::*> volume snapshot restore -vserver vs0 -volume vol3 -snapshot vol3_snap_archive

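
Before restoring, the available snapshots can be listed to pick a recovery point; for example:

.. code-block:: console

   cluster1::> volume snapshot show -vserver vs0 -volume vol3
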
Data replication using ONTAP
============================

Replicating data can play an important role in protecting against data loss due to storage array failure. Snapshots provide a very quick and easy point-in-time method of recovering data that has been corrupted or accidentally lost as a result of human or technological error. However, they cannot protect against catastrophic failure of the storage array itself.

ONTAP SnapMirror SVM Replication
--------------------------------

SnapMirror can be used to replicate a complete SVM, including its configuration settings and its volumes. In the event of a disaster, the SnapMirror destination SVM can be activated to start serving data, and operations can be switched back to the primary when the systems are restored.
Since Trident is unable to configure replication relationships itself, the storage administrator can use ONTAP's SnapMirror SVM Replication feature to automatically replicate volumes to a Disaster Recovery (DR) destination.

* A distinct backend should be created for each SVM which has SVM-DR enabled.

* Trident does not automatically detect SVM failures. Therefore, upon a failure, the administrator needs to run the ``tridentctl update backend`` command to trigger Trident's failover to the new backend.


ONTAP SnapMirror SVM Replication Setup
**************************************

* Set up peering between the Source and Destination Cluster and SVM.

(Figure: SnapMirror SVM Replication Setup)
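
As a hedged sketch (names are illustrative and exact options vary by ONTAP version), the SVM-DR relationship might be set up from the destination cluster as follows:

.. code-block:: console

   # Create a stopped DR SVM on the destination cluster
   dest::> vserver create -vserver svm_dr -subtype dp-destination

   # Create and initialize the SVM-level SnapMirror relationship
   dest::> snapmirror create -source-path svm_src: -destination-path svm_dr: -identity-preserve true
   dest::> snapmirror initialize -destination-path svm_dr: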

SnapMirror SVM Disaster Recovery Workflow for Trident
*****************************************************

The following steps describe how Trident and other containerized applications can resume functioning during a catastrophe using SnapMirror SVM replication.

**Disaster Recovery Workflow for Trident**

Trident v19.07 and beyond utilizes Kubernetes CRDs to store and manage its own state.

4. Now create a Kubernetes cluster with the ``kubeadm init`` command along with the ``--ignore-preflight-errors=DirAvailable--var-lib-etcd`` flag. Note that the hostnames used for the Kubernetes nodes must be the same as those in the source Kubernetes cluster.

5. Use the ``kubectl get crd`` command to verify that all the Trident custom resources have come up, and retrieve Trident objects to make sure that all the data is available.

6. Update all the required backends to reflect the new destination SVM name using the ``./tridentctl update backend <backend-name> -f <backend-json-file> -n <namespace>`` command.

When the destination SVM is activated, all the volumes provisioned by Trident start serving data. Once the Kubernetes cluster is set up on the destination side using the above procedure, all the deployments and pods are started and the containerized applications should run without any issues.
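
Activating the destination SVM and repointing Trident might look like the following sketch; all names are placeholders:

.. code-block:: console

   # Stop replication and activate the destination SVM
   dest::> snapmirror quiesce -destination-path svm_dr:
   dest::> snapmirror break -destination-path svm_dr:
   dest::> vserver start -vserver svm_dr

   # Update the Trident backend to point at the activated SVM
   ./tridentctl update backend <backend-name> -f <backend-json-file> -n <namespace>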

ONTAP SnapMirror Volume Replication
-----------------------------------

ONTAP SnapMirror Volume Replication is a disaster recovery feature which enables failover to destination storage from primary storage on a volume level. SnapMirror creates a volume replica or mirror of the primary storage on to the secondary storage by syncing snapshots.

ONTAP SnapMirror Volume Replication Setup
*****************************************

* The clusters in which the volumes reside and the SVMs that serve data from the volumes must be peered.

* Create a SnapMirror policy, which controls the behavior of the relationship and specifies the configuration attributes for that relationship.

* Create a SnapMirror relationship between the destination volume and the source volume using the ``snapmirror create`` command, and assign the appropriate SnapMirror policy.

* After the SnapMirror relationship is created, initialize the relationship so that a baseline transfer from the source volume to the destination volume will be completed.
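
The setup steps above might look like the following sketch from the destination cluster; the volume, SVM, and policy names are illustrative:

.. code-block:: console

   # Create the relationship with a policy, then run the baseline transfer
   dest::> snapmirror create -source-path svm_src:vol3 -destination-path svm_dst:vol3_dst -policy MirrorAllSnapshots
   dest::> snapmirror initialize -destination-path svm_dst:vol3_dst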


4. Now create a Kubernetes cluster with the ``kubeadm init`` command along with the ``--ignore-preflight-errors=DirAvailable--var-lib-etcd`` flag. Note that the hostnames must be the same as those in the source Kubernetes cluster.

5. Use the ``kubectl get crd`` command to verify that all the Trident custom resources have come up, and retrieve Trident objects to make sure that all the data is available.

6. Clean up the previous backends and create new backends on Trident. Specify the new management and data LIFs, the new SVM name, and the password of the destination SVM.

**Disaster Recovery Workflow for Application Persistent Volumes**

In this section, let us examine how SnapMirror destination volumes can be made available for containerized workloads in the event of a disaster.

1. Stop all the scheduled SnapMirror transfers and abort all ongoing SnapMirror transfers. Break the replication relationship between the destination and source volumes so that the destination volume becomes Read/Write. Clean up the deployments which were consuming PVCs bound to volumes on the source SVM.

2. Once the Kubernetes cluster is set up on the destination side using the procedure described above, clean up the deployments, PVCs, and PVs from the Kubernetes cluster.

3. Create new backends on Trident by specifying the new management and data LIFs, the new SVM name, and the password of the destination SVM.

4. Now import the required volumes as a PV bound to a new PVC using the Trident volume import feature.

5. Re-deploy the application deployments with the newly created PVCs.
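
Step 4 can be sketched with the Trident volume import feature; the backend, volume, and PVC file names are illustrative:

.. code-block:: console

   # Import an existing destination volume as a PV bound to a new PVC
   ./tridentctl import volume <backend-name> vol3_dst -f pvc.yaml -n trident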

Element Software snapshots
==========================

Back up data on an Element volume by setting a snapshot schedule for the volume, ensuring the snapshots are taken at the required intervals. Currently, it is not possible to set a snapshot schedule on a volume through the ``solidfire-san`` driver. Set it using the Element software web UI or the Element APIs.

In the event of data corruption, we can choose a particular snapshot and roll back the volume to that snapshot manually. This reverts any changes made to the volume since the snapshot was created.
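
As a hedged sketch, schedules can also be inspected through the Element JSON-RPC API; the endpoint version and credentials below are assumptions, so consult the Element API reference before use:

.. code-block:: console

   # List existing snapshot schedules on the cluster
   curl -k -u admin:password https://<mvip>/json-rpc/10.0 \
     -d '{"method": "ListSchedules", "params": {}}'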

The :ref:`Creating Snapshots of Persistent Volumes <On-Demand Volume Snapshots>` section details a complete workflow
for creating Volume Snapshots and then using them to create PVCs.
6 changes: 3 additions & 3 deletions docs/dag/kubernetes/integrating_trident.rst
Cloud Volumes ONTAP provides data control along with enterprise-class storage features.
``ontap-san-economy``. These are applicable for Cloud Volumes ONTAP for AWS, Cloud Volumes ONTAP for Azure, and Cloud Volumes ONTAP for GCP.


Element software (NetApp HCI/SolidFire)
---------------------------------------
The ``solidfire-san`` driver, used with the NetApp HCI/SolidFire platforms, helps the admin configure an Element backend for Trident on the basis of QoS limits. If you would like to design your backend to set specific QoS limits on the volumes provisioned by Trident, use the ``type`` parameter in the backend file. The admin can also restrict the volume size that can be created on the storage using the ``limitVolumeSize`` parameter. Currently, Element storage features like volume resize and volume replication are not supported through the ``solidfire-san`` driver. These operations should be done manually through the Element software web UI.

.. table:: SolidFire SAN driver capabilities

12 changes: 6 additions & 6 deletions docs/dag/kubernetes/netapp_products_integrations.rst
For more information about Trident, visit `ThePub <https://netapp.io/persistent-
ONTAP
-----

ONTAP is NetApp’s multiprotocol, unified storage operating system that provides advanced data management capabilities for any application. ONTAP systems may have all-flash, hybrid, or all-HDD configurations and offer many different deployment models, including engineered hardware (FAS and AFF), white-box (ONTAP Select), and cloud-only (Cloud Volumes ONTAP). Trident supports all of the above-mentioned ONTAP deployment models.

Cloud Volumes ONTAP
===================

`Cloud Volumes ONTAP <http://cloud.netapp.com/ontap-cloud?utm_source=GitHub&utm_campaign=Trident>`_ is a software-only storage appliance that runs the ONTAP data management software in the cloud. You can use Cloud Volumes ONTAP for production workloads, disaster recovery, DevOps, file shares, and database management. It extends enterprise storage to the cloud by offering storage efficiencies, high availability, data replication, data tiering, and application consistency.


Element Software
----------------

Element Software enables the storage administrator to consolidate workloads by guaranteeing performance and enabling a simplified and streamlined storage footprint. Coupled with an API that enables automation of all aspects of storage management, Element enables storage administrators to do more with less effort.

More information can be found `here <https://www.netapp.com/data-management/element-software/>`_.

NetApp HCI
==========
2 changes: 2 additions & 0 deletions docs/docker/install/ndvp_global_config.rst
.. _ndvp-global-config:

Global Configuration
====================

12 changes: 6 additions & 6 deletions docs/docker/install/ndvp_sf_config.rst
Element Software Configuration
==============================

In addition to the :ref:`global configuration values <ndvp-global-config>`, when using Element software (NetApp HCI/SolidFire), these options are available.

+-----------------------+-------------------------------------------------------------------------------+----------------------------+
| Option | Description | Example |
| ``LegacyNamePrefix`` | Prefix for upgraded Trident installs | "netappdvp-" |
+-----------------------+-------------------------------------------------------------------------------+----------------------------+

The ``solidfire-san`` driver does not support Docker Swarm.

**LegacyNamePrefix** If you used a version of Trident prior to 1.3.2 and perform an
upgrade with existing volumes, you'll need to set this value in order to access
your old volumes that were mapped via the ``volume-name`` method.

Example Element Software Config File
------------------------------------

.. code-block:: json
14 changes: 7 additions & 7 deletions docs/docker/use/backends/solidfire_options.rst
.. _sf_vol_opts:

Element Software Volume Options
===============================

The Element Software driver options expose the size and quality of service (QoS) policies associated with the volume. When the volume is created, the QoS policy associated with it is specified using the ``-o type=service_level`` nomenclature.

The first step to defining a QoS service level with the Element driver is to create at least one type and specify the minimum, maximum, and burst IOPS associated with a name in the configuration file.

**Example Configuration File with QoS Definitions**

In the above configuration, we have three policy definitions: *Bronze*, *Silver*, and *Gold*.
.. code-block:: console

   # create a 100GiB Bronze volume
   docker volume create -d solidfire --name sfBronze -o type=Bronze -o size=100G

Other Element Software Create Options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Volume create options for Element include:

* ``size`` - the size of the volume, defaults to 1GiB or config entry ``... "defaults": {"size": "5G"}``
* ``blocksize`` - use either ``512`` or ``4096``, defaults to 512 or config entry ``DefaultBlockSize``
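
These options can be combined on a single command line; for example (the volume name is illustrative):

.. code-block:: console

   # create a 5GiB volume with a 4096-byte block size
   docker volume create -d solidfire --name sfVol4k -o size=5G -o blocksize=4096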