
csi powerscale 2.4 related updates #313

Merged · 3 commits · Aug 17, 2022
8 changes: 4 additions & 4 deletions content/docs/csidriver/installation/helm/isilon.md
@@ -121,7 +121,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver

**Steps**
1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace isilon` to create a new one. The use of "isilon" as the namespace is just an example. You can choose any name for the namespace.
3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the *secret.yaml*.
4. Copy the *helm/csi-isilon/values.yaml* file into a new location with a name of your choice, say *my-isilon-settings.yaml*, to customize settings for installation.
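
A minimal shell sketch of steps 1-4 above, assuming the "isilon" example namespace and the repository layout referenced in this guide (the secret creation in step 3 is not shown, since it depends on your *secret.yaml* contents):

```
# Step 1: clone the driver repository at the v2.4.0 tag
git clone -b v2.4.0 https://github.com/dell/csi-powerscale.git
cd csi-powerscale

# Step 2: create the namespace for the driver ("isilon" is only an example name)
kubectl create namespace isilon

# Step 4: copy the default values file and customize it for your installation
cp helm/csi-isilon/values.yaml my-isilon-settings.yaml
# Edit my-isilon-settings.yaml with the PowerScale details collected in step 3
```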
@@ -267,7 +267,7 @@ The CSI driver for Dell PowerScale version 1.5 and later, `dell-csi-helm-install

### What happens to my existing storage classes?

*Upgrading from CSI PowerScale v2.2 driver*:
*Upgrading from CSI PowerScale v2.3 driver*:
The storage classes created as part of the installation have the annotation "helm.sh/resource-policy": keep set on them. This ensures that even after an uninstall or upgrade, the storage classes are not deleted. You can continue using these storage classes if you wish.
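
As a hedged illustration, the presence of that annotation on an existing storage class can be confirmed with kubectl (the storage class name below is hypothetical):

```
# Inspect annotations on an existing storage class (name is an example)
kubectl get storageclass isilon -o jsonpath='{.metadata.annotations}'
# Expect output that includes: "helm.sh/resource-policy":"keep"
```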

*NOTE*:
@@ -289,9 +289,9 @@ Starting CSI PowerScale v1.6, `dell-csi-helm-installer` will not create any Volu

### What happens to my existing Volume Snapshot Classes?

*Upgrading from CSI PowerScale v2.2 driver*:
*Upgrading from CSI PowerScale v2.3 driver*:
The existing volume snapshot class will be retained.

*Upgrading from an older version of the driver*:
It is strongly recommended to upgrade the earlier versions of CSI PowerScale to 1.6 or higher before upgrading to 2.2.
It is strongly recommended to upgrade earlier versions of CSI PowerScale to 1.6 or higher before upgrading to 2.3.
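
A quick, hedged way to see which volume snapshot classes are present (and therefore retained) after an upgrade, assuming the external-snapshotter CRDs are installed:

```
# List existing VolumeSnapshotClass objects; previously created classes are retained
kubectl get volumesnapshotclass
```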

8 changes: 2 additions & 6 deletions content/docs/csidriver/release/powerscale.md
@@ -3,15 +3,11 @@ title: PowerScale
description: Release notes for PowerScale CSI driver
---

## Release Notes - CSI Driver for PowerScale v2.3.0
## Release Notes - CSI Driver for PowerScale v2.4.0

### New Features/Changes

- Removed beta volumesnapshotclass sample files.
- Added support for Kubernetes 1.24.
- Added support to increase volume path limit.
- Added support for OpenShift 4.10.
- Added support for CSM Resiliency sidecar via Helm.
- Added support to add client only to root clients when RO volume is created from snapshot and RootClientEnabled is set to true.

### Fixed Issues

4 changes: 2 additions & 2 deletions content/docs/csidriver/upgradation/drivers/isilon.md
@@ -8,12 +8,12 @@ Description: Upgrade PowerScale CSI driver
---
You can upgrade the CSI Driver for Dell PowerScale using Helm or Dell CSI Operator.

## Upgrade Driver from version 2.2.0 to 2.3.0 using Helm
## Upgrade Driver from version 2.3.0 to 2.4.0 using Helm

**Note:** While upgrading the driver via Helm, the controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
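
As a hedged sketch of that constraint, you can count schedulable worker nodes first and then set controllerCount in _my-isilon-settings.yaml_ to at most that number minus one (the label selector below assumes control-plane nodes carry the standard role label):

```
# Count worker nodes (assumes control-plane nodes are labeled node-role.kubernetes.io/control-plane)
kubectl get nodes --no-headers -l '!node-role.kubernetes.io/control-plane' | wc -l
# If this prints 3, controllerCount should be at most 2
```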

**Steps**
1. Clone the repository using `git clone -b v2.3.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements.
1. Clone the repository using `git clone -b v2.4.0 https://github.com/dell/csi-powerscale.git`, then copy the helm/csi-isilon/values.yaml into a new location with a custom name, say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per your requirements.
2. Change to the directory dell-csi-helm-installer to install the Dell PowerScale driver: `cd dell-csi-helm-installer`
3. Upgrade the CSI Driver for Dell PowerScale using the following command:

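The actual upgrade command is collapsed in this diff view and is not reproduced here. Purely as a hedged sketch of the overall flow, a dell-csi-helm-installer upgrade is typically invoked along these lines (the script name, flags, namespace, and values-file path are assumptions, not the collapsed content):

```
# Assumed invocation; verify against the full documentation before use
cd dell-csi-helm-installer
./csi-install.sh --namespace isilon --values ../my-isilon-settings.yaml --upgrade
```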
8 changes: 7 additions & 1 deletion content/docs/resiliency/_index.md
@@ -144,7 +144,13 @@ pmtu3 podmontest-0 1/1 Running 0 3m6s
...
```

CSM for Resiliency may also generate events if it is unable to cleanup a pod for some reason. For example, it may not clean up a pod because the pod is still doing I/O to the array.
CSM for Resiliency may also generate events if it is unable to clean up a pod for some reason. For example, it may not clean up a pod because the pod is still doing I/O to the array.

Similarly, the label selectors for csi-powerscale and csi-unity would be as shown below, respectively:
```
labelSelector: {map[podmon.dellemc.com/driver:csi-isilon]
labelSelector: {map[podmon.dellemc.com/driver:csi-unity]
```
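
A hedged example of querying pods by those label selectors from the command line (the label key comes from the output above; pod names and namespaces will differ per cluster):

```
# List pods protected by CSM for Resiliency for the PowerScale (csi-isilon) driver
kubectl get pods -A -l podmon.dellemc.com/driver=csi-isilon

# Same query for the Unity driver
kubectl get pods -A -l podmon.dellemc.com/driver=csi-unity
```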

#### Important
Before putting an application that relies on CSM for Resiliency monitoring into production, it is important to do a few test failovers first. To do this, take the node that is running the pod offline for at least 2-3 minutes. Verify that an event message similar to the one above is logged, and that the pod recovers and restarts normally with no loss of data. (Note that if the node is running many CSM for Resiliency-protected pods, the node may need to be down longer for CSM for Resiliency to have time to evacuate all the protected pods.)
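
A minimal sketch of such a test failover, assuming the node can be powered off or disconnected out-of-band (the pod and namespace names reuse the example output above and are placeholders):

```
# Find the node currently running the protected pod
kubectl get pod podmontest-0 -n pmtu3 -o wide

# Take that node offline out-of-band (power off or disconnect its network)
# for at least 2-3 minutes, then watch for CSM for Resiliency events and pod recovery
kubectl get events -n pmtu3 --watch
kubectl get pods -n pmtu3 -o wide --watch
```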