
Update release notes unity #1191

Merged: 6 commits into release-1.11.0 on Jul 23, 2024
Conversation

shanmydell
Collaborator

Description

A few sentences describing the overall goals of the pull request's commits.

GitHub Issues

List the GitHub issues impacted by this PR:

GitHub Issue #
dell/csm#1386

Checklist:

  • Have you run grammar and spell checks against your submission?
  • Have you tested the changes locally?
  • Have you tested whether the hyperlinks are working properly?
  • Did you add examples wherever applicable?
  • Have you added high-resolution images?


github-actions bot commented Jul 22, 2024

Test Results

76 tests: 76 ✅, 0 💤, 0 ❌ (3 suites, 1 file, 3s ⏱️)

Results for commit 58cca45.

♻️ This comment has been updated with latest results.

@@ -36,7 +36,7 @@ description: Release notes for Unity XT CSI driver
| A CSI ephemeral pod may not get created in OpenShift 4.13 and fail with the error `"error when creating pod: the pod uses an inline volume provided by CSIDriver csi-unity.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged."` | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission. Therefore, an additional label `security.openshift.io/csi-ephemeral-volume-profile` in [csidriver.yaml](https://github.com/dell/helm-charts/blob/csi-unity-2.8.0/charts/csi-unity/templates/csidriver.yaml) file with the required security profile value should be provided. Follow [OpenShift 4.13 documentation for CSI Ephemeral Volumes](https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html) for more information. |
| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
| fsGroupPolicy may not work as expected without root privileges for NFS only [https://github.com/kubernetes/examples/issues/260](https://github.com/kubernetes/examples/issues/260) | To get the desired behavior set “RootClientEnabled” = “true” in the storage class parameter |
- | Controller publish is taking too long to complete/ Health monitoring is causing Unity array to panic by opening multiple sessions | Disable Volume health monitoring on the node and keep it only at the controller level.|
+ | Controller publish is taking too long to complete/ Health monitoring is causing Unity array to panic by opening multiple sessions | Disable Volume health monitoring on the node and keep it only at the controller level. Refer [here](https://dell.github.io/csm-docs/docs/csidriver/features/unity/#volume-health-monitoring) for more information about enabling/disabling volume health monitoring|
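The first known-issue row in the hunk above notes that OpenShift 4.13's CSI Volume Admission plugin requires a `security.openshift.io/csi-ephemeral-volume-profile` label on the CSIDriver object before it will admit pods using CSI ephemeral (inline) volumes. As a minimal, hedged sketch (not the actual helm-charts template; the appropriate profile value depends on the namespace's pod security enforcement level), the label would sit in csidriver.yaml roughly like this:

```yaml
# Illustrative excerpt only; fields other than the label are sketched
# from the CSIDriver API, not copied from dell/helm-charts.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi-unity.dellemc.com
  labels:
    # Required by the OpenShift 4.13 CSI Volume Admission plugin for
    # inline (ephemeral) volumes; "privileged" is one accepted value.
    security.openshift.io/csi-ephemeral-volume-profile: privileged
spec:
  podInfoOnMount: true
  volumeLifecycleModes:
    - Persistent
    - Ephemeral
```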
@gallacher (Contributor) commented Jul 22, 2024


"Disable Volume health..." - "Volume" should be "volume". What other information can be provided besides "Controller publish is taking too long"? What about things that would be observed in the logs? Under what circumstances would this be seen?

shanmydell (Collaborator, Author) replied:

@gallacher: Health monitoring causes the Unity array to panic by opening multiple sessions; that is the circumstance under which this issue is seen, and it has been added to the note.

Contributor

@shanmydell - what errors are seen in the logs? What's observed in k8s? Please provide more details.

Contributor

Thanks @shanmydell for documenting the error we are receiving in the k8s logs, its cause, and the workaround to circumvent the issue.

@chimanjain (Contributor) left a comment:

LGTM

@chimanjain chimanjain force-pushed the update_release_notes_unity branch from a47e9d5 to 58cca45 on July 23, 2024 08:08
@chimanjain chimanjain merged commit 9fc6a96 into release-1.11.0 Jul 23, 2024
7 checks passed
@chimanjain chimanjain deleted the update_release_notes_unity branch July 23, 2024 12:01
5 participants