[BUG]: Powerscale CSI driver RO PVC-from-snapshot wrong zone #487
Comments
@danthem This is the expected behavior, as on Isilon a snapshot is always created under the System access zone.
Snapshots are kind of outside the access zones in a way: their path can be accessed both through the System zone (e.g. via /ifs/.snapshot/...) and through the .snapshot directory under the original path in its access zone. It is fully possible to create an NFS export to a snapshot that was created under a particular access zone path and then access that NFS export through that access zone (without going through the System zone), as I demonstrated when opening this issue. Currently the CSI driver creates the NFS export in the System zone with a System zone path, but it does not have to do that. If the CSI driver instead created the export with a path under the access zone, the snapshot could be mounted directly through the 'csizone' access zone IP.
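For illustration, roughly what this looks like from the OneFS CLI; the zone IP and the path placeholders below are hypothetical, and exact flags may vary by OneFS version:

# Create the export against the access-zone view of the snapshot, not the System-zone path
$ isi nfs exports create /ifs/<AZ>/<path-to-original-pvc>/.snapshot/<snapshot-name> --zone=csizone

# From an NFS client, mount through the access-zone IP (192.168.111.30 is a placeholder)
$ mount -t nfs 192.168.111.30:/ifs/<AZ>/<path-to-original-pvc>/.snapshot/<snapshot-name> /mnt/ro-snap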
I tried creating a snapshot of a PVC on Isilon. I could see the PVC under /ifs/NFS/integration/k8s-611993c2ac. Are there any extra parameters that need to be used while creating the snapshot? Below are the details:
As you mentioned, a specific snapshot directory is getting created in the specific access zone. I could confirm that.
@rajkumar-palani Please help update and provide clarification.
@bharathsreekanth - we have already created an internal JIRA ticket to address this issue in Q1.
@rajkumar-palani @bharathsreekanth Have any updates been made here?
@rajkumar-palani Please close if the issue has been resolved in Q1.
The bug is fixed in CSM 1.8.
Bug Description
In an environment where the storageclass Access Zone/AzServiceIP is different from the System zone (which is used as the API endpoint), a Read-Only PVC created from a snapshot gets its NFS export on PowerScale created in the wrong zone (System). As a result, pods using this PVC fail to start because they are unable to mount the NFS export: the pods try to mount via the correct AzServiceIP, but the export was not created in that zone.
Creating a Read-Write PVC from a snapshot works: the PVC is correctly created in my 'csizone'. However, since it is RW, the driver must create a new path and then copy the data from the snapshot into it. Depending on the use case for the RO PVC, this can be inefficient and take a long time. A RO PVC could point directly at the snapshot (the snapshot itself is read-only), so no data copy would be needed and the PVC could be ready immediately.
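For reference, a minimal sketch of the kind of RO-from-snapshot PVC used here; the names ro-pvc-from-snap, snapshot-of-original and the storageclass name isilon are illustrative placeholders, not the exact contents of the attached yaml files:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ro-pvc-from-snap
spec:
  storageClassName: isilon
  accessModes:
    - ReadOnlyMany        # read-only PVC, so no data copy should be required
  resources:
    requests:
      storage: 1Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: snapshot-of-original
EOF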
Logs
The pod that fails to deploy indicates that its mount request is being rejected. This is because it is trying to access, via an IP in my CSI Access Zone, an NFS export that was (incorrectly) created in the System zone.
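A quick way to surface this failure from kubectl (the pod name nginx-ro-snap is a hypothetical placeholder for the pod using the RO PVC):

# Show the mount-failure events for the failing pod
$ kubectl describe pod nginx-ro-snap

# Or filter the namespace events for that pod
$ kubectl get events --field-selector involvedObject.name=nginx-ro-snap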
Screenshots
No response
Additional Environment Information
<see 'steps to reproduce'>
Steps to Reproduce
I have included my storageclass, original PVC, original pod, snapshotclass, volumsnap, rw_pvc_from_snap, and ro_pvc_from_snap .yaml files, as well as two additional pod yamls: one that creates an nginx instance from the RW PVC from snapshot and one that creates an nginx instance from the RO PVC from snapshot.
Find them below:
RO_snap.tar.gz
And my values.yaml:
values.yaml.txt
As you can see, I use the System zone for the API endpoint, but I have set 'csizone' and an IP in that zone for the storageclass.
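For context, the relevant part of such a storageclass looks roughly like this; the provisioner name and the AccessZone/AzServiceIP/IsiPath values below are illustrative placeholders rather than a copy of the attached files:

$ kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: isilon
provisioner: csi-isilon.dellemc.com
parameters:
  AccessZone: csizone           # access zone the exports should be created in
  AzServiceIP: 192.168.111.30   # zone IP that pods use to mount (hypothetical)
  IsiPath: /ifs/csi_zone        # base path for volumes (hypothetical)
EOF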
The whole setup can quickly be done by extracting them to a directory and then running:
$ for file in {1..9}_*.yaml; do kubectl create -f "$file"; sleep 1; done
What you will see is:
and from kubectl we can see that the last pod fails to start:
If we describe the pod we can see the problem:
We're getting access denied when trying to mount. So let's look at PowerScale for that export:
I only have two NFS exports created by the CSI driver in this zone: the first one is the original PVC and the second one is the RW PVC from snapshot. So where is my RO PVC from snapshot? Let's look in the System zone:
So my new RO PVC from snapshot was created in the System zone. This is why my new pod is unable to access it: the pod is trying to mount via an IP in the csizone.
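The per-zone checks above can be reproduced on the cluster roughly like this; exact output and flags may vary by OneFS version:

# Exports in the storageclass access zone (original PVC + RW PVC from snapshot)
$ isi nfs exports list --zone=csizone

# Exports in the System zone, where the RO export actually ended up
$ isi nfs exports list --zone=System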
Expected Behavior
The expected behavior is for the RO PVC to create an NFS export on PowerScale within the correct access zone. This is done by creating the export under
/ifs/<AZ>/<path-to-original-pvc>/.snapshot/<snapshot name>
and not under /ifs/.snapshot/~~~.
For my example above, the NFS export should have been created on the path
/ifs/csi_zone/k8s-2c5069e508/.snapshot/snapshot-078ad18e-cac2-4711-a011-ba8b4e6947e3
and with the parameter --zone=csizone.
This export is created in the correct zone, which means it will be possible to mount it via the IPs in my csizone. For example:
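A mount along these lines; the client mountpoint and the zone IP are placeholders, while the export path is the one from the example above:

# Mount the RO snapshot export directly through a csizone IP
$ mkdir -p /mnt/ro-snap
$ mount -t nfs 192.168.111.30:/ifs/csi_zone/k8s-2c5069e508/.snapshot/snapshot-078ad18e-cac2-4711-a011-ba8b4e6947e3 /mnt/ro-snap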
CSM Driver(s)
CSI Driver for PowerScale 2.4.0
Installation Type
Helm
Container Storage Modules Enabled
registry.k8s.io/sig-storage/snapshot-controller:v6.0.1
Container Orchestrator
Kubernetes v1.24.3 / minikube 1.26.1
Operating System
RHEL 8.6