It is not possible to restore a snapshot of an XFS volume and use it on the same node as the original volume.
Steps to reproduce:
1. Create a StorageClass with csi.storage.k8s.io/fstype: xfs.
2. Create PVC A and Pod A using it, and store some data on the provisioned volume.
3. Stop Pod A.
4. Take a snapshot of PVC A and restore it into a new volume as PVC B.
5. Run both Pod A (with PVC A) and Pod B (with PVC B) on the same node.
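The steps above can be sketched with two manifests: a StorageClass that requests XFS, and a PVC restored from a snapshot. This is a minimal illustration only; the resource names, the provisioner, and the snapshot name are hypothetical placeholders, not from this issue:

```yaml
# Hypothetical names; substitute your CSI driver and an existing VolumeSnapshot.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: xfs-sc
provisioner: example.csi.driver   # placeholder provisioner name
parameters:
  csi.storage.k8s.io/fstype: xfs
---
# PVC B, restored from a snapshot taken of PVC A.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-b
spec:
  storageClassName: xfs-sc
  dataSource:
    name: pvc-a-snapshot          # placeholder VolumeSnapshot name
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
```

Because the restored volume is a block-level copy, its XFS filesystem carries the same UUID as the original, which is what triggers the failure below.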
Actual result: one of the pods can't be started:
MountVolume.MountDevice failed for volume "pvc-25a7129e-2cd7-4170-9050-550057ed0e20" : rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t xfs -o shared,defaults /dev/vde /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-25a7129e-2cd7-4170-9050-550057ed0e20/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-25a7129e-2cd7-4170-9050-550057ed0e20/globalmount: wrong fs type, bad option, bad superblock on /dev/vde, missing codepage or helper program, or other error.
Kernel says:
[17567.886004] XFS (vde): Filesystem has duplicate UUID b5820d11-d42d-4fc1-8e4a-ae431b33039d - can't mount
Expected result: both pods can run.
See container-storage-interface/spec#482 for details.
There is a simple fix: add the nouuid mount option to all XFS mounts so the duplicate UUID is ignored. Here is how we fixed it in the AWS EBS CSI driver: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/913/files#diff-4f4d9c2e0ec5c4d3b4ac2d79abd70ee21bacff65ba05c059396921996dbf6607R673