
Disk snapshots with XFS filesystem cannot be used #566

Closed
jsafrane opened this issue Nov 30, 2021 · 1 comment · Fixed by #570

Comments

jsafrane (Contributor) commented Nov 30, 2021

It is not possible to restore a snapshot of an XFS volume and use it on the same node as the original volume.

Steps to reproduce:

  1. Create a StorageClass with csi.storage.k8s.io/fstype: xfs.
  2. Create PVC A + Pod A using it, store some data on the provisioned volume.
  3. Stop Pod A.
  4. Take a snapshot of the PVC A, restore it into a new volume as PVC B.
  5. Run both Pod A (with PVC A) and Pod B (with PVC B) on the same node.
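Step 1 above can be sketched as a StorageClass manifest. This is an illustrative assumption, not taken from the issue: the object name is hypothetical, and the provisioner is assumed to be the Alibaba Cloud disk CSI plugin.

```yaml
# Sketch of step 1 (assumed names): a StorageClass requesting XFS
# via the csi.storage.k8s.io/fstype parameter.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-disk-xfs            # hypothetical name
provisioner: diskplugin.csi.alibabacloud.com  # assumed provisioner
parameters:
  csi.storage.k8s.io/fstype: xfs
reclaimPolicy: Delete
```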

Actual result: one of the pods can't be started:

MountVolume.MountDevice failed for volume "pvc-25a7129e-2cd7-4170-9050-550057ed0e20" : rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t xfs -o shared,defaults /dev/vde /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-25a7129e-2cd7-4170-9050-550057ed0e20/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-25a7129e-2cd7-4170-9050-550057ed0e20/globalmount: wrong fs type, bad option, bad superblock on /dev/vde, missing codepage or helper program, or other error.

Kernel says:

[17567.886004] XFS (vde): Filesystem has duplicate UUID b5820d11-d42d-4fc1-8e4a-ae431b33039d - can't mount

Expected result: both pods can run.

See container-storage-interface/spec#482 for details.

There is a simple fix: add the nouuid mount option to all XFS mounts so the duplicate UUID is ignored. Here is how we fixed it in the AWS EBS CSI driver: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/913/files#diff-4f4d9c2e0ec5c4d3b4ac2d79abd70ee21bacff65ba05c059396921996dbf6607R673

mowangdk (Contributor) commented:

ok, i see, i will fix it ASAP

mowangdk added a commit to mowangdk/alibaba-cloud-csi-driver that referenced this issue Dec 3, 2021
jsafrane pushed a commit to jsafrane/alibaba-cloud-csi-driver that referenced this issue Dec 7, 2021