Problems mounting XFS volume clones / restored snapshots #482
This sounds like a plug-in issue, not a CO issue.
On Wed, May 26, 2021, 6:24 AM Jan Šafránek wrote:

> XFS does not allow mounting two volumes that have the same UUID on the same machine. The second mount fails with:
>
>     [44557.612032] XFS (vde): Filesystem has duplicate UUID fadf19ab-bbcc-4f40-8d4f-44550e822db1 - can't mount
>
> This is problematic when using a cloned volume or a restored snapshot: the original volume and the new volume cannot be mounted on the same compute node.
>
> At what level should the issue be solved?
>
> - The CSI spec could state that it is the CSI plugin's problem to make sure cloned / restored volumes are usable on the same node as the original volume (e.g. by using the -o nouuid mount option for XFS volumes, or by running xfs_admin -U generate to regenerate the UUID on the first mount after a volume restore / clone).
> - The CSI spec could state that it is the CO's problem to pass e.g. the -o nouuid mount option to all XFS NodeStage/NodePublish calls.
>
> In both cases, someone must check that XFS is used and know that it needs special handling.
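For illustration, here is a minimal Go sketch of the first option above, where the node plugin itself appends "nouuid" when staging an XFS volume. The function name stageVolume and the use of the mount(8) CLI are assumptions made for this sketch, not part of the CSI spec or of any particular driver.

```go
// Hedged sketch: a node-plugin helper that tolerates duplicate XFS UUIDs by
// mounting with -o nouuid. Illustrative only; real drivers typically go
// through a mount library rather than shelling out.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stageVolume mounts device at stagingPath. For XFS it appends "nouuid" so a
// cloned or restored volume can be mounted next to its origin on one node.
func stageVolume(device, stagingPath, fsType string, opts []string) error {
	if fsType == "xfs" {
		opts = append(opts, "nouuid") // tolerate the duplicate UUID
	}
	args := []string{"-t", fsType}
	if len(opts) > 0 {
		args = append(args, "-o", strings.Join(opts, ","))
	}
	args = append(args, device, stagingPath)
	if out, err := exec.Command("mount", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("mount %s: %v: %s", device, err, out)
	}
	return nil
}

func main() {
	// Example: stage a restored snapshot whose UUID duplicates the original's.
	if err := stageVolume("/dev/vde", "/mnt/clone", "xfs", nil); err != nil {
		fmt.Println(err)
	}
}
```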
Agreed with @jdef. We discussed this at the community meeting today; see the meeting notes for the conclusion.
Ceph CSI was hit by this issue at clone time; see ceph/ceph-csi#966 (comment).
I'm experimenting with a Kubernetes e2e test and an AWS EBS CSI driver fix in kubernetes-sigs/aws-ebs-csi-driver#913.
Turned into a real e2e test in Kubernetes: kubernetes/kubernetes#102538.
As mentioned in a CSI issue [1], XFS does not allow mounting two volumes that have the same UUID on the same machine. This is problematic when using a cloned volume or a restored snapshot: the original volume and the new volume cannot be mounted on the same compute node. This patch fixes the issue by mounting XFS volumes with the "nouuid" option, since regenerating the UUID requires mounting the volume on the controller; with this change the latest e2e tests [2] pass.

[1]: container-storage-interface/spec#482
[2]: kubernetes/kubernetes#102538
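For contrast, here is a hedged sketch of the alternative the commit message rules out: regenerating the clone's UUID with xfs_admin -U generate before its first mount. The helper name regenerateXFSUUID is an assumption for this sketch; xfs_admin is a real xfsprogs tool, but the surrounding workflow is illustrative only.

```go
// Hypothetical alternative: give a cloned XFS volume a fresh UUID so it no
// longer collides with the original. Assumes xfs_admin (xfsprogs) is
// installed and the filesystem is not mounted.
package main

import (
	"fmt"
	"os/exec"
)

// regenerateXFSUUID assigns a new random UUID to the filesystem on device.
// xfs_admin refuses to run on a filesystem with a dirty log, so a
// restore/clone workflow would have to mount and unmount the volume once
// first -- the cost the patch above avoids by using the nouuid mount option.
func regenerateXFSUUID(device string) error {
	out, err := exec.Command("xfs_admin", "-U", "generate", device).CombinedOutput()
	if err != nil {
		return fmt.Errorf("xfs_admin -U generate %s: %v: %s", device, err, out)
	}
	return nil
}

func main() {
	if err := regenerateXFSUUID("/dev/vde"); err != nil {
		fmt.Println(err)
	}
}
```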