Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Our plan was to test the latest driver's xfs capabilities.
Using the v2.3.0-rc.3 release, we were able to dynamically provision a CSI xfs volume.
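For context, the dynamically provisioned xfs volume came from a StorageClass/PVC along these lines. This is only a sketch; the names, size, and the `csi.storage.k8s.io/fstype` parameter shown here are illustrative rather than our exact manifests:

```yaml
# Sketch: StorageClass requesting xfs from the vSphere CSI driver (names illustrative)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-csi-xfs
provisioner: csi.vsphere.vmware.com
parameters:
  csi.storage.k8s.io/fstype: xfs
---
# Sketch: PVC bound to that StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere-csi-xfs
  resources:
    requests:
      storage: 5Gi
```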
Using the vslm global object manager we can take a snapshot using the CreateSnapshot method.
We are also able to create a volume from this snapshot using the CreateDiskFromSnapshot method.
The issues arise when we try to mount the newly created volume. We statically create a PV, a PVC and a Pod to use this new volume (a sketch of these manifests is included below the error output). However, the Pod is stuck in the ContainerCreating state with the following error:
MountVolume.MountDevice failed for volume "<pvname>" : rpc error: code = Internal desc = error in formating and mounting volume. Parameters: {20d6625e-dac2-4952-9ef4-dcce53a0c0d0 xfs /var/vcap/data/kubelet/plugins/kubernetes.io/csi/pv/<pvname>/globalmount [] false} err: mount failed: exit status 32
mount: /var/vcap/data/kubelet/plugins/kubernetes.io/csi/pv/kio-d08227bdeb2a11ebba85deec1d624672-0/globalmount: wrong fs type, bad option, bad superblock on /dev/sdm, missing codepage or helper program, or other error.
The interesting thing is that this error isn't consistent: on some nodes the volume mounts successfully, on others it doesn't.
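For reference, the static provisioning manifests were shaped roughly like the following sketch. The names and size are placeholders, and the `volumeHandle` is the FCD ID of the disk returned by CreateDiskFromSnapshot (the value shown is the ID that appears in the error above):

```yaml
# Sketch: statically provisioned PV pointing at the copied FCD (values are placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-xfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.vsphere.vmware.com
    fsType: xfs
    volumeHandle: "20d6625e-dac2-4952-9ef4-dcce53a0c0d0"  # FCD ID of the disk created from the snapshot
---
# Sketch: PVC pre-bound to the PV above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-xfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: static-xfs-pv
  resources:
    requests:
      storage: 5Gi
```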
What you expected to happen:
The expectation is that this mount succeeds every time; there is no discernible difference between the nodes. The error message is also vague.
How to reproduce it (as minimally and precisely as possible):
As mentioned above, we use the v2.3.0-rc.3 release of the driver and syncer on the controller, and of the driver running on the nodes. Using the vslm GOM we do a CreateSnapshot and a CreateDiskFromSnapshot to get a copy of the volume. Then we attempt to use this copied volume for static provisioning, e.g. with a Pod like the sketch below.
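A minimal Pod consuming the statically created PVC could look like this (image, names, and mount path are arbitrary choices for the sketch, not our exact workload):

```yaml
# Sketch: Pod mounting the statically provisioned xfs PVC
apiVersion: v1
kind: Pod
metadata:
  name: static-xfs-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: static-xfs-pvc
```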
Anything else we need to know?:
Environment:
Kernel (e.g. `uname -a`): 4.19.150-1.ph3