RBD: RWX volumes are not getting attached to multiple pods #3604

Closed
Madhu-1 opened this issue Jan 11, 2023 · 1 comment · Fixed by #3605
Labels
bug (Something isn't working), component/rbd (Issues related to RBD)

Comments

@Madhu-1
Collaborator

Madhu-1 commented Jan 11, 2023

[🎩︎]mrajanna@fedora rbd $]kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
raw-block-pvc   Bound    pvc-391a180d-3ad9-44ad-b588-381e269324b7   1Gi        RWX            rook-ceph-block   2m45s
[🎩︎]mrajanna@fedora rbd $]kubectl get po
NAME                          READY   STATUS              RESTARTS   AGE
pod-with-raw-block-volume-1   1/1     Running             0          2m40s
pod-with-raw-block-volume-2   0/1     ContainerCreating   0          111s
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Normal   SuccessfulAttachVolume  2m1s               attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-391a180d-3ad9-44ad-b588-381e269324b7"
  Warning  FailedMapVolume         17s (x2 over 67s)  kubelet                  MapVolume.SetUpDevice failed for volume "pvc-391a180d-3ad9-44ad-b588-381e269324b7" : rpc error: code = Internal desc = rbd image replicapool/csi-vol-657f4f6d-216c-4ce3-8180-ccc4980b112b is still being used
I0111 11:59:43.933307    7004 utils.go:195] ID: 8 Req-ID: 0001-0009-rook-ceph-0000000000000002-657f4f6d-216c-4ce3-8180-ccc4980b112b GRPC call: /csi.v1.Node/NodeStageVolume
I0111 11:59:43.933388    7004 utils.go:206] ID: 8 Req-ID: 0001-0009-rook-ceph-0000000000000002-657f4f6d-216c-4ce3-8180-ccc4980b112b GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-391a180d-3ad9-44ad-b588-381e269324b7","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":5}},"volume_context":{"clusterID":"rook-ceph","imageFeatures":"layering","imageFormat":"2","imageName":"csi-vol-657f4f6d-216c-4ce3-8180-ccc4980b112b","journalPool":"replicapool","mounter":"rbd-nbd","pool":"replicapool","storage.kubernetes.io/csiProvisionerIdentity":"1673438245124-8081-rook-ceph.rbd.csi.ceph.com"},"volume_id":"0001-0009-rook-ceph-0000000000000002-657f4f6d-216c-4ce3-8180-ccc4980b112b"}
I0111 11:59:43.934215    7004 omap.go:88] ID: 8 Req-ID: 0001-0009-rook-ceph-0000000000000002-657f4f6d-216c-4ce3-8180-ccc4980b112b got omap values: (pool="replicapool", namespace="", name="csi.volume.657f4f6d-216c-4ce3-8180-ccc4980b112b"): map[csi.imageid:10f1e1dabd65 csi.imagename:csi-vol-657f4f6d-216c-4ce3-8180-ccc4980b112b csi.volname:pvc-391a180d-3ad9-44ad-b588-381e269324b7 csi.volume.owner:default]
I0111 11:59:43.965784    7004 rbd_util.go:352] ID: 8 Req-ID: 0001-0009-rook-ceph-0000000000000002-657f4f6d-216c-4ce3-8180-ccc4980b112b checking for ImageFeatures: [layering]
I0111 11:59:44.119596    7004 cephcmds.go:105] ID: 8 Req-ID: 0001-0009-rook-ceph-0000000000000002-657f4f6d-216c-4ce3-8180-ccc4980b112b command succeeded: rbd [device list --format=json --device-type nbd]
E0111 12:00:33.595680    7004 utils.go:210] ID: 8 Req-ID: 0001-0009-rook-ceph-0000000000000002-657f4f6d-216c-4ce3-8180-ccc4980b112b GRPC error: rpc error: code = Internal desc = rbd image replicapool/csi-vol-657f4f6d-216c-4ce3-8180-ccc4980b112b is still being used

Tested with the canary cephcsi image.
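
For reference: in the NodeStageVolume request above, `"access_mode":{"mode":5}` is `MULTI_NODE_MULTI_WRITER` in the CSI spec, which is what a block-mode RWX PVC maps to. A minimal sketch of the decision the node plugin has to make here (hypothetical helper, not the actual ceph-csi code): the "image is still being used" watcher check must be skipped when the requested capability allows multi-node access.

```go
// Sketch only: decide whether an "image in use" check should be skipped
// based on the CSI access mode. allowsMultiNodeAccess is a hypothetical
// helper, not part of ceph-csi.
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// allowsMultiNodeAccess reports whether the requested volume capability
// permits mapping the same volume on more than one node (e.g. block RWX).
func allowsMultiNodeAccess(volCap *csi.VolumeCapability) bool {
	if volCap == nil || volCap.GetAccessMode() == nil {
		return false
	}
	switch volCap.GetAccessMode().GetMode() {
	case csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER,
		csi.VolumeCapability_AccessMode_MULTI_NODE_SINGLE_WRITER,
		csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY:
		return true
	default:
		return false
	}
}

func main() {
	// mode 5 in the logged GRPC request == MULTI_NODE_MULTI_WRITER.
	volCap := &csi.VolumeCapability{
		AccessType: &csi.VolumeCapability_Block{Block: &csi.VolumeCapability_BlockVolume{}},
		AccessMode: &csi.VolumeCapability_AccessMode{
			Mode: csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER,
		},
	}
	// When this is true, the in-use (watcher) check must not reject the map.
	fmt.Println("skip in-use check:", allowsMultiNodeAccess(volCap))
}
```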

@Madhu-1
Collaborator Author

Madhu-1 commented Jan 11, 2023

JFYI, with the v3.7.2 image it works without any issue:

[🎩︎]mrajanna@fedora rbd $]kubectl get po
NAME                          READY   STATUS    RESTARTS   AGE
pod-with-raw-block-volume-1   1/1     Running   0          4m25s
pod-with-raw-block-volume-2   1/1     Running   0          3m36s

Madhu-1 added the bug and component/rbd labels on Jan 11, 2023
Madhu-1 added a commit to Madhu-1/ceph-csi that referenced this issue Jan 11, 2023
set disableInUseChecks on rbd volume struct
as it will be used later to check whether
the rbd image is allowed to mount on multiple
nodes.

fixes: ceph#3604

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
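
The commit above carries the multi-node decision on the rbd volume struct via `disableInUseChecks`, so the later watcher check can be bypassed for RWX volumes. A rough sketch of that idea (hypothetical types and names, not the real ceph-csi code):

```go
// Sketch of the idea behind the fix: set a flag on the volume struct from
// the CSI access mode, and consult it before the watcher-based in-use check.
package main

import (
	"errors"
	"fmt"
)

type rbdVolume struct {
	Pool               string
	RbdImageName       string
	DisableInUseChecks bool // set from the CSI access mode during NodeStageVolume
}

// errStillInUse mirrors the error seen in the logs above.
var errStillInUse = errors.New("rbd image is still being used")

// checkImageMappable returns nil when the image may be mapped on this node.
// hasWatchers stands in for the RBD watcher query.
func checkImageMappable(vol *rbdVolume, hasWatchers func(*rbdVolume) (bool, error)) error {
	if vol.DisableInUseChecks {
		// Multi-node access was requested: other nodes holding the image is expected.
		return nil
	}
	watched, err := hasWatchers(vol)
	if err != nil {
		return err
	}
	if watched {
		return fmt.Errorf("%s/%s: %w", vol.Pool, vol.RbdImageName, errStillInUse)
	}
	return nil
}

func main() {
	vol := &rbdVolume{
		Pool:               "replicapool",
		RbdImageName:       "csi-vol-657f4f6d-216c-4ce3-8180-ccc4980b112b",
		DisableInUseChecks: true,
	}
	// With the flag set, the watcher result is irrelevant and the map proceeds.
	fmt.Println(checkImageMappable(vol, func(*rbdVolume) (bool, error) { return true, nil }))
}
```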
mergify bot closed this as completed in #3605 on Jan 11, 2023
mergify bot pushed a commit that referenced this issue Jan 11, 2023