rook-agent fails to unmount PV because the PV is not found #1415

Closed
jbw976 opened this issue Jan 17, 2018 · 4 comments

jbw976 commented Jan 17, 2018

Is this a bug report or feature request?

  • Bug Report

What happened:
When deleting a postgres instance that uses a Rook block PVC with kubectl delete -f postgres.yaml, the postgres pod gets stuck in the Terminating state.

The operator fails to delete the rbd block image, logging image has watchers - not removing.

The agent fails to unmount/unmap the rbd device, logging failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found. The agent needs to look up information about the device/image from the PV, but the PV object has already been deleted from the k8s API.

There may be a race where the PV gets deleted before the agent is able to look up information about it:

pv, err := c.context.Clientset.CoreV1().PersistentVolumes().Get(attachOptions.VolumeName, metav1.GetOptions{})

Full logs can be found in the following gist: https://gist.github.com/jbw976/50b5446751a9529da1cbdf8aceb05796
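For illustration only (not the actual agent code), here is a minimal sketch of how the lookup could tolerate a PV that has already been deleted by falling back to information recorded at attach time. The getImageForUnmount and lookupCachedImage names are hypothetical, the "image" flexvolume option key is an assumption, and the two-argument client-go Get signature from this Kubernetes era is used:

package agent

import (
    "fmt"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// lookupCachedImage is a hypothetical fallback that would return the rbd image
// name recorded by the agent at attach time (e.g. in a volume attachment CRD).
func lookupCachedImage(volumeName string) (string, error) {
    return "", fmt.Errorf("no cached attachment info for %s", volumeName)
}

// getImageForUnmount resolves the rbd image backing a volume. If the PV object
// has already been deleted from the API (the race described above), it falls
// back to cached info instead of failing the unmount.
func getImageForUnmount(clientset kubernetes.Interface, volumeName string) (string, error) {
    pv, err := clientset.CoreV1().PersistentVolumes().Get(volumeName, metav1.GetOptions{})
    if err != nil {
        if apierrors.IsNotFound(err) {
            // PV is gone; try attach-time state rather than erroring out.
            return lookupCachedImage(volumeName)
        }
        return "", fmt.Errorf("failed to get persistent volume %s: %+v", volumeName, err)
    }
    if pv.Spec.FlexVolume == nil {
        return "", fmt.Errorf("persistent volume %s has no flexvolume source", volumeName)
    }
    // Using "image" as the option key is an assumption about the flexvolume options.
    return pv.Spec.FlexVolume.Options["image"], nil
}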

What you expected to happen: The rbd device to be unmapped and the pod to be terminated.

How to reproduce it (minimal and precise):
Create a rook cluster and storage class, then run the following using postgres.yaml from the gist above:

kubectl create -f postgres.yaml
# wait for pod to be running
kubectl delete -f postgres.yaml

Environment:

  • OS (e.g. from /etc/os-release): Container Linux by CoreOS 1492.6.0 (Ladybug)
  • Kernel (e.g. uname -a): Linux core-02 4.12.10-coreos #1 SMP Tue Sep 5 20:43:55 UTC 2017 x86_64 Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz GenuineIntel GNU/Linux
  • Cloud provider or hardware configuration: CoreOS VM's using Parallels for Mac from https://github.com/quantum/coreos-vagrant
  • Rook version (use rook version inside of a Rook Pod): v0.6.0-150.g2b5acad.dirty
  • Kubernetes version (use kubectl version): v1.7.11
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): kubeadm on CoreOS
  • Ceph status (use ceph health in the Rook toolbox): HEALTH_OK

kokhang commented Jan 17, 2018

This might be a bigger issue than we think. There is ongoing work to handle this better in Kubernetes with finalizers: kubernetes/enhancements#498.

More details here: kubernetes/kubernetes#45143
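As a rough sketch of that finalizer idea (assuming the same two-argument client-go API shown earlier in this issue; the finalizer name and protectPV helper are made up for illustration and are not how Kubernetes or Rook implements it):

package agent

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// agentFinalizer is an illustrative name, not a finalizer Rook actually registers.
const agentFinalizer = "rook.io/agent-unmount-protection"

// protectPV adds a finalizer so the PV object stays visible in the API after a
// delete is requested, until the agent unmaps the device and removes it again.
func protectPV(clientset kubernetes.Interface, pvName string) error {
    pv, err := clientset.CoreV1().PersistentVolumes().Get(pvName, metav1.GetOptions{})
    if err != nil {
        return fmt.Errorf("failed to get persistent volume %s: %+v", pvName, err)
    }
    for _, f := range pv.ObjectMeta.Finalizers {
        if f == agentFinalizer {
            return nil // already protected
        }
    }
    pv.ObjectMeta.Finalizers = append(pv.ObjectMeta.Finalizers, agentFinalizer)
    _, err = clientset.CoreV1().PersistentVolumes().Update(pv)
    return err
}

Kubernetes later shipped this kind of protection generically via the StorageObjectInUseProtection feature, which manages the kubernetes.io/pv-protection and kubernetes.io/pvc-protection finalizers.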


stale bot commented Aug 6, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label Aug 6, 2018

stale bot commented Aug 14, 2018

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

stale bot closed this as completed Aug 14, 2018

calder commented Sep 28, 2018

Still seeing this with Rook v0.8.2 on Kubernetes v1.10.3:

E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/bea037c4-c0f5-11e8-9964-005056b57c89/volumes/ceph.rook.io~rook-ceph-system/pvc-be7ff46d-c0f5-11e8-9964-005056b57c89 failed: failed to get persistent volume pvc-be7ff46d-c0f5-11e8-9964-005056b57c89: persistentvolumes "pvc-be7ff46d-c0f5-11e8-9964-005056b57c89" not found
