rook-agent fails to unmount PV because the PV is not found #1415
This might be a bigger issue than we think. There is on-going work to handle this better in Kubernetes with finalizers: kubernetes/enhancements#498. More details here: kubernetes/kubernetes#45143
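For context, the storage protection work referenced above has Kubernetes attach a `kubernetes.io/pv-protection` finalizer to PV objects, so the API server defers actual deletion until the volume is no longer in use. A protected PV looks roughly like this (the name below is this issue's PV, used only for illustration):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
  finalizers:
  # deletion is deferred until the finalizer is removed,
  # i.e. until the volume is no longer bound/in use
  - kubernetes.io/pv-protection
```

With this in place, a `kubectl delete` on the PV only sets a deletion timestamp; the object stays visible to the agent until the finalizer is cleared, which would avoid the lookup failure described below.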
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
Still seeing this with Rook v0.8.2 on Kubernetes v1.10.3.
Is this a bug report or feature request?
Bug Report
What happened:
When deleting a postgres instance that uses a Rook block PVC with `kubectl delete -f postgres.yaml`, the postgres pod gets stuck in the Terminating state.

The operator fails to delete the rbd block image because `image has watchers - not removing`.

The agent fails to unmount/unmap the rbd device because `failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found`. The agent needs to look up information about the device/image using the PV, but the PV object has already been deleted from the Kubernetes API. There may be a race where the PV is deleted before the agent is able to look up information about it:
rook/pkg/daemon/agent/flexvolume/controller.go
Line 271 in 70f21fd
Full logs can be found in the following gist: https://gist.github.com/jbw976/50b5446751a9529da1cbdf8aceb05796
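One way the race above could be mitigated is for the agent to fall back to attachment information cached at mount time when the PV lookup comes back NotFound, instead of failing the unmount. This is a minimal, self-contained sketch of that idea; all names (`pvLookup`, `lookupWithRetry`, the sentinel error) are hypothetical and not Rook's actual code, which in reality would use the Kubernetes client and its typed NotFound errors.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errNotFound stands in for the Kubernetes API "not found" error the
// agent sees once the PV object has already been deleted.
var errNotFound = errors.New(`persistentvolumes "pvc-0a060ee8" not found`)

// pvLookup is a hypothetical lookup that returns the rbd image name
// recorded on the PV; in the agent this would be a Kubernetes API call.
type pvLookup func(pvName string) (image string, err error)

// lookupWithRetry retries transient failures a few times, but on
// NotFound it gives up immediately and falls back to attachment info
// cached at mount time, so the rbd device can still be unmapped.
func lookupWithRetry(get pvLookup, pvName, cachedImage string) (string, error) {
	for i := 0; i < 3; i++ {
		image, err := get(pvName)
		if err == nil {
			return image, nil
		}
		if errors.Is(err, errNotFound) {
			// PV already deleted: use the cached image name rather
			// than failing the unmount and wedging the pod.
			if cachedImage != "" {
				return cachedImage, nil
			}
			return "", fmt.Errorf("PV %s is gone and no cached info: %w", pvName, err)
		}
		time.Sleep(10 * time.Millisecond) // transient error: back off and retry
	}
	return "", fmt.Errorf("lookup of %s kept failing", pvName)
}

func main() {
	// Simulate the race: the PV was deleted before the agent's lookup.
	get := func(string) (string, error) { return "", errNotFound }
	image, err := lookupWithRetry(get, "pvc-0a060ee8", "replicapool/pvc-0a060ee8")
	fmt.Println(image, err)
}
```

Caching the image/pool on attach (or reading it back from the local mount metadata) is the key design choice here: it removes the unmount path's hard dependency on the PV object still existing.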
What you expected to happen: The rbd device to be unmapped and the pod to be terminated.
How to reproduce it (minimal and precise):
Create a rook cluster and storage class, then run the following using postgres.yaml from the gist above:
Environment:
- OS / Kernel (`uname -a`): Linux core-02 4.12.10-coreos #1 SMP Tue Sep 5 20:43:55 UTC 2017 x86_64 Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz GenuineIntel GNU/Linux
- Rook version (`rook version` inside of a Rook Pod): v0.6.0-150.g2b5acad.dirty
- Kubernetes version (`kubectl version`): v1.7.11
- Ceph status (`ceph health` in the Rook toolbox): HEALTH_OK