Trident CSI Node plugin is unregistered after Kubernetes version was updated #487

Closed
tksm opened this issue Nov 25, 2020 · 4 comments

tksm commented Nov 25, 2020

Describe the bug

The Trident CSI Node plugin (csi.trident.netapp.io) on one node became unregistered after the Kubernetes version was updated from v1.18.9 to v1.19.4. Pods on this node can no longer mount or unmount Trident volumes.

Error messages

We see the following messages in the kubelet log.

csi.trident.netapp.io was unregistered since the registration socket (/var/lib/kubelet/plugins_registry/csi.trident.netapp.io-reg.sock) had been removed.

I1119 05:47:54.246972 6550 plugin_watcher.go:212] Removing socket path /var/lib/kubelet/plugins_registry/csi.trident.netapp.io-reg.sock from desired state cache
I1119 05:47:53.162305 6550 reconciler.go:139] operationExecutor.UnregisterPlugin started for plugin at "/var/lib/kubelet/plugins_registry/csi.trident.netapp.io-reg.sock" (plugin details: &{/var/lib/kubelet/plugins_registry/csi.trident.netapp.io-reg.sock 2020-11-04 05:08:19.553684094 +0000 UTC m=+38.893901704 0x704c200 csi.trident.netapp.io})
I1119 05:47:53.163390 6550 csi_plugin.go:177] kubernetes.io/csi: registrationHandler.DeRegisterPlugin request for plugin csi.trident.netapp.io

The pod could not unmount the volume because csi.trident.netapp.io was not found.

E1119 09:02:52.819122 6550 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/csi.trident.netapp.io^pvc-75a6fd7f-7aee-45e8-a5fa-d4500272528e podName:ad18a7d1-4090-4e0c-9e71-cba46dfc3657 nodeName:}" failed. No retries permitted until 2020-11-19 09:04:54.819071328 +0000 UTC m=+1310234.159288938 (durationBeforeRetry 2m2s). Error: "UnmountVolume.TearDown failed for volume "data" (UniqueName: "kubernetes.io/csi/csi.trident.netapp.io^pvc-75a6fd7f-7aee-45e8-a5fa-d4500272528e") pod "ad18a7d1-4090-4e0c-9e71-cba46dfc3657" (UID: "ad18a7d1-4090-4e0c-9e71-cba46dfc3657") : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi.trident.netapp.io not found in the list of registered CSI drivers"

Two trident-csi pods were running simultaneously

We found that two trident-csi (Node Plugin) pods on this node were running simultaneously for a very short time, and that the old driver-registrar had stopped after a new one had started.

driver-registrar removes the registration socket (/var/lib/kubelet/plugins_registry/csi.trident.netapp.io-reg.sock) when it receives SIGTERM (node_register.go#L113-L116). Removing the socket causes the kubelet to unregister the Trident plugin. I believe this is the cause of the problem.
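For illustration, a minimal sketch of that shutdown path (this is not the exact upstream node-driver-registrar code; the socket path is taken from the kubelet log above):

package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Registration socket watched by the kubelet's plugin watcher.
	regSocket := "/var/lib/kubelet/plugins_registry/csi.trident.netapp.io-reg.sock"

	sigc := make(chan os.Signal, 1)
	signal.Notify(sigc, syscall.SIGTERM)

	// ... serve the node-driver-registrar registration socket here ...

	// The old pod receives SIGTERM while the new pod is already running.
	<-sigc
	if err := os.Remove(regSocket); err != nil && !os.IsNotExist(err) {
		log.Printf("failed to remove registration socket: %v", err)
	}
	// Both pods use the same path, so this also removes the socket the new
	// registrar just created, and the kubelet unregisters the plugin.
}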

(screenshot attached)

DaemonSet was recreated after updating

Trident-csi (Node Plugin) pods are managed by a DaemonSet, so normally only one pod runs on each node. But after Kubernetes was updated, the trident-csi DaemonSet was recreated by trident-operator. Deleting the DaemonSet allows two pods (old and new) to run simultaneously.

We confirmed this on the trident-operator log.

Here, the trident-csi DaemonSet was deleted.

time="2020-11-19T05:47:45Z" level=debug msg="Deleted Kubernetes DaemonSet." DaemonSet=trident-csi namespace=trident

The trident-csi DaemonSet was then recreated soon after.

time="2020-11-19T05:47:45Z" level=debug msg="Creating object." kind=DaemonSet name=trident-csi namespace=trident

After Kubernetes was updated, the shouldUpdate flag was set to true (controller.go#L1110). It seems that the shouldUpdate flag causes the trident-csi DaemonSet to be deleted (installer.go#L1489-L1494).
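For illustration, a hedged sketch of the delete-then-create pattern described here, written against client-go; the recreateDaemonSet helper is hypothetical and not Trident's actual installer code:

package installer // illustrative only

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// recreateDaemonSet shows why an immediate delete-then-create leaves an
// overlap window: Delete returns as soon as the API server accepts the
// request, while the old pods are still in their graceful-termination period.
func recreateDaemonSet(ctx context.Context, client kubernetes.Interface, ds *appsv1.DaemonSet) error {
	err := client.AppsV1().DaemonSets(ds.Namespace).Delete(ctx, ds.Name, metav1.DeleteOptions{})
	if err != nil && !apierrors.IsNotFound(err) {
		return err
	}

	// The old trident-csi pod may still be running here; its driver-registrar
	// will get SIGTERM after the new pod has already registered the same socket.
	_, err = client.AppsV1().DaemonSets(ds.Namespace).Create(ctx, ds, metav1.CreateOptions{})
	return err
}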

Environment

  • Trident version: 20.10.0 with trident-operator
  • Trident installation flags used: silenceAutosupport: true (Trident Operator)
  • Container runtime: Docker 19.03.13
  • Kubernetes version: v1.19.4
  • Kubernetes orchestrator: Kubernetes
  • Kubernetes enabled feature gates:
  • OS: Ubuntu 18.04
  • NetApp backend types: ONTAP AFF 9.1P14
  • Other:

To Reproduce

Updating the Kubernetes version may reproduce this problem. Since a Kubernetes update takes a long time and does not always trigger the issue, we instead confirmed the two behaviors that cause it through the separate demonstrations below.

Two trident-csi pods cause the kubelet to unregister the Trident plugin

  1. Confirm that the Trident CSI driver is registered on the node.
$ kubectl describe csinodes.storage.k8s.io <NODE_NAME>
...
Spec:
  Drivers:
    csi.trident.netapp.io:
      Node ID:        <NODE_NAME>
      Topology Keys:  [topology.kubernetes.io/zone]
  2. Copy the trident-csi DaemonSet to run two trident-csi pods on each node.
$ kubectl get ds -n trident trident-csi -o json | jq '.metadata.name|="trident-csi-2"' | kubectl apply -f -
  3. Wait for both to be running, then delete the copied trident-csi-2 DaemonSet.
$ kubectl delete ds -n trident trident-csi-2
  4. Confirm that the Trident CSI driver has disappeared from the Drivers section on the node. (This will take some time.)
$ kubectl describe csinodes.storage.k8s.io <NODE_NAME>
Spec:

Recreating the DaemonSet allows two pods (old and new) to run simultaneously

  1. Delete the trident-csi DaemonSet. It will be recreated soon after by the trident-operator.
$ kubectl delete ds -n trident trident-csi
  2. You will see two trident-csi pods on each node.
$ kubectl get pods -n trident -o wide

Expected behavior
Pods can mount and unmount Trident volumes after the Kubernetes version is updated.

Additional context
None

@tksm tksm added the bug label Nov 25, 2020
@gnarl gnarl added the tracked label Nov 30, 2020
rohit-arora-dev (Contributor) commented

Hello @tksm

Thank you for providing details of this issue and looking closely at the underlying cause; your analysis is very helpful. The window between the old DaemonSet pod's termination and the new pod's creation is critical, and the latter should only occur once the former has completed. Therefore, the operator should ensure that all pods belonging to the previous DaemonSet are deleted before it creates a new DaemonSet.
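A minimal sketch of that ordering, assuming client-go; the waitForOldPodsGone helper, the label selector argument, and the timeouts are illustrative and not the operator's actual code:

package installer // illustrative only

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForOldPodsGone blocks until no pods matching the previous DaemonSet's
// label selector remain, so the new DaemonSet is only created once the old
// driver-registrar pods have finished terminating (and removed their socket).
func waitForOldPodsGone(ctx context.Context, client kubernetes.Interface, namespace, selector string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == 0, nil
	})
}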

Out of curiosity, do you mind me asking how many clusters have run into this issue during an upgrade?

Thank you!


tksm commented Nov 30, 2020

Hi, @ntap-arorar

Thank you for confirming. I think your idea will fix this issue.

We have run into this issue on just one cluster so far, since we had upgraded only a few clusters as a test.


gnarl commented Dec 1, 2020

We will include this fix in the Trident v21.01 release.


gnarl commented Jan 30, 2021

This issue was fixed with commit 820579d.
