/kind bug

What happened?

For some weeks now we have had sporadic failures of the mount command at pod startup:
MountVolume.SetUp failed for volume "my-volume" : rpc error: code = Internal desc = Could not mount "fs-**********:/" at "/var/lib/kubelet/pods/b9a74dfa-a97c-4e2f-8f99-454be86a52b1/volumes/kubernetes.io~csi/my-volume/mount": mount failed: exit status 255
Mounting command: mount
Mounting arguments: -t efs -o accesspoint=fsap-*******,tls fs-0993325f4ff5304d8:/ /var/lib/kubelet/pods/b9a74dfa-a97c-4e2f-8f99-454be86a52b1/volumes/kubernetes.io~csi/my-volume/mount
Output:
The pod is then stuck in ContainerCreating status. Usually killing the pod solves the problem. I cannot yet say exactly which version introduced the problem, but we are currently on v3.0.8.
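In the meantime we clear affected pods by hand, roughly like this (illustrative kubectl commands; pod and namespace names are placeholders):

# Pods stuck in ContainerCreating still report phase Pending
kubectl get pods -A --field-selector=status.phase=Pending
# The pod events show the MountVolume.SetUp failure above
kubectl describe pod <pod-name> -n <namespace>
# Recreating the pod usually clears it
kubectl delete pod <pod-name> -n <namespace>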
It might be linked to the fact that, for some reason, some efs-csi-node pods seem to use more memory than others and are hitting the memory limit (and restarting) a lot.
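To see which daemonset members are affected, we check restarts and memory along these lines (assuming the node pods carry the default app=efs-csi-node label, and that node.resources is the Helm chart value controlling the limit; treat both as assumptions):

# Restart counts of the node daemonset pods
kubectl -n kube-system get pods -l app=efs-csi-node
# Current memory usage (needs metrics-server)
kubectl -n kube-system top pods -l app=efs-csi-node
# If describe shows OOMKilled, raising the limit via the chart might help, e.g.:
helm upgrade aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver -n kube-system \
  --reuse-values --set node.resources.limits.memory=512Mi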
I don't see anything specific from the efs-csi-node pod itself except this one:
"Failed to establish connection to CSI driver" err="context deadline exceeded"
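That "context deadline exceeded" suggests kubelet could not reach the driver's CSI socket in time. On an affected node this can be narrowed down with something like the following (the socket path is the driver's usual registration directory, which I'm assuming here):

# kubelet-side view of CSI registration and mount attempts
journalctl -u kubelet | grep -i -e csi -e efs
# Verify the node plugin's socket directory exists
ls -l /var/lib/kubelet/plugins/efs.csi.aws.com/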
How to reproduce it (as minimally and precisely as possible)?
I don't know exactly how to reproduce the error, as it only appears sporadically. I'm still investigating that point.
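The closest I can offer is retrying the mount by hand on an affected node with the same arguments the driver used, along these lines (IDs masked as above; this assumes amazon-efs-utils is available, since mount -t efs goes through its mount.efs helper):

# Manual mount attempt mirroring the failing command
sudo mkdir -p /mnt/efs-test
sudo mount -t efs -o accesspoint=fsap-*******,tls fs-0993325f4ff5304d8:/ /mnt/efs-test
# efs-utils logs its side of the attempt here
sudo tail -n 100 /var/log/amazon/efs/mount.log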
Anything else we need to know?:
Environment
Kubernetes version (use kubectl version): v1.30
Driver version: v3.0.8