Reduce scope of default tolerations #363
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle stale
Still relevant: https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/charts/aws-efs-csi-driver/values.yaml#L105
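For context, the toleration being objected to in `values.yaml` was a blanket match. A minimal sketch of that pattern (not the exact file contents, which may differ) looks like this:

```yaml
# A toleration with operator: Exists and no key matches EVERY taint,
# including node.kubernetes.io/not-ready and
# node.kubernetes.io/unreachable (both NoExecute), so the pod is
# never evicted from a failing or draining node.
tolerations:
  - operator: Exists
```

Because no `key`, `effect`, or `tolerationSeconds` is set, this disables taint-based eviction entirely for the pod, which is what the comments below describe.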
This is a major issue that prevents the EFS CSI pods from being evicted when a node is scaled down by the Kubernetes cluster-autoscaler. Allowing the pod to tolerate all taints seems to be in direct opposition to the way NotReady and NoExecute taints are designed in Kubernetes, and it affects the operation of other Kubernetes system pods.
It looks like the default toleration was removed from controller-deployment.yaml as part of release v1.3.1, so this issue can likely be closed: 494d75e#diff-5d40e4554aa98fe9c294f9ad03acc2e515a612f31678f3c2282d6c8f19415b51
Yes, it should be fixed by the latest driver + Helm chart.
/kind bug
What happened? The default toleration uses `operator: Exists`, which is too broad; we should tone it down in the next Helm chart release: kubernetes-sigs/aws-ebs-csi-driver#758 (comment)
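One way to tone the default down, sketched here as an assumption rather than the actual chart change, is to tolerate only specific, well-known taint keys instead of everything:

```yaml
# Hypothetical narrower default: scope tolerations to the taints the
# driver pods actually need, so NoExecute taints such as
# node.kubernetes.io/not-ready still evict the pod normally.
tolerations:
  - key: CriticalAddonsOnly
    operator: Exists
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
```

Keeping tolerations keyed means cluster-autoscaler scale-down and kubelet NotReady eviction behave as designed, while still letting the pods run on tainted control-plane or critical-addon nodes if operators want that.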
What you expected to happen?
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?:
Environment
Kubernetes version (use `kubectl version`):