Scaling issue: kube apiserver throttles external-provisioner when 100 PVCs created at same time #68
Comments
@cduchesne Please try the same test using this PR for the external provisioner:
@sbezverk, I have hit the same issue. After getting throttled, it takes over 30 minutes to create a volume. I will try your PR.
The problem is still not resolved. If I understand correctly, the root cause is that every time a PVC is created we go through the same provisioning logic again. Once the external-provisioner gets throttled, its watch operations fall behind as well. Worse, the timeout-and-retry policy compounds the problem and makes things even worse.
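For illustration only (not part of the original comment): one common way to keep retries from piling onto an already-throttled API server is a rate-limited workqueue with per-item exponential backoff. A minimal sketch using client-go's `workqueue` package follows; the delays and limits are purely illustrative.

```go
// Sketch (not from the thread): a retry queue with per-item exponential backoff
// plus an overall rate cap, so failed provisioning attempts back off instead of
// hammering an already-throttled API server.
package main

import (
	"fmt"
	"time"

	"golang.org/x/time/rate"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	limiter := workqueue.NewMaxOfRateLimiter(
		// Per-item exponential backoff: 1s, 2s, 4s, ... capped at 5 minutes.
		workqueue.NewItemExponentialFailureRateLimiter(1*time.Second, 5*time.Minute),
		// Overall cap: at most 10 retries/second with a burst of 100.
		&workqueue.BucketRateLimiter{Limiter: rate.NewLimiter(rate.Limit(10), 100)},
	)
	queue := workqueue.NewRateLimitingQueue(limiter)

	// On a provisioning failure, re-queue the claim with backoff instead of retrying immediately.
	queue.AddRateLimited("default/my-pvc")
	// On success, reset the per-item backoff.
	queue.Forget("default/my-pvc")
	fmt.Println("queue length:", queue.Len())
}
```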
Thanks for surfacing this @orainxiong. The problem sounds like the external-provisioner (and probably the external-attacher and driver-registrar) opens connections to the kube API server without thinking twice. If you provision a lot of volumes at once, the kube API server gets mad and throttles everything. We should do the following:
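The kind of client-side rate limiting being suggested here can be sketched roughly as follows (not from the original thread; it assumes client-go's `rest.Config`, and the QPS/Burst numbers are illustrative, not recommendations).

```go
// Sketch (not from the thread): cap the sidecar's own request rate via
// client-go's built-in client-side rate limiter instead of letting the
// API server throttle it.
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func newRateLimitedClient() (*kubernetes.Clientset, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	cfg.QPS = 20   // steady-state requests per second to the API server (illustrative)
	cfg.Burst = 40 // short bursts allowed above QPS (illustrative)
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newRateLimitedClient(); err != nil {
		log.Fatal(err)
	}
}
```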
@saad-ali Many thanks for your reply.
If possible, I would also like to ask for your views @saad-ali on the granularity of the lock.
Spoke with Jan. He agrees with @orainxiong: the major source of issues is probably the leader election logic -- each provisioner tries to write an annotation to become leader, and if not successful it retries after some time. We should focus on making leader election scalable.
A temporary workaround may be to disable leader election per PV and instead use leader election per provisioner.
Yes, n PVCs * m provisioners is unacceptably bad. The original code is ancient; I will copy how HA is done in kube. It is only like this for the rare case where someone wants multiple provisioner instances (pointing to different storage sources with the same properties) to serve the same storage class. But that isn't a common use case, and this is a lazy and inefficient way of achieving it anyway. An alternative was sharding, but I don't think it's possible.
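For illustration (not from the thread): a minimal sketch of per-provisioner leader election using client-go's `leaderelection` package with a Lease-based lock, roughly the pattern used by kube control-plane components. The lock name, namespace, and timings are assumptions.

```go
// Sketch (not from the thread): one leader election per provisioner process,
// instead of one lock per PVC. Only the elected instance runs the provisioning
// loop; the others wait to take over.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		// Illustrative lock name and namespace.
		LeaseMeta:  metav1.ObjectMeta{Name: "example-csi-provisioner", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Only the elected instance would run the provisioning controller here.
				log.Println("became leader; starting provisioning controller")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership; exiting")
			},
		},
	})
}
```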
@saad-ali Thanks for your reply. As we discussed above, this issue is mainly due to the leadership election and the heavy API traffic it generates.
But that alone is not enough. There is a PR that shows the details of the changes involved.
These changes significantly reduced the time needed to provision volumes.
Where are we with this? Is @orainxiong's PR the workaround we want to pursue?
@orainxiong can you please create a proper PR so we can comment on it? Thanks.
@vladimirvivien Here is PR #104. If there are any questions, please let me know. Many thanks.
@orainxiong nice summary and illustration of the issue! The leader lock change looks to be a decent fix (at least as an interim) for now.
Added https://github.com/kubernetes-incubator/external-storage/releases/tag/v5.0.0. We just need to bump this repo's deps and edit https://github.com/kubernetes-csi/external-provisioner/blob/master/deploy/kubernetes/statefulset.yaml#L31 slightly to look like https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/deploy/rbac.yaml#L19.
We removed per-PVC leader election a while ago, and I think this issue is fixed in 1.0. I created 100 PVCs with the host path CSI driver and all PVs were created within ~50 seconds (using a single VM with 2 CPUs and local-up-cluster.sh), without any API throttling. 1000 PVCs were provisioned in ~8.5 minutes.
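For reference (not part of the original comment): a minimal sketch of the kind of scale test described above, assuming client-go, the `default` namespace, and an illustrative storage class name `csi-hostpath-sc`.

```go
// Sketch (not from the thread): create 100 PVCs at once, roughly reproducing
// the scale test described above. Namespace and storage class name are
// illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	sc := "csi-hostpath-sc"
	for i := 0; i < 100; i++ {
		pvc := &corev1.PersistentVolumeClaim{
			ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("scale-test-%03d", i)},
			Spec: corev1.PersistentVolumeClaimSpec{
				AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
				StorageClassName: &sc,
				// Note: in newer k8s.io/api versions (>= v1.29) this field's type
				// changed to VolumeResourceRequirements.
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
				},
			},
		}
		if _, err := client.CoreV1().PersistentVolumeClaims("default").Create(
			context.TODO(), pvc, metav1.CreateOptions{}); err != nil {
			log.Fatalf("creating %s: %v", pvc.Name, err)
		}
	}
	fmt.Println("created 100 PVCs")
}
```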
Reported by @cduchesne
The CSI external-provisioner has scaling issues.
When 100 PVCs are created at the same time, the CSI external-provisioner hammers the kube apiserver with requests and gets throttled, causing all sorts of issues.