Vector making excessive API requests to Kubernetes API server on 429/500 HTTP responses #16798
Comments
@jeremy-mi-rh Hello! Just to share: to prevent Kubernetes API overloading on restarts, we added a FlowSchema with a PriorityLevelConfiguration only for Vector pods, and it works like a charm! It helped us a lot with overloading. The code snippet can be found here: … The other option was using the …
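A minimal sketch of the approach described above, for anyone landing here. This is not the commenter's actual snippet: the object names, namespace, ServiceAccount, concurrency share, and the flowcontrol API version are all illustrative assumptions.

```yaml
# Illustrative only: give Vector's list/watch traffic its own limited APF priority level.
# Names, namespace, ServiceAccount, and tuning values are assumptions; the apiVersion and
# some field names (e.g. nominalConcurrencyShares vs. assuredConcurrencyShares) depend on
# your Kubernetes release.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: PriorityLevelConfiguration
metadata:
  name: vector
spec:
  type: Limited
  limited:
    nominalConcurrencyShares: 5      # small share so Vector cannot crowd out other clients
    limitResponse:
      type: Queue
      queuing:
        queues: 16
        handSize: 4
        queueLengthLimit: 50
---
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: FlowSchema
metadata:
  name: vector
spec:
  priorityLevelConfiguration:
    name: vector                     # route matching requests to the level defined above
  matchingPrecedence: 1000
  distinguisherMethod:
    type: ByUser
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: vector             # assumed ServiceAccount of the Vector DaemonSet
            namespace: vector
      resourceRules:
        - verbs: ["get", "list", "watch"]
          apiGroups: [""]
          resources: ["pods", "namespaces", "nodes"]
          namespaces: ["*"]
          clusterScope: true
```

With a schema along these lines, requests from the Vector ServiceAccount that exceed the small concurrency share are queued or rejected with 429s at the apiserver rather than competing with higher-priority traffic during a control-plane incident.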
Thanks for sharing, @nabokihms! Internally we are looking to implement/enforce APF in our clusters as well. As of now we don't have any, so it will take some time for us to get there. Great to hear that flow control helps with this use case; that definitely motivates us to adopt it. Other than enforcing APF, would it be possible to allow users to configure the backoff strategy from Vector's configuration?
In Kubernetes client-go the throttling feature is embedded in the client itself, but I think there is no such thing in kube-rs at the moment. In my humble opinion, users should not need to worry about the backoff policy: it should work out of the box, and if it does not, it should be fixed on the Vector side, not through configuration.
Added default backoff to initial list request for all watchers #17009. It turned out that, in case of an error, Vector immediately retried the list request. The problem with …
Wow, thanks for the quick fix! This will be greatly helpful to all Vector users of the k8s sources! Once the release is out, we will test it in our cluster and share the results with the community!
👋 Hi all, just wanted to double-check before closing this manually: does #17009 fully satisfy the issue described here?
@neuronull I'd like to keep this open for now, until the new Vector version is released and all interested parties have had the chance to test it (if you don't mind).
Ah, I missed that bit but see it now. Sounds good 👍 and thanks for your work on this!
Hi all, we're having a similar issue taking out a production cluster (>1000 nodes). I'm curious which version of Vector this change is intended to be released in.
@skygrammas this looks to have been released in …
Closing since #17009 has been released for a few versions now. Thanks again @nabokihms!
Problem
This splits issue #16753 out into its own issue.
Context
We use Vector to deliver `kubernetes_logs` to our Kafka cluster, from where the logs are later processed and ingested into Humio. Vector is deployed as a DaemonSet in our Kubernetes clusters (each with >1000 nodes running).

We recently had an outage in one of our Kubernetes clusters (with ~1100 nodes running). A failure in the etcd leader node resulted in a cascading failure where pods made 1000x the usual number of API calls to our API server, which eventually brought the Kubernetes control plane down entirely.
In the process of remediation, we identified Vector as one of the candidates hammering the API server. Shutting down Vector along with a few other DaemonSets eventually reduced the traffic on control-plane components, which allowed the etcd nodes to recover.
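For illustration, a rough sketch of this kind of `kubernetes_logs` → `kafka` deployment; the component names, broker addresses, topic, and sink options below are placeholders, not the actual configuration:

```yaml
# Rough sketch of a kubernetes_logs -> kafka Vector DaemonSet configuration
# (component names, broker addresses, and topic are placeholders).
sources:
  k8s_logs:
    type: kubernetes_logs            # watches the Kubernetes API for pod metadata on each node

sinks:
  kafka_out:
    type: kafka
    inputs: ["k8s_logs"]
    bootstrap_servers: "kafka-0.example:9092,kafka-1.example:9092"
    topic: "k8s-logs"
    encoding:
      codec: json
```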
Issue: Need a more aggressive backoff strategy?
The issue we found is that Vector was making a lot more requests when it received unsuccessful responses from the Kube API server. Making more requests is expected, as it needs to retry; however, we are seeing 1000x more requests in some cases.
Before 17:45, the traffic was pretty steady, at roughly 1-300 requests per minute. When the etcd server started to have issues, Vector started making very aggressive requests, as many as 200,000 requests per minute. Is there a way we can configure the backoff strategy in this case? Or should it retry less aggressively by default?
Also attached is the graph filtered on the 429 response code:
Configuration
No response
Version
vector 0.27.0 (x86_64-unknown-linux-gnu 5623d1e 2023-01-18)
References
#7943
#16753