Enabling the memory buffer causes high memory usage #8998
I suspect that Vector's input throughput is greater than its output, resulting in an OOM. After I scaled from 3 nodes to 9 nodes, the problem no longer appears.
Hi @wgb1990 ! Apologies for the long delay in response. A few things jumped out from your configuration:
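The config snippet this reply quotes is not preserved in this capture. As a rough sketch, assuming a TOML config and a Loki sink named `loki` (both assumptions), a batch section that yields batches of up to 7000 events or ~30 MB might look like:

```toml
# Hypothetical reconstruction; sink name is assumed, keys follow Vector's sink batch options.
[sinks.loki.batch]
max_events = 7000       # flush a batch once it reaches 7000 events...
max_bytes  = 30000000   # ...or once it reaches ~30 MB, whichever comes first
```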
This will cause Vector to create batches of up to 7000 events or ~30 MB. The number of concurrent batches will be related to the number of partitions. For the
For:
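The buffer snippet being referred to is likewise missing here. A memory buffer section consistent with the ~10 GB estimate below might look like the following; the `max_events` value is back-calculated from that estimate (10,000,000 events × ~1 KB ≈ 10 GB), not taken from the original config:

```toml
# Hypothetical reconstruction of the memory buffer section.
[sinks.loki.buffer]
type       = "memory"
max_events = 10000000   # ~10,000,000 events * ~1 KB/event ≈ 10 GB worst case
```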
Depending on your average event size, this could end up allocating a large amount as well. For example, if we assume your average event is 1 KB, the buffer could grow to ~10 GB. Does this additional context help? Your graphs only show percentages, so I can't tell what Vector's RSS is in absolute terms.
Closing this due to lack of response to the last comment. Feel free to re-open though.
Vector Version
Vector Configuration File
Debug Output
Expected Behavior
Memory consumption stays within the normal range, and events are sent to Loki storage normally.
Actual Behavior
Events accumulate in the buffer and eventually cause the pod to run out of memory and restart.
Example Data
Vector plays the role of an aggregator; three nodes process about 6,000 events per second.
Additional Info
Memory begins to grow at some point. It appears that the memory buffer accumulates a large number of events, which eventually causes sending to Loki to stop.
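Following the sizing argument above (buffer memory ≈ `max_events` × average event size), one illustrative way to bound the pod's memory would be a much smaller buffer cap; the value below is an example for that arithmetic, not a recommendation from the issue:

```toml
# Illustrative only: with ~1 KB events, 500,000 buffered events caps the
# buffer at roughly 500 MB instead of ~10 GB.
[sinks.loki.buffer]
type       = "memory"
max_events = 500000
```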
References