Splunk Hec Receiver - Memory Leak (Cont of #34886) #35294
Labels: bug (Something isn't working), needs triage (New item requiring triage)

Comments

brettplarson added the bug and needs triage labels on Sep 19, 2024
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Just bumping this. Thank you!
Looking into it now.
bogdandrutu pushed a commit that referenced this issue on Nov 3, 2024:

Description: Fix memory leak by changing how we run obsreports for metrics and logs.
Link to tracking issue: Fixes #35294
sbylica-splunk pushed a commit to sbylica-splunk/opentelemetry-collector-contrib that referenced this issue on Dec 17, 2024:

Description: Fix memory leak by changing how we run obsreports for metrics and logs.
Link to tracking issue: Fixes open-telemetry#35294
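For context, here is a minimal sketch of the obsreport start/end pairing that the fix description above refers to, using the collector's `receiverhelper` API. The `hecReceiver` type and `handleLogs` function are illustrative assumptions, not the actual receiver code; the general idea is that every `StartLogsOp` must be matched by exactly one `EndLogsOp`, otherwise the telemetry started for the operation can accumulate and show up as growing memory.

```go
package splunkhecsketch

import (
	"context"

	"go.opentelemetry.io/collector/consumer"
	"go.opentelemetry.io/collector/pdata/plog"
	"go.opentelemetry.io/collector/receiver/receiverhelper"
)

// hecReceiver is an illustrative stand-in for the real receiver type.
type hecReceiver struct {
	obsrecv  *receiverhelper.ObsReport
	nextLogs consumer.Logs
}

// handleLogs shows the pairing that keeps obsreport bookkeeping bounded:
// every StartLogsOp is matched by exactly one EndLogsOp, even when the
// downstream consumer returns an error.
func (r *hecReceiver) handleLogs(ctx context.Context, ld plog.Logs) error {
	ctx = r.obsrecv.StartLogsOp(ctx)
	err := r.nextLogs.ConsumeLogs(ctx, ld)
	r.obsrecv.EndLogsOp(ctx, "splunk_hec", ld.LogRecordCount(), err)
	return err
}
```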
Component(s)
receiver/splunkhec
What happened?
Description
We are still seeing an issue with collector memory after upgrading to 0.109.0 with the fix. The behavior has changed: we now see more memory in the stack than in the heap, although the heap still grows slowly over time. As before, removing the HEC receiver from the logs pipeline gets rid of the issue. This is a test cluster where I can reproduce the problem by sending metrics to the HEC receiver.
One clue is that the retained memory is all under StartLogsOp, while the memory under StartMetricsOp looks normal. Perhaps the way HEC payloads are processed, as events first, is causing these operations to never end. Forgive my speculation :)
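For anyone trying to reproduce these profiles, a minimal sketch of pulling a heap snapshot from the collector follows, assuming the pprof extension is enabled and listening on its default localhost:1777 endpoint (an assumption; the actual setup is not shown in this issue):

```go
package main

import (
	"io"
	"net/http"
	"os"
)

// Fetches a heap profile from the collector's pprof extension and writes it
// to heap.pprof for inspection with `go tool pprof`.
func main() {
	resp, err := http.Get("http://localhost:1777/debug/pprof/heap")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("heap.pprof")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		panic(err)
	}
}
```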
Steps to Reproduce
Send a large volume of metrics to a HEC endpoint and profile the collector's memory.
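As a rough illustration of this step, a minimal load-generator sketch that posts Splunk HEC metric events in a loop; the endpoint, token, and metric name are placeholder assumptions and need to match the splunk_hec receiver configuration:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder endpoint and token; adjust to the splunk_hec receiver config.
	const endpoint = "http://localhost:8088/services/collector"
	const token = "00000000-0000-0000-0000-000000000000"

	client := &http.Client{Timeout: 5 * time.Second}
	for i := 0; ; i++ {
		// Splunk HEC metric event: "event":"metric" with metric_name:* fields.
		body := fmt.Sprintf(
			`{"time":%d,"event":"metric","host":"loadgen","fields":{"metric_name:test.gauge":%d}}`,
			time.Now().Unix(), i%100)
		req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewBufferString(body))
		if err != nil {
			panic(err)
		}
		req.Header.Set("Authorization", "Splunk "+token)
		req.Header.Set("Content-Type", "application/json")
		resp, err := client.Do(req)
		if err != nil {
			continue
		}
		resp.Body.Close()
	}
}
```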
Expected Result
Memory should not be retained in this way.
Actual Result
Collector version
0.109.0
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
No response
Log output
No response
Additional context
We opened Splunk case 3554107 as well.