Prometheus metrics tags grow unbounded over time #35710
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
I believe this is actually an issue with the hostmetrics receiver, and I've updated the issue description.
It looks like you have created an infinite telemetry loop. The prometheus exporter serves metrics on the endpoint, and then the Prometheus receiver scrapes the metrics and forwards them to the prometheus exporter through the metrics pipeline you've configured. I would recommend removing the prometheus receiver from your setup.
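For illustration, here is a minimal sketch of the kind of configuration that produces this loop, assuming the prometheus exporter listens on 0.0.0.0:8889 and the prometheus receiver scrapes that same endpoint (receiver names, scrapers, and intervals are assumptions, not the reporter's actual config):

```yaml
# Hypothetical config sketching the feedback loop described above;
# endpoints, scrapers, and intervals are assumptions, not the reporter's file.
receivers:
  hostmetrics:
    scrapers:
      cpu:
      memory:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-collector
          scrape_interval: 15s
          static_configs:
            - targets: ["localhost:8889"]   # the collector's own exporter endpoint

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"

service:
  pipelines:
    metrics:
      # The loop: the prometheus receiver re-ingests what the prometheus
      # exporter just served, so each scrape adds another "exported_" prefix
      # to conflicting labels.
      receivers: [hostmetrics, prometheus]
      exporters: [prometheus]
      # Fix: drop the prometheus receiver, i.e. receivers: [hostmetrics]
```

With the prometheus receiver removed from the pipeline's receivers list, the exporter only serves the hostmetrics data and the accumulating exported_ prefixes stop.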
Thanks @dashpole, seems like I've misunderstood how to configure this pipeline. I'll try that and update!
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Feel free to reopen if you have further questions.
Component(s)
receiver/hostmetrics
What happened?
Description
The longer my OpenTelemetry Collector instance runs, the longer each metric's list of tags becomes, with names that are continually prepended with "exported_". For example, if you run the config I've attached with this docker compose YAML and then query localhost:8889, you'll see metrics that match the logs I've attached.
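For reference, a docker compose sketch approximating the setup described above; the service name, config mount, and published port are assumptions, not the actual attachment:

```yaml
# Hypothetical docker-compose.yml approximating the reported setup;
# file paths and service name are assumptions, not the actual attachment.
services:
  otel-collector:
    image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.111.0
    command: ["--config=/etc/otelcol-contrib/config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "8889:8889"   # prometheus exporter endpoint, queried at localhost:8889
```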
Expected Result
Metrics are emitted without tags that grow infinitely in size.
Collector version
ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.111.0
Environment information
Environment
OS: Ubuntu 24.04
OpenTelemetry Collector configuration
Log output
Additional context
This may be related to another issue I've observed, where the container consumes disk space without bound. The dip in this graph is what happened on my server immediately after stopping the container; the value the disk utilization dropped to is the steady state from before the container was started.