Prometheus "Counter" metrics are skipped (sent from Python Prometheus client) #3557
Comments
cc @Aneurysm9
It isn't clear to me why counters would be dropped if they have that suffix, but removing it doesn't seem to cause any compliance test failures. I will put up a PR shortly to add _total to the list of trimmable suffixes.
@Aneurysm9 should this issue be closed?
@nayasam can you give it a try using the newly released 0.30.0?
Thank you. With release 0.30.0, I do see the counter, gauge, summary, and histogram Prometheus metrics (generated from the Python client) pulled by the PrometheusReceiver and properly exported to the Prometheus backend (via the PrometheusExporter).
However, I am seeing the following in the collector stdout/log. I am not sure which metrics failed to translate; FYI, I am not sending any metric with metric_name="":
2021-07-15T10:59:06.483-0400 error prometheusexporter/accumulator.go:103 failed to translate metric {"kind": "exporter", "name": "prometheus", "data_type": "\u0000", "metric_name": ""}
go.opentelemetry.io/collector/exporter/prometheusexporter.(*lastValueAccumulator).addMetric
go.opentelemetry.io/collector/exporter/prometheusexporter/accumulator.go:103
go.opentelemetry.io/collector/exporter/prometheusexporter.(*lastValueAccumulator).Accumulate
go.opentelemetry.io/collector/exporter/prometheusexporter/accumulator.go:74
go.opentelemetry.io/collector/exporter/prometheusexporter.(*collector).processMetrics
go.opentelemetry.io/collector/exporter/prometheusexporter/collector.go:54
go.opentelemetry.io/collector/exporter/prometheusexporter.(*prometheusExporter).ConsumeMetrics
go.opentelemetry.io/collector/exporter/prometheusexporter/prometheus.go:100
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsRequest).export
go.opentelemetry.io/collector/exporter/exporterhelper/metrics.go:52
go.opentelemetry.io/collector/exporter/exporterhelper.(*timeoutSender).send
go.opentelemetry.io/collector/exporter/exporterhelper/common.go:229
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
go.opentelemetry.io/collector/exporter/exporterhelper/queued_retry.go:252
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
go.opentelemetry.io/collector/exporter/exporterhelper/metrics.go:117
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
go.opentelemetry.io/collector/exporter/exporterhelper/queued_retry.go:174
go.opentelemetry.io/collector/exporter/exporterhelper.NewMetricsExporter.func2
go.opentelemetry.io/collector/exporter/exporterhelper/metrics.go:97
go.opentelemetry.io/collector/consumer/consumerhelper.ConsumeMetricsFunc.ConsumeMetrics
go.opentelemetry.io/collector/consumer/consumerhelper/metrics.go:29
go.opentelemetry.io/collector/processor/batchprocessor.(*batchMetrics).export
go.opentelemetry.io/collector/processor/batchprocessor/batch_processor.go:285
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).sendItems
go.opentelemetry.io/collector/processor/batchprocessor/batch_processor.go:183
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).startProcessingCycle
go.opentelemetry.io/collector/processor/batchprocessor/batch_processor.go:144
Since the issue reported is solved, this issue can be closed. The observation I mentioned about failing to translate metrics in the PrometheusExporter should be a separate issue; I will create one for that.
Closing this issue as resolved.
Describe the bug
We observed that Prometheus "Counter" metrics (sent from the Python prometheus_client API) are skipped by the OpenTelemetry Collector when it is configured (in the PrometheusReceiver configuration) to scrape them from an HTTP endpoint. The issue is not seen when Prometheus metrics are sent using the C# Prometheus client.
With the Python prometheus_client API, Counter metrics are sent with a _total suffix appended (this is not the case with the C# Prometheus client API). It appears that this _total suffix causes the Counter metrics to be skipped in the OpenTelemetry Collector.
Upon checking the Python prometheus_client code (https://github.com/prometheus/client_python/blob/master/prometheus_client/registry.py#L58-L70), we observe that a _total suffix is appended to Counter metric names, _sum and _count suffixes are appended to Summary metric names, and a _bucket suffix is appended to Histogram metric names.
In the OpenTelemetry Collector code (version 0.28.0), the suffixes appended to Summary and Histogram metric names are trimmed, but not the _total suffix appended to Counter metric names.
https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/prometheusreceiver/internal/metricsbuilder.go#L33-L46
https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/prometheusreceiver/internal/metricsbuilder.go#L215-L222
However, when I update the code in metricsbuilder.go (of the OpenTelemetry Collector) as follows, i.e., adding a metricsSuffixTotalCount constant so that the _total suffix of Counters is also trimmed (see the commented lines in the snippet below), I can see Prometheus Counter metrics properly exported by the PrometheusExporter (of the OpenTelemetry Collector) and sent to the Prometheus backend.
const (
	metricsSuffixCount  = "_count"
	metricsSuffixBucket = "_bucket"
	metricsSuffixSum    = "_sum"
	startTimeMetricName = "process_start_time_seconds"
	scrapeUpMetricName  = "up"
	// Added: the OpenMetrics counter suffix appended by the Python client.
	metricsSuffixTotalCount = "_total"
)
var (
	// Added metricsSuffixTotalCount so that "_total" is trimmed as well.
	trimmableSuffixes     = []string{metricsSuffixBucket, metricsSuffixCount, metricsSuffixSum, metricsSuffixTotalCount}
	errNoDataToBuild      = errors.New("there's no data to build")
	errNoBoundaryLabel    = errors.New("given metricType has no BucketLabel or QuantileLabel")
	errEmptyBoundaryLabel = errors.New("BucketLabel or QuantileLabel is empty")
)
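With _total included in trimmableSuffixes, the receiver should, for example, map a scraped series named requests_total back to a counter metric named requests, matching how the _count, _sum, and _bucket suffixes are already stripped for summaries and histograms.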
Why do we see Prometheus Counter metrics skipped by the collector when they are sent from the Python Prometheus client API, but not from other Prometheus client APIs (e.g., C#)? It appears that the Python Prometheus client supports OpenMetrics, which requires the _total suffix to be appended to Counters, whereas the C# Prometheus client does not support OpenMetrics.
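To make the difference concrete, here is a minimal sketch (the metric name demo_requests is hypothetical) that renders the same Counter in both the Prometheus text format and the OpenMetrics format using prometheus_client:

```python
from prometheus_client import CollectorRegistry, Counter, generate_latest
# The OpenMetrics renderer lives in a separate module of the same package.
from prometheus_client.openmetrics.exposition import generate_latest as om_generate_latest

registry = CollectorRegistry()
# Hypothetical metric name; the client exposes the sample as "demo_requests_total".
requests = Counter("demo_requests", "Total demo requests handled.", registry=registry)
requests.inc(5)

print(generate_latest(registry).decode())     # Prometheus text format
print(om_generate_latest(registry).decode())  # OpenMetrics format
```

If I understand the exposition formats correctly, the Prometheus output declares "# TYPE demo_requests_total counter" while the OpenMetrics output declares the family as "# TYPE demo_requests counter", but in both cases the sample itself is named demo_requests_total, so the _total suffix is what the scraping collector actually sees.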
This issue is similar to https://github.com/open-telemetry/opentelemetry-collector/issues/3118 (which is closed).
Steps to reproduce
Use the prometheus_client API (https://github.com/prometheus/client_python) to expose Prometheus Counter metrics over HTTP, and scrape them with the OpenTelemetry Collector.
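A minimal reproduction sketch, assuming the collector scrapes localhost:8000 (the port, metric name, and job name below are hypothetical):

```python
import time
from prometheus_client import Counter, start_http_server

# The Python client exposes this Counter as "demo_requests_total".
requests = Counter("demo_requests", "Total demo requests handled.")

if __name__ == "__main__":
    start_http_server(8000)  # serve /metrics on localhost:8000
    while True:
        requests.inc()
        time.sleep(1)
```

and a collector configuration along these lines (endpoints and the job name are assumptions):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: python-demo
          scrape_interval: 5s
          static_configs:
            - targets: ["localhost:8000"]

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheus]
```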
What did you expect to see?
Prometheus Counter metrics are properly received by the PrometheusReceiver (of the collector) and exported.
What did you see instead?
Prometheus Counter metrics are skipped.
What version did you use?
Prometheus client version 0.11.0 (https://github.com/prometheus/client_python), OpenTelemetry Collector version 0.28.0
Environment
OS: Linux
Additional context
The same question was asked of the maintainers of the Python Prometheus client (see prometheus/client_python#678), and they believe this is an issue with the OpenTelemetry Collector: "All counters should end with a _total suffix, and that is actually required for OpenMetrics..... opentelemetry collector negotiates OpenMetrics preferentially, so it definitely needs to handle _total suffixes on counters."