prometheusremotewrite emits noisy errors on empty data points #4972
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments.

Pinging code owners for exporter/prometheusremotewrite: @Aneurysm9. See Adding Labels via Comments if you do not have permissions to add labels yourself.
The part about the noisy error message should be fixed now; each type of metric now has a condition to handle the case of empty data points.
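For illustration, a guard of that shape looks roughly like the minimal sketch below. It is written against the current `pdata`/`pmetric` API rather than copied from the exporter's translator code, and `hasDataPoints` plus the metric name are hypothetical:

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

// hasDataPoints is a hypothetical helper showing the kind of per-type check
// the translator needs: each metric type stores its points in a different
// sub-message, so emptiness has to be tested per type.
func hasDataPoints(m pmetric.Metric) bool {
	switch m.Type() {
	case pmetric.MetricTypeGauge:
		return m.Gauge().DataPoints().Len() > 0
	case pmetric.MetricTypeSum:
		return m.Sum().DataPoints().Len() > 0
	case pmetric.MetricTypeHistogram:
		return m.Histogram().DataPoints().Len() > 0
	case pmetric.MetricTypeExponentialHistogram:
		return m.ExponentialHistogram().DataPoints().Len() > 0
	case pmetric.MetricTypeSummary:
		return m.Summary().DataPoints().Len() > 0
	default:
		// MetricTypeEmpty carries no data at all.
		return false
	}
}

func main() {
	// Build a gauge metric with no data points, the shape that triggered
	// the noisy error in the exporter.
	m := pmetric.NewMetrics().ResourceMetrics().AppendEmpty().
		ScopeMetrics().AppendEmpty().Metrics().AppendEmpty()
	m.SetName("example.empty.gauge") // illustrative name
	m.SetEmptyGauge()

	fmt.Println(hasDataPoints(m)) // false: an exporter with this guard skips it quietly
}
```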
@nicks do you still have this issue?

nope, let's close it!
I think there may have been a regression because I'm seeing this behavior and it makes the logs completely unusable for debugging. Version: opentelemetry-collector-contrib:0.75.0
Updated to the latest version of the opentelemetry-operator (0.33.0), but same here:
@crlnz's PR seemed to have a nice fix/workaround but it was closed. We're still experiencing this issue (on v0.83, but we'll try to update to v0.89 and see if that helps). @Aneurysm9 @rapphil
@cyberw We're currently in the process of completing the EasyCLA internally, so this will be re-opened eventually. It's taking a little longer because we need to poke around regarding this issue. If anyone who has already signed the EasyCLA would like to take ownership of these changes, please feel free to fork my changes and open a new PR.
We are still running into this issue on 0.95.0:
Hello, I've done some investigation on my end because this issue is still affecting us in 0.97 as well. I think the main problem is that there is an inconsistency between what receivers push into the pipeline and what (some) exporters expect. In our case, the issue is happening with the `windowsperfcounters` receiver. I haven't checked other receivers, but any receiver that does something like this will cause `prometheusremotewrite` to emit these errors. There are a few ways that I can think of for fixing this.

I personally feel like option 1 is the best, as it reduces unnecessary processing/exporting work. Option 2 might be a decent temporary workaround, and option 3 seems like it could lead down a path of undetected data loss.

In the broader sense, there is also the problem of an OTLP client pushing a message with empty metrics. In that case it's not clear whether the OTLP receiver should reject the message as malformed, or drop the empty metrics before pushing them into the pipeline. (I haven't checked, but if this is already specified, then receivers should probably implement a similar behavior.)

In my specific case of the Windows Perf Counters, the error is already being logged by the receiver (once) at startup, and then results in one error from `prometheusremotewrite` per scrape cycle. My plan is to open a PR that fixes the scrape behavior of `windowsperfcounters`.
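To picture the inconsistency described above, here is a minimal, hypothetical scraper-style snippet (not the actual `windowsperfcounters` code; `scrapeOne` and the metric name are made up) showing how a `Metric` can reach the pipeline with zero data points when the underlying counter is unavailable:

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

// scrapeOne is a hypothetical scraper step: the Metric and its descriptor are
// created up front, but a data point is only appended when a value could be
// read. If the counter is missing, the empty Metric still flows downstream.
func scrapeOne(ms pmetric.MetricSlice, value float64, counterExists bool) {
	m := ms.AppendEmpty()
	m.SetName("example.perfcounter.value") // illustrative name
	gauge := m.SetEmptyGauge()
	if counterExists {
		gauge.DataPoints().AppendEmpty().SetDoubleValue(value)
	}
}

func main() {
	md := pmetric.NewMetrics()
	ms := md.ResourceMetrics().AppendEmpty().ScopeMetrics().AppendEmpty().Metrics()

	scrapeOne(ms, 42, false) // counter object missing on this host

	// The batch now contains one Metric with zero data points; an exporter
	// that expects at least one point logs an error for it on every cycle.
	fmt.Println("metrics:", ms.Len(), "data points:", md.DataPointCount())
}
```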
/label -Stale

The PR for fixing `windowsperfcounters` is #32384.
…s. (#32384)

**Description:** When scraping Windows Performance Counters, it's possible that some counter objects do not exist. When that is the case, `windowsperfcounters` will still create the `Metric` object with no datapoints in it. Some exporters throw errors when encountering this. The fix proposed in this PR does an extra pass after all metrics have been scraped and removes the `Metric` objects for which no datapoints were scraped.

**Link to tracking Issue:** #4972

**Testing:**
- Confirmed that the `debug` exporter sees `ResourceMetrics` with no metrics and doesn't throw.
- Confirmed that the `prometheusremotewrite` exporter no longer complains about empty datapoints and that it skips the export when no metrics are available.
- ~~No unit tests added for now. I will add a unit test once I have confirmation that this is the right way to remove empty datapoints~~ Added a unit test covering the changes and enabling fixture validation, which was not previously implemented.
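The "extra pass" described there can be sketched against the `pdata` API as below. This is a hypothetical illustration of the approach, not the merged PR's code; `removeEmptyMetrics` and its structure are assumptions:

```go
package emptymetrics

import "go.opentelemetry.io/collector/pdata/pmetric"

// removeEmptyMetrics drops every Metric that ended up with zero data points
// after scraping, so downstream exporters never see them. Hypothetical sketch;
// the merged PR's implementation may differ in structure and naming.
func removeEmptyMetrics(md pmetric.Metrics) {
	rms := md.ResourceMetrics()
	for i := 0; i < rms.Len(); i++ {
		sms := rms.At(i).ScopeMetrics()
		for j := 0; j < sms.Len(); j++ {
			sms.At(j).Metrics().RemoveIf(func(m pmetric.Metric) bool {
				switch m.Type() {
				case pmetric.MetricTypeGauge:
					return m.Gauge().DataPoints().Len() == 0
				case pmetric.MetricTypeSum:
					return m.Sum().DataPoints().Len() == 0
				case pmetric.MetricTypeHistogram:
					return m.Histogram().DataPoints().Len() == 0
				case pmetric.MetricTypeExponentialHistogram:
					return m.ExponentialHistogram().DataPoints().Len() == 0
				case pmetric.MetricTypeSummary:
					return m.Summary().DataPoints().Len() == 0
				default:
					return true // MetricTypeEmpty: nothing to export
				}
			})
		}
	}
}
```

Running a pass like this at the end of each scrape keeps the fix local to the receiver, which matches the preference expressed earlier in the thread for cutting the empty metrics off at the source.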
This issue has been closed as inactive because it has been stale for 120 days with no activity. |
Describe the bug
Here's the error message:
Here's the metric descriptor emitted by the logging exporter for the same metric:
I don't know enough about the contracts here to know if this is a bug in the opencensus code that I'm using to send the metric, or in the batcher, or in the prometheusremotewrite exporter, or something else entirely.
What did you expect to see?
No error messages
What did you see instead?
An error message
What version did you use?
Docker image: otel/opentelemetry-collector:0.15.0
What config did you use?
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 14.2")
Additional context
Add any other context about the problem here.