Cardinality limit on metrics (otel_metrics_overflow) #2997
As per the investigation by @xjanin - #2993 (reply in thread)
Thanks for the investigation, @xjanin. Could you also share sample code that reproduces the issue? For example, is a filter added via a metric view?
Hello @ThomsonTan, I didn't use any view. It was more difficult than I thought to create a small example, but here goes (be careful, this code leaks memory):
This is not how I create my tags in my code, but it achieves the same effect. This is the result:
Discussed in #2993
Originally posted by xjanin July 8, 2024
Hi,
I've instrumented my application with opentelemetry-cpp (metrics only), and I use tags in my metrics. When testing with a low rate of metrics recording, I see my metrics with the expected tags in the Prometheus export on the OpenTelemetry Collector. However, when testing the application with a high rate of requests, and therefore a high rate of metrics recording, my metrics get aggregated into time series with no tags except "otel_metrics_overflow".
My understanding is that this should happen only if the SDK has to collect metrics with a tag cardinality greater than 2000 in a single collection cycle (the default limit).
Edit: https://opentelemetry.io/docs/specs/otel/metrics/sdk/#cardinality-limits
However, my tag cardinality doesn't increase with the number of requests processed, and in the OpenTelemetry Collector that my application uses, I don't see 2000 time series in the Prometheus export.
So my questions are:
Thank you and best regards,
Xavier