Measure how efficient `with_label_values()` is #7568
Conversation
Time per `inc()` call:

- when caching counters: 0.050249520625 microseconds
- when caching label values: 0.624767583375 microseconds
- when caching strings: 0.659365745875 microseconds
- when using `with_label_values()`: 0.87889163825 microseconds
cc: @matklad
Do we actually want to merge it? I am skeptical of the value of benchmarks, as we don't actually run them continuously. I think the right approach is to do ad-hoc investigation, document the outcomes at a point in time in some issue, link the code there, but don't get it into master. No particularly strong opinion though.
NB: this comment applies in general. The unfortunate outcome of not documenting the results of an investigation is that its outcomes become folklore and are eventually lost to time. For investigations that are trivial to reproduce it might be fine, but ultimately some people leave, others forget details, and the same mistake that prompted the investigation in the first place will be made again. While it isn't strictly necessary to merge benchmarks per se, I think it is worthwhile to spend some time to at least document what the outcome of this investigation is. Should we use a specific pattern out of these three when implementing metrics? Add an example/recommendations to
I’ve taken the liberty of adding two more cases.
From that I think a quick improvement would be to offer
OK, one more. I always suspected Rust’s format is shite:
Can’t help myself… ;)
Mostly I was curious about the overhead of
A couple of crates to be aware of in this area:
This PR is an ongoing investigation, not a result of an investigation yet.
Thank you! I simply wanted to compare the worst case
As per benchmarks in near#7568, using `format!` as opposed to calling `to_string` directly has ~100ns of overhead. There's no reason not to get rid of that overhead, especially since using `to_string` is actually shorter to type. Note that it may be beneficial to further optimise integer formatting by using a fixed-size buffer and a custom conversion which doesn't use std. That optimisation is outside the scope of this commit.
cc: @matklad

```
% cargo bench -p near-o11y
   Compiling near-o11y v0.0.0 (/storage/code/nearcore-master/core/o11y)
    Finished bench [optimized] target(s) in 1.05s
     Running unittests src/lib.rs (target/release/deps/near_o11y-ccbb448e66a5a4d9)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running benches/metrics.rs (target/release/deps/metrics-d531efabdee84e90)

running 3 tests
test inc_counter_vec_cached            ... bench:  21 ns/iter (+/- 1)
test inc_counter_vec_cached_str        ... bench: 183 ns/iter (+/- 1)
test inc_counter_vec_with_label_values ... bench: 506 ns/iter (+/- 10)

test result: ok. 0 passed; 0 failed; 0 ignored; 3 measured
```