Unable to complete 10 samples #322
Hey,
I'm using criterion 0.3 and I saw that I need to increase the sample_size to at least 10. So I did. Now I get the following warning:

> Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 62.2s or reduce sample count to 10.

This message doesn't make a lot of sense in my opinion. Or did I overlook something in the guide?

Comments
No, that's just an oversight on my part. The idea here is that Criterion.rs will automatically suggest reducing the sample count to make your benchmarks fit into the specified measurement time. Unfortunately, your benchmark is so long that it would have to reduce the sample count below the minimum safe value of 10, so it gives a confusing recommendation. I'll leave this open as a reminder to myself to skip the warning if the recommended count is equal to the actual sample count.
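A minimal sketch of the two knobs involved here, using Criterion's per-group configuration (the `slow_ops` group name and the workload are hypothetical stand-ins, not from this issue):

```rust
use std::time::Duration;
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn criterion_benchmark(c: &mut Criterion) {
    let mut group = c.benchmark_group("slow_ops");
    // Give Criterion more wall-clock time to collect its samples...
    group.measurement_time(Duration::from_secs(60));
    // ...and/or pin the sample count at the minimum Criterion accepts.
    group.sample_size(10);
    group.bench_function("slow_op", |b| {
        // Hypothetical stand-in for a long-running operation.
        b.iter(|| black_box((0..1_000_000u64).sum::<u64>()))
    });
    group.finish();
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
```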
What are we supposed to do if we want to use a large amount of data to test a function per iteration, but doing so 10 times takes hours? I'm not exactly sure if iter_batched works for this use case, but ideally I want… Here's my setup; note that…
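For reference, a sketch of how `iter_batched` separates per-iteration setup (e.g. building a large input) from the timed routine; `build_large_input` and `process` are hypothetical stand-ins for the real setup and function under test:

```rust
use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion};

// Hypothetical stand-in for the expensive, untimed setup.
fn build_large_input() -> Vec<u64> {
    (0..1_000_000).collect()
}

// Hypothetical stand-in for the code under test.
fn process(data: Vec<u64>) -> u64 {
    data.iter().sum()
}

fn bench(c: &mut Criterion) {
    c.bench_function("process_large_input", |b| {
        b.iter_batched(
            build_large_input,               // setup: runs outside the measurement
            |data| black_box(process(data)), // routine: the only part timed
            BatchSize::LargeInput,           // hint that each input is big
        )
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
```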
To be honest, I'd recommend just not doing that. It's tempting to assume that larger benchmarks are better, but (at least from a statistical point of view) things are already about as good as they're going to get after a few seconds of benchmarking. There isn't much to be gained by extending a single benchmark to an hour or more unless your benchmarking environment is super noisy (in which case you might look at improving that instead). If you want to do it anyway, I would say that Criterion.rs is not intended for that use case and you might consider…
Thanks for the insight @bheisler, what do you think about this approach where, instead of benching over the entire range, I actually change the sample per iteration?
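In code, that approach might look like the sketch below; the word list `WORDS` and the `lookup` function are hypothetical stand-ins for the real data and function:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical word list and lookup, standing in for the real ones.
const WORDS: &[&str] = &["alpha", "beta", "gamma", "delta"];

fn lookup(word: &str) -> usize {
    word.len()
}

fn bench_rotating(c: &mut Criterion) {
    c.bench_function("lookup_rotating_word", |b| {
        let mut i = 0;
        b.iter(|| {
            // A different input on every iteration, cycling through the list.
            let word = WORDS[i % WORDS.len()];
            i += 1;
            black_box(lookup(black_box(word)))
        })
    });
}

criterion_group!(benches, bench_rotating);
criterion_main!(benches);
```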
Hey, thanks for your patience, I've been super busy IRL lately so I haven't had the energy to keep up with Criterion as much as I'd like. What you're doing (looking up a different word on each iteration) is... probably OK, as long as you're doing a lot of iterations. It does slightly bias the measurements towards over-weighting the values in the beginning of the list, and the analysis kind of assumes every iteration does the same work. In this case, the differences from one iteration to the next are probably small and probably washed out by having lots of iterations. Probably. Honestly, I wouldn't take the chance; I'd do a random sampling of the list instead.
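A sketch of that suggestion, assuming the `rand` crate and the same hypothetical `WORDS` and `lookup` as in the earlier sketch:

```rust
use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion};
use rand::seq::SliceRandom;

// Hypothetical word list and lookup, standing in for the real ones.
const WORDS: &[&str] = &["alpha", "beta", "gamma", "delta"];

fn lookup(word: &str) -> usize {
    word.len()
}

fn bench_random(c: &mut Criterion) {
    let mut rng = rand::thread_rng();
    c.bench_function("lookup_random_word", |b| {
        b.iter_batched(
            // Setup (untimed): draw a uniformly random word per iteration.
            || *WORDS.choose(&mut rng).unwrap(),
            // Routine (timed): just the lookup.
            |word| black_box(lookup(word)),
            BatchSize::SmallInput,
        )
    });
}

criterion_group!(benches, bench_random);
criterion_main!(benches);
```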
@bheisler
How is the latter intended to be used? Trying just…
Oh, I should have perused a little more before asking; the answer is to use `config` in the `criterion_group!` macro:

```rust
use std::time::Duration;
use criterion::{criterion_group, Criterion};

criterion_group! {
    name = benches;
    config = Criterion::default().measurement_time(Duration::from_secs(100));
    targets = criterion_benchmark
}
```

...right? (At least that's working for me.)