
Extended

Justin Conklin edited this page Aug 24, 2021 · 6 revisions

Note: there is an updated version of this example here.

Continuing our example from the README, we'll now show how running benchmarks in the same process can sometimes influence results. Below, we use the run-expr macro and the bench-opts from before to manually run our benchmarks.

(require '[demo.core :as demo]
         '[demo.utils :as utils]
         '[jmh.core :as jmh])

(def bench-fns [[:str utils/make-str]
                [:vec utils/make-vec]])

(for [count [31 100000]
      [name make] bench-fns
      :let [idx (* 0.5 count)
            val (make count)
            result (jmh/run-expr (demo/value-at val idx) bench-opts)]]
  {:tag [name count], :score (:score result)})
;; => ({:tag [:str 31],     :score [1.29379504484452E8 "ops/s"]}
;;     {:tag [:str 100000], :score [1.39374070077195E8 "ops/s"]}
;;     {:tag [:vec 31],     :score [7.9548957250226E7 "ops/s"]}
;;     {:tag [:vec 100000], :score [5.8700906284802E7 "ops/s"]})

Notice how the :str results remain consistent with the previous results, but the first :vec result is slower than before. To investigate, let's try interleaving calls to each implementation of value-at.

(for [[name make] bench-fns
      count [31 100000]
      :let [idx (* 0.5 count)
            val (make count)
            result (jmh/run-expr (demo/value-at val idx) bench-opts)]]
  {:tag [name count], :score (:score result)})
;; => ({:tag [:str 31],     :score [1.28725659977987E8 "ops/s"]}
;;     {:tag [:vec 31],     :score [8.4931939178099E7 "ops/s"]}
;;     {:tag [:str 100000], :score [8.4740128604998E7 "ops/s"]}
;;     {:tag [:vec 100000], :score [6.1636447829743E7 "ops/s"]})

Now the second :str result is slower than the first. In fact, both of the above results are misleading. Over ten runs, the manual results are noticeably more varied (the criterium results show a similar trend), while the forked results remain mostly consistent, with only one or two outliers. A likely explanation is found in the javadoc here.

Sometimes, however, benchmarking within the same JVM process is what you want. Even then, we don't have to resort to running benchmarks manually as above: simply specifying :fork 0 in the bench-opts accomplishes the same thing.
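As a sketch of that option (assuming the bench-opts map from the README; the in-process-opts name is hypothetical), disabling forking might look like:

```clojure
;; Assumption: :fork 0 tells jmh-clojure to run the benchmark in the
;; current JVM process rather than spawning a fresh, forked JVM.
(def in-process-opts
  (assoc bench-opts :fork 0))

;; Used exactly like before, e.g.:
;; (jmh/run-expr (demo/value-at (utils/make-vec 31) 15) in-process-opts)
```

The trade-off is the one demonstrated above: in-process runs avoid JVM startup cost but are more susceptible to profile pollution from earlier benchmarks in the same process.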

Finally, note that we could probably achieve even more consistency in the initial forked runs by omitting the :quick option type, at the cost of longer run times.
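As a rough sketch (assuming the README's bench-opts stores the :quick preset under the :type key, and with a hypothetical thorough-opts name), that change could look like:

```clojure
;; Assumption: removing the :quick preset falls back to jmh-clojure's
;; default warmup and measurement settings, which run longer and
;; typically produce steadier scores across forked runs.
(def thorough-opts
  (dissoc bench-opts :type))

;; e.g.:
;; (jmh/run-expr (demo/value-at (utils/make-str 31) 15) thorough-opts)
```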
