python[patch]: accept simple evaluators #307
Triggered via pull request on November 14, 2024 19:24
Status: Success
Total duration: 19m 16s
Artifacts: –
Annotations: 1 warning and 2 notices
benchmark
The following actions use a deprecated Node.js version and will be forced to run on node20: actions/cache@v3. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
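The deprecation warning above is usually cleared by bumping the cache action to a node20-based release. A sketch of the relevant workflow step (the step name, cached path, and key are illustrative, not taken from this repo's workflow):

```yaml
# Illustrative step: actions/cache@v4 runs on node20
- name: Cache pip dependencies
  uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('**/pyproject.toml') }}
```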
Benchmark results:
python/langsmith/evaluation/_runner.py#L1
create_5_000_run_trees: Mean +- std dev: 610 ms +- 42 ms
create_10_000_run_trees: Mean +- std dev: 1.19 sec +- 0.05 sec
create_20_000_run_trees: Mean +- std dev: 1.18 sec +- 0.05 sec
dumps_class_nested_py_branch_and_leaf_200x400: Mean +- std dev: 704 us +- 7 us
dumps_class_nested_py_leaf_50x100: Mean +- std dev: 25.0 ms +- 0.2 ms
dumps_class_nested_py_leaf_100x200: Mean +- std dev: 104 ms +- 2 ms
dumps_dataclass_nested_50x100: Mean +- std dev: 25.3 ms +- 0.2 ms
WARNING: the benchmark result may be unstable
* the standard deviation (14.4 ms) is 22% of the mean (64.7 ms)
Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m pyperf system tune' command to reduce the system jitter.
Use pyperf stats, pyperf dump and pyperf hist to analyze results.
Use --quiet option to hide these warnings.
dumps_pydantic_nested_50x100: Mean +- std dev: 64.7 ms +- 14.4 ms
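The instability warning is driven by the ratio of standard deviation to mean. A minimal sketch of that check, using the figures reported above for `dumps_pydantic_nested_50x100` (the helper name is ours, not pyperf's):

```python
def relative_stdev(mean_ms: float, stdev_ms: float) -> float:
    """Ratio pyperf's instability warning reports: std dev over mean."""
    return stdev_ms / mean_ms

# Figures from the result line above: mean 64.7 ms, std dev 14.4 ms
ratio = relative_stdev(64.7, 14.4)
print(f"{ratio:.0%}")  # -> 22%, matching the warning text
```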
WARNING: the benchmark result may be unstable
* the standard deviation (28.5 ms) is 13% of the mean (215 ms)
dumps_pydanticv1_nested_50x100: Mean +- std dev: 215 ms +- 29 ms
Comparison against main:
python/langsmith/evaluation/_runner.py#L1
+------------------------------------+----------+------------------------+
| Benchmark | main | changes |
+====================================+==========+========================+
| dumps_pydanticv1_nested_50x100 | 224 ms | 215 ms: 1.04x faster |
+------------------------------------+----------+------------------------+
| create_5_000_run_trees | 627 ms | 610 ms: 1.03x faster |
+------------------------------------+----------+------------------------+
| create_20_000_run_trees | 1.20 sec | 1.18 sec: 1.02x faster |
+------------------------------------+----------+------------------------+
| create_10_000_run_trees | 1.21 sec | 1.19 sec: 1.01x faster |
+------------------------------------+----------+------------------------+
| dumps_class_nested_py_leaf_100x200 | 105 ms | 104 ms: 1.01x faster |
+------------------------------------+----------+------------------------+
| dumps_dataclass_nested_50x100 | 25.4 ms | 25.3 ms: 1.00x faster |
+------------------------------------+----------+------------------------+
| dumps_class_nested_py_leaf_50x100 | 25.1 ms | 25.0 ms: 1.00x faster |
+------------------------------------+----------+------------------------+
| Geometric mean | (ref) | 1.02x faster |
+------------------------------------+----------+------------------------+
Benchmark hidden because not significant (2): dumps_pydantic_nested_50x100, dumps_class_nested_py_branch_and_leaf_200x400
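The "Geometric mean" row summarizes the per-benchmark speedups in the table. A quick sketch reproducing it from the table's own numbers (main / changes for each significant benchmark):

```python
import math

# Per-benchmark speedups (main / changes) taken from the table above
speedups = [
    224 / 215,    # dumps_pydanticv1_nested_50x100
    627 / 610,    # create_5_000_run_trees
    1.20 / 1.18,  # create_20_000_run_trees
    1.21 / 1.19,  # create_10_000_run_trees
    105 / 104,    # dumps_class_nested_py_leaf_100x200
    25.4 / 25.3,  # dumps_dataclass_nested_50x100
    25.1 / 25.0,  # dumps_class_nested_py_leaf_50x100
]

# Geometric mean: exp of the average log-speedup
geo_mean = math.exp(sum(math.log(s) for s in speedups) / len(speedups))
print(f"Geometric mean: {geo_mean:.2f}x faster")  # -> 1.02x faster
```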