Releases: relari-ai/continuous-eval
v0.3.13
v0.3.11
What's Changed
- Fix issue #69 by @kelvinchanwh in #73
- Fix issue #71 by @kelvinchanwh in #72
- Fix key name of retrieved_contexts in the dataset evaluation sample by @jmartisk in #68
New Contributors
- @kelvinchanwh made their first contribution in #73
- @jmartisk made their first contribution in #68
Full Changelog: v0.3.10...v0.3.11
v0.3.10
v0.3.9
What's Changed
- Fix contexts attribute name in examples documentation in #58
- Transition to eval runner in #61
- Add sql metrics in #63
New Contributors
- @LucasLeRay made their first contribution in #58
Full Changelog: v0.3.7...v0.3.9
v0.3.7
What's Changed
- Fixed a double-counting corner case for precision / average precision in #55
- Fix required keyword for code string to ground_truth_answers in #56
New Contributors
- @stantonius made their first contribution in #56
Full Changelog: v0.3.5...v0.3.7
v0.3.5
- Add Bedrock LLM provider
- Bug fixes
v0.3.4
v0.3.2
- Metrics batch execution now uses threads by default
- Bug fixes
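The threaded batch execution noted above can be sketched roughly as follows. This is an illustrative pattern only, not continuous-eval's actual implementation; the `run_metrics_batch` helper and the `answer_length` metric are hypothetical. Threads fit this workload because LLM-based metrics are I/O-bound (network calls), so the GIL is not a bottleneck.

```python
from concurrent.futures import ThreadPoolExecutor

def run_metrics_batch(metric_fns, samples, max_workers=4):
    """Apply each metric function to every sample using a thread pool."""
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for sample in samples:
            # Submit one task per metric, then collect results in order.
            futures = [pool.submit(fn, sample) for fn in metric_fns]
            results.append(
                {fn.__name__: f.result() for fn, f in zip(metric_fns, futures)}
            )
    return results

# Hypothetical toy metric for demonstration.
def answer_length(sample):
    return len(sample["answer"].split())

batch = run_metrics_batch([answer_length], [{"answer": "a short answer"}])
# → [{"answer_length": 3}]
```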
v0.3.1
Key points:
- Added from_data class method to Dataset class
- Fixed is_empty method in EvaluationResults, MetricsResults, and TestResults
- Added error handling in LLM-based metrics
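The from_data constructor and is_empty fix above follow a pattern like this minimal sketch. It is an assumption for illustration only; the real Dataset and results classes in continuous-eval have a different, richer interface.

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    # Each item is one evaluation sample, e.g. {"question": ..., "answer": ...}.
    data: list = field(default_factory=list)

    @classmethod
    def from_data(cls, *items):
        # Build a dataset directly from in-memory samples
        # instead of loading from a file.
        return cls(data=list(items))

@dataclass
class MetricsResults:
    # Maps metric name -> list of per-sample values.
    results: dict = field(default_factory=dict)

    def is_empty(self) -> bool:
        # Empty when no metric has produced any values yet.
        return all(len(v) == 0 for v in self.results.values())

ds = Dataset.from_data({"question": "q1", "answer": "a1"})
fresh = MetricsResults().is_empty()  # True: nothing computed yet
```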