From 55f1bc6e072b05c2d9db1589a07e20f38902b1ec Mon Sep 17 00:00:00 2001
From: Moritz Laurer <41862082+MoritzLaurer@users.noreply.github.com>
Date: Tue, 17 Sep 2024 02:19:55 +0200
Subject: [PATCH] add tip in docs and readme referring to lighteval (#618)

---
 README.md             | 6 ++++++
 docs/source/index.mdx | 2 ++
 2 files changed, 8 insertions(+)

diff --git a/README.md b/README.md
index 21747f675..1de165677 100644
--- a/README.md
+++ b/README.md
@@ -22,6 +22,12 @@
 
 
 
+
+
+> **Tip:** For more recent evaluation approaches, for example for evaluating LLMs, we recommend our newer and more actively maintained library [LightEval](https://github.com/huggingface/lighteval).
+
+
+
 🤗 Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized.
 
 It currently contains:

diff --git a/docs/source/index.mdx b/docs/source/index.mdx
index 4729d6599..ec771def2 100644
--- a/docs/source/index.mdx
+++ b/docs/source/index.mdx
@@ -12,6 +12,8 @@ With a single line of code, you get access to dozens of evaluation methods for d
 
 Visit the 🤗 Evaluate [organization](https://huggingface.co/evaluate-metric) for a full list of available metrics. Each metric has a dedicated Space with an interactive demo for how to use the metric, and a documentation card detailing the metrics limitations and usage.
 
+> **Tip:** For more recent evaluation approaches, for example for evaluating LLMs, we recommend our newer and more actively maintained library [LightEval](https://github.com/huggingface/lighteval).
+
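As a reference for the "single line of code" usage the patched docs describe, here is a minimal sketch with the `evaluate` API; the `accuracy` metric and the sample values are illustrative choices, not part of the patch:

```python
import evaluate

# Load a metric from the Hugging Face Hub with a single line of code.
accuracy = evaluate.load("accuracy")

# Compute the metric over a handful of illustrative predictions.
results = accuracy.compute(references=[0, 1, 1, 1], predictions=[0, 1, 0, 1])
print(results)  # {'accuracy': 0.75}
```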