From 8d9779079646c9282c14f6799d254271b3768637 Mon Sep 17 00:00:00 2001
From: MagicCaster <97146796+newsbreakDuadua9@users.noreply.github.com>
Date: Thu, 3 Oct 2024 19:06:35 -0700
Subject: [PATCH] Update README.md

add missing period
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 78e8ecd..5ac0ebf 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@ RAGTruth is a word-level hallucination corpus in various tasks within the Retrie
 RAG has become a main technique for alleviating hallucinations in large language models (LLMs). Despite the integration of RAG, LLMs may still present unsupported or contradictory claims to the retrieved contents. In order to develop effective hallucination prevention strategies under RAG, it is important to create benchmark datasets that can measure the extent of hallucination. RAGTruth comprises nearly 18,000 naturally generated responses from diverse LLMs using RAG. These responses have undergone meticulous manual annotations at both the individual cases and word levels, incorporating evaluations of hallucination intensity.
 
 ## Updates
-1. [2024/06] We released our training and evaluation code. Model weight can be found [here](https://github.com/CodingLL/RAGTruth_Eval/tree/master)
+1. [2024/06] We released our training and evaluation code. Model weight can be found [here](https://github.com/CodingLL/RAGTruth_Eval/tree/master).
 2. [2024/02] We updated the data: we included more annotated hallucinations and added one new meta, `implicit_true`.
 3. [2024/01] We released the RAGTruth corpus.
 ## Dataset