Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space" (Python, updated Mar 26, 2024)
A toolbox for benchmarking the trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track)
[ICML'2024] Can AI Assistants Know What They Don't Know?
Improving LLM truthfulness via reporting confidence