Releases · embeddings-benchmark/mteb
1.0.2 Improve SummEval
Major changes to SummEval:
- #99: Batched evaluation, making it significantly faster
- #99: Standardized the interface so encode is called with List[str] (see the sketch after this list); previously model.encode was called with a str, which led to some wrong scores. All affected scores on the leaderboard have already been fixed
- #97: Fixed typos
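For context, here is a minimal sketch of the encode interface MTEB calls after this change. The class name and the dummy embedding are illustrative, not MTEB code; only the method signature reflects what MTEB expects.

```python
from typing import List
import numpy as np

class MyModel:
    def encode(self, sentences: List[str], batch_size: int = 32, **kwargs) -> np.ndarray:
        # MTEB now passes a List[str] (a batch), never a bare str,
        # and expects one embedding vector per input sentence.
        # A real model would run a neural encoder here; this dummy
        # returns simple per-sentence character statistics instead.
        return np.array([[len(s), s.count(" ")] for s in sentences], dtype=float)
```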
Other:
- A new version of the paper has been released with cleaner plots & some additional scores
- Lots of cool models have been added to the leaderboard: https://huggingface.co/spaces/mteb/leaderboard
- MTEB has been accepted to the EACL 2023 conference
1.0.1 Deactivate parallel encoding
There have been several problems with the GPU parallelism employed by the beir package for Retrieval tasks, such as here & here. This patch release rolls back support for GPU parallelism with beir. Encoding will still automatically use GPUs, but only a single one, as in the sketch below. We hope the issue in beir will be fixed soon, so we can enable GPU parallelism for Retrieval tasks in MTEB again!
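Running a Retrieval task still works unchanged on one GPU. A minimal sketch, assuming the standard MTEB usage; the model name, task, and output folder are illustrative:

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Load the model onto a single GPU explicitly; encoding runs on this
# device rather than being spread across multiple GPUs.
model = SentenceTransformer("all-MiniLM-L6-v2", device="cuda:0")

evaluation = MTEB(tasks=["SciFact"])  # SciFact is one of MTEB's Retrieval tasks
evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
```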
1.0.0 Paper release & SLURM scripts
- We've released our paper: https://arxiv.org/abs/2210.07316
- Added some useful scripts for merging results & SLURM scripts in this repository: https://github.com/embeddings-benchmark/mtebscripts
0.9.1 Minor fixes
- Test release prior to 1.0.0 with minor fixes
0.9.0 Bug fixes
Lots of bug fixes across all tasks.
We aim to make this the final version before the initial 1.0.0 release.