
Releases: embeddings-benchmark/mteb

1.0.2 Improve SummEval

28 Mar 10:33

Major changes to SummEval:

  • #99: Batched evaluation, making it significantly faster
  • #99: Standardized the interface to call `encode` with `List[str]`; previously `model.encode` was called with a single `str`, which led to some incorrect scores. All scores on the leaderboard have already been fixed
  • #97: Fixed typos
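The interface change in #99 means models are now called with a batch of sentences rather than one string at a time. A minimal sketch of what such an `encode` method looks like is below; `ToyModel` and its dummy embedding scheme are purely illustrative and not part of MTEB itself.

```python
from typing import List


class ToyModel:
    """Illustrative model exposing the batched encode interface MTEB expects."""

    def encode(self, sentences: List[str], batch_size: int = 32) -> List[List[float]]:
        """Return one fixed-size vector per input sentence.

        The input is always a List[str]; processing it in batches is what
        makes evaluation significantly faster than per-string calls.
        """
        embeddings = []
        for start in range(0, len(sentences), batch_size):
            batch = sentences[start:start + batch_size]
            for text in batch:
                # Deterministic dummy embedding based on character codes;
                # a real model would run a neural encoder here.
                codes = [ord(c) for c in text] or [0]
                embeddings.append([
                    float(len(text)),
                    float(sum(codes)) / len(codes),
                    float(max(codes)),
                ])
        return embeddings


model = ToyModel()
vectors = model.encode(["a summary", "another summary", "a third"])
print(len(vectors), len(vectors[0]))  # one 3-dimensional vector per sentence
```

Any model wrapped this way receives the full list of texts per call, so it can control its own batching on the GPU instead of being invoked once per string.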

Other:

  • A new version of the paper has been released with cleaner plots & some additional scores 😊
  • Lots of cool models have been added to the leaderboard: https://huggingface.co/spaces/mteb/leaderboard
  • MTEB has been accepted to the EACL 2023 conference

1.0.1 Deactivate parallel encoding

29 Nov 18:08

There have been several problems with the GPU parallelism employed by the beir package for Retrieval tasks, such as here & here. This patch release rolls back support for GPU parallelism with beir. Encoding will still automatically use a GPU, but only a single one. We hope the issue in beir will be fixed soon, so we can re-enable GPU parallelism for Retrieval tasks in MTEB! πŸ€—

1.0.0 Paper release & SLURM scripts

17 Oct 07:13

0.9.1 Minor fixes

13 Oct 15:13
  • Test release prior to 1.0.0 with minor fixes

0.9.0 Bug fixes

06 Oct 15:14

Lots of bug fixes across all tasks. πŸ‘»
We aim for this to be the final version before the 1.0.0 release. πŸ€—