# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-programmers
This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-programmers](http://beir.ai/).
These experiments index the corpus in a "flat" manner, by concatenating the "title" and "text" into the "contents" field.
The exact configurations for these regressions are stored in [this YAML file](${yaml}).
Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
```bash
python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
```
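The `--index`, `--verify`, and `--search` flags control the individual stages of the regression. As a sketch (assuming the stages compose independently, which is how the script is typically used), subsequent runs can skip indexing:

```bash
# Re-run only retrieval and verification against an existing index
python src/main/python/run_regression.py --verify --search --regression ${test_name}
```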
All the BEIR corpora are available for download:
```bash
wget https://rgw.cs.uwaterloo.ca/pyserini/data/beir-v1.0.0-corpus.tar -P collections/
tar xvf collections/beir-v1.0.0-corpus.tar -C collections/
```
The tarball is 14 GB and has MD5 checksum `faefd5281b662c72ce03d22021e4ff6b`.
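To verify the integrity of the download before unpacking, a quick check (assuming `md5sum` is available, as on most Linux systems) is:

```bash
# Output should match the expected checksum: faefd5281b662c72ce03d22021e4ff6b
md5sum collections/beir-v1.0.0-corpus.tar
```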
After downloading and unpacking the corpora, the `run_regression.py` command above should work without any issues.
## Indexing
Typical indexing command:
```bash
${index_cmds}
```
For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
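For a concrete sense of what `${index_cmds}` expands to, the sketch below shows the general shape of an Anserini indexing invocation; the binary path, collection class, and input/index paths are illustrative assumptions here, since the actual values are generated from the YAML configuration:

```bash
# Illustrative sketch only; actual paths and options come from the regression YAML.
target/appassembler/bin/IndexCollection \
  -collection BeirFlatCollection \
  -input /path/to/beir-v1.0.0-cqadupstack-programmers \
  -index indexes/lucene-index.beir-v1.0.0-cqadupstack-programmers-flat/ \
  -generator DefaultLuceneDocumentGenerator \
  -threads 1 -storeRaw
```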
## Retrieval
Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
After indexing has completed, you should be able to perform retrieval as follows:
```bash
${ranking_cmds}
```
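As with indexing, the generated retrieval commands follow a common shape. The sketch below (with assumed topic, run, and index paths) illustrates a BM25 run with Anserini's `SearchCollection` driver:

```bash
# Illustrative sketch only; the actual command is generated from the regression YAML.
target/appassembler/bin/SearchCollection \
  -index indexes/lucene-index.beir-v1.0.0-cqadupstack-programmers-flat/ \
  -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-programmers.test.tsv.gz \
  -topicreader TsvString \
  -output runs/run.beir-v1.0.0-cqadupstack-programmers-flat.bm25.txt \
  -bm25 -removeQuery -hits 1000
```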
Evaluation can be performed using `trec_eval`:
```bash
${eval_cmds}
```
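For reference, a typical `trec_eval` invocation for BEIR reports nDCG@10; the qrels and run paths below are assumptions matching the retrieval sketch above:

```bash
# Illustrative sketch only; nDCG@10 is the standard BEIR evaluation metric.
tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 \
  tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-programmers.test.txt \
  runs/run.beir-v1.0.0-cqadupstack-programmers-flat.bm25.txt
```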
## Effectiveness
With the above commands, you should be able to reproduce the following results:
${effectiveness}