Models: various bag-of-words approaches on complete documents
This page describes experiments, integrated into Anserini's regression testing framework, on the TREC 2021 Deep Learning Track document ranking task using the MS MARCO V2 document corpus. For additional instructions on working with the MS MARCO V2 document corpus, refer to this page.
Note that the NIST relevance judgments provide far more relevant documents per topic than the "sparse" judgments provided by Microsoft; the NIST judgments are sometimes called "dense" judgments to emphasize this contrast.
Note that there are four different bag-of-words regression conditions for this task, and this page describes the following:
- Indexing Condition: each document in the MS MARCO V2 document corpus is treated as a unit of indexing
- Expansion Condition: none
The exact configurations for these regressions are stored in this YAML file. Note that this page is automatically generated from this template as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
From one of our Waterloo servers (e.g., orca), the following command will perform the complete regression, end to end:
python src/main/python/run_regression.py --index --verify --search --regression dl21-doc
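If an index from a prior run is already in place, you can skip the indexing step; a minimal sketch, assuming the script accepts these flags independently (as the flag names above suggest):
# Skip --index and only verify the existing index, then run retrieval.
python src/main/python/run_regression.py --verify --search --regression dl21-doc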
Typical indexing command:
target/appassembler/bin/IndexCollection \
-collection MsMarcoV2DocCollection \
-input /path/to/msmarco-v2-doc \
-generator DefaultLuceneDocumentGenerator \
-index indexes/lucene-index.msmarco-v2-doc/ \
-threads 24 -storeRaw \
>& logs/log.msmarco-v2-doc &
The value of -input should be a directory containing the compressed jsonl files that comprise the corpus; see this page for additional details on the corpus. For an explanation of common indexing options, see this page.
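As a quick sanity check before indexing, it can help to peek at the corpus directly; a minimal sketch, where the shard name msmarco_doc_00.gz is an assumption about how the files are named on your machine:
# List the shards in the corpus directory (adjust the path to your setup).
ls /path/to/msmarco-v2-doc/ | head
# Each shard is compressed jsonl; pretty-print the first record.
# The shard name below is hypothetical; substitute one from the listing above.
zcat /path/to/msmarco-v2-doc/msmarco_doc_00.gz | head -1 | python -m json.tool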
Topics and qrels are stored here, which is linked to the Anserini repo as a submodule. The regression experiments here evaluate on the 57 topics for which NIST has provided judgments as part of the TREC 2021 Deep Learning Track.
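To see what the retrieval commands below consume, it can help to glance at the topics (tab-separated topic id and query text, per the TsvInt topic reader) and the qrels (standard TREC format); a minimal sketch:
# Topics: one query per line, "topic-id<TAB>query text".
head -3 tools/topics-and-qrels/topics.dl21.txt
# Qrels: "topic-id iteration doc-id judgment".
head -3 tools/topics-and-qrels/qrels.dl21-doc.txt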
After indexing has completed, you should be able to perform retrieval as follows:
target/appassembler/bin/SearchCollection \
-index indexes/lucene-index.msmarco-v2-doc/ \
-topics tools/topics-and-qrels/topics.dl21.txt \
-topicReader TsvInt \
-output runs/run.msmarco-v2-doc.bm25-default.topics.dl21.txt \
-hits 1000 -bm25 &
target/appassembler/bin/SearchCollection \
-index indexes/lucene-index.msmarco-v2-doc/ \
-topics tools/topics-and-qrels/topics.dl21.txt \
-topicReader TsvInt \
-output runs/run.msmarco-v2-doc.bm25-default+rm3.topics.dl21.txt \
-hits 1000 -bm25 -rm3 -collection MsMarcoV2DocCollection &
target/appassembler/bin/SearchCollection \
-index indexes/lucene-index.msmarco-v2-doc/ \
-topics tools/topics-and-qrels/topics.dl21.txt \
-topicReader TsvInt \
-output runs/run.msmarco-v2-doc.bm25-default+rocchio.topics.dl21.txt \
-hits 1000 -bm25 -rocchio -collection MsMarcoV2DocCollection &
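Since the three commands above run in the background, it may be useful to wait for them and sanity-check the run files before evaluating; a minimal sketch:
# Block until the three background SearchCollection jobs finish.
wait
# Each run file holds up to 57 topics x 1000 hits = 57,000 TREC-format lines.
wc -l runs/run.msmarco-v2-doc.bm25-default*.topics.dl21.txt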
Evaluation can be performed using trec_eval:
target/appassembler/bin/trec_eval -c -M 100 -m map tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default.topics.dl21.txt
target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default.topics.dl21.txt
target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default.topics.dl21.txt
target/appassembler/bin/trec_eval -c -M 100 -m recip_rank tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default.topics.dl21.txt
target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default.topics.dl21.txt
target/appassembler/bin/trec_eval -c -M 100 -m map tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rm3.topics.dl21.txt
target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rm3.topics.dl21.txt
target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rm3.topics.dl21.txt
target/appassembler/bin/trec_eval -c -M 100 -m recip_rank tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rm3.topics.dl21.txt
target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rm3.topics.dl21.txt
target/appassembler/bin/trec_eval -c -M 100 -m map tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rocchio.topics.dl21.txt
target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rocchio.topics.dl21.txt
target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rocchio.topics.dl21.txt
target/appassembler/bin/trec_eval -c -M 100 -m recip_rank tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rocchio.topics.dl21.txt
target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rocchio.topics.dl21.txt
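The commands above differ only in the run file, so a loop can cut down on repetition; a convenience sketch, not part of the regression framework itself:
# Evaluate all three runs with the same set of metrics.
for tag in bm25-default bm25-default+rm3 bm25-default+rocchio; do
  run=runs/run.msmarco-v2-doc.${tag}.topics.dl21.txt
  target/appassembler/bin/trec_eval -c -M 100 -m map tools/topics-and-qrels/qrels.dl21-doc.txt ${run}
  target/appassembler/bin/trec_eval -c -M 100 -m recip_rank tools/topics-and-qrels/qrels.dl21-doc.txt ${run}
  target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.dl21-doc.txt ${run}
  target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.dl21-doc.txt ${run}
  target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.dl21-doc.txt ${run}
done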
With the above commands, you should be able to reproduce the following results:
MAP@100 | BM25 (default) | +RM3 | +Rocchio |
---|---|---|---|
DL21 (Doc) | 0.2126 | 0.2452 | 0.2467 |

MRR@100 | BM25 (default) | +RM3 | +Rocchio |
---|---|---|---|
DL21 (Doc) | 0.8367 | 0.7914 | 0.7997 |

nDCG@10 | BM25 (default) | +RM3 | +Rocchio |
---|---|---|---|
DL21 (Doc) | 0.5116 | 0.5304 | 0.5476 |

R@100 | BM25 (default) | +RM3 | +Rocchio |
---|---|---|---|
DL21 (Doc) | 0.3195 | 0.3376 | 0.3456 |

R@1000 | BM25 (default) | +RM3 | +Rocchio |
---|---|---|---|
DL21 (Doc) | 0.6739 | 0.7341 | 0.7367 |
Some of these regressions correspond to official TREC 2021 Deep Learning Track "baseline" submissions:
- d_bm25 = BM25 (default), k1=0.9, b=0.4
- d_bm25rm3 = BM25 (default) + RM3, k1=0.9, b=0.4
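Since BM25 (default) corresponds to k1=0.9 and b=0.4, the same run can also be produced with the parameters spelled out; a sketch assuming Anserini's -bm25.k1 and -bm25.b options, where the output file name is chosen here for illustration:
target/appassembler/bin/SearchCollection \
  -index indexes/lucene-index.msmarco-v2-doc/ \
  -topics tools/topics-and-qrels/topics.dl21.txt \
  -topicReader TsvInt \
  -output runs/run.msmarco-v2-doc.bm25-k0.9-b0.4.topics.dl21.txt \
  -hits 1000 -bm25 -bm25.k1 0.9 -bm25.b 0.4 &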