diff --git a/README.md b/README.md index 4c7aa4b869..acddba16cc 100644 --- a/README.md +++ b/README.md @@ -105,32 +105,32 @@ See individual pages for details! + Regressions for Mr. TyDi (v1.1) baselines: [ar](docs/regressions-mrtydi-v1.1-ar.md), [bn](docs/regressions-mrtydi-v1.1-bn.md), [en](docs/regressions-mrtydi-v1.1-en.md), [fi](docs/regressions-mrtydi-v1.1-fi.md), [id](docs/regressions-mrtydi-v1.1-id.md), [ja](docs/regressions-mrtydi-v1.1-ja.md), [ko](docs/regressions-mrtydi-v1.1-ko.md), [ru](docs/regressions-mrtydi-v1.1-ru.md), [sw](docs/regressions-mrtydi-v1.1-sw.md), [te](docs/regressions-mrtydi-v1.1-te.md), [th](docs/regressions-mrtydi-v1.1-th.md) + Regressions for BEIR (v1.0.0): + TREC-COVID: ["flat" baseline](docs/regressions-beir-v1.0.0-trec-covid-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-trec-covid-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.md) - + BioASQ: ["flat" baseline](docs/regressions-beir-v1.0.0-bioasq-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.md) + + BioASQ: ["flat" baseline](docs/regressions-beir-v1.0.0-bioasq-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-bioasq-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.md) + NFCorpus: ["flat" baseline](docs/regressions-beir-v1.0.0-nfcorpus-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-nfcorpus-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.md) + NQ: ["flat" baseline](docs/regressions-beir-v1.0.0-nq-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-nq-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-nq-splade-distil-cocodenser-medium.md) - + HotpotQA: ["flat" baseline](docs/regressions-beir-v1.0.0-hotpotqa-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.md) + + HotpotQA: ["flat" baseline](docs/regressions-beir-v1.0.0-hotpotqa-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-hotpotqa-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.md) + FiQA-2018: ["flat" baseline](docs/regressions-beir-v1.0.0-fiqa-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-fiqa-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.md) - + Signal-1M(RT): ["flat" baseline](docs/regressions-beir-v1.0.0-signal1m-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.md) + + Signal-1M(RT): ["flat" baseline](docs/regressions-beir-v1.0.0-signal1m-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-signal1m-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.md) + TREC-NEWS: ["flat" baseline](docs/regressions-beir-v1.0.0-trec-news-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-trec-news-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.md) + Robust04: ["flat" baseline](docs/regressions-beir-v1.0.0-robust04-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-robust04-multifield.md), [SPLADE-distill 
CoCodenser-medium](docs/regressions-beir-v1.0.0-robust04-splade-distil-cocodenser-medium.md) + ArguAna: ["flat" baseline](docs/regressions-beir-v1.0.0-arguana-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-arguana-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-arguana-splade-distil-cocodenser-medium.md) - + Touche2020: ["flat" baseline](docs/regressions-beir-v1.0.0-webis-touche2020-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.md) - + CQADupStack-Android: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-android-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.md) - + CQADupStack-English: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-english-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.md) - + CQADupStack-Gaming: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-gaming-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.md) - + CQADupStack-Gis: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-gis-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.md) - + CQADupStack-Mathematica: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-mathematica-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.md) - + CQADupStack-Physics: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-physics-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.md) - + CQADupStack-Programmers: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-programmers-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.md) - + CQADupStack-Stats: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-stats-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.md) - + CQADupStack-Tex: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-tex-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.md) - + CQADupStack-Unix: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-unix-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.md) - + CQADupStack-Webmasters: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-webmasters-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.md) - + CQADupStack-Wordpress: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-wordpress-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.md) - + Quora: ["flat" baseline](docs/regressions-beir-v1.0.0-quora-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-quora-splade-distil-cocodenser-medium.md) - + DBPedia: ["flat" baseline](docs/regressions-beir-v1.0.0-dbpedia-entity-flat.md), [SPLADE-distill 
CoCodenser-medium](docs/regressions-beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.md) + + Touche2020: ["flat" baseline](docs/regressions-beir-v1.0.0-webis-touche2020-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-webis-touche2020-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.md) + + CQADupStack-Android: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-android-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-android-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.md) + + CQADupStack-English: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-english-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-english-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.md) + + CQADupStack-Gaming: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-gaming-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-gaming-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.md) + + CQADupStack-Gis: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-gis-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-gis-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.md) + + CQADupStack-Mathematica: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-mathematica-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-mathematica-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.md) + + CQADupStack-Physics: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-physics-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-physics-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.md) + + CQADupStack-Programmers: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-programmers-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-programmers-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.md) + + CQADupStack-Stats: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-stats-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-stats-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.md) + + CQADupStack-Tex: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-tex-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-tex-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.md) + + CQADupStack-Unix: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-unix-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-unix-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.md) + + CQADupStack-Webmasters: ["flat" 
baseline](docs/regressions-beir-v1.0.0-cqadupstack-webmasters-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-webmasters-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.md) + + CQADupStack-Wordpress: ["flat" baseline](docs/regressions-beir-v1.0.0-cqadupstack-wordpress-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-cqadupstack-wordpress-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.md) + + Quora: ["flat" baseline](docs/regressions-beir-v1.0.0-quora-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-quora-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-quora-splade-distil-cocodenser-medium.md) + + DBPedia: ["flat" baseline](docs/regressions-beir-v1.0.0-dbpedia-entity-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-dbpedia-entity-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.md) + SCIDOCS: ["flat" baseline](docs/regressions-beir-v1.0.0-scidocs-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-scidocs-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.md) - + FEVER: ["flat" baseline](docs/regressions-beir-v1.0.0-fever-flat.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-fever-splade-distil-cocodenser-medium.md) + + FEVER: ["flat" baseline](docs/regressions-beir-v1.0.0-fever-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-fever-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-fever-splade-distil-cocodenser-medium.md) + Climate-FEVER: ["flat" baseline](docs/regressions-beir-v1.0.0-climate-fever-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-climate-fever-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.md) + SciFact: ["flat" baseline](docs/regressions-beir-v1.0.0-scifact-flat.md), ["multifield" baseline](docs/regressions-beir-v1.0.0-scifact-multifield.md), [SPLADE-distill CoCodenser-medium](docs/regressions-beir-v1.0.0-scifact-splade-distil-cocodenser-medium.md) diff --git a/docs/regressions-beir-v1.0.0-arguana-multifield.md b/docs/regressions-beir-v1.0.0-arguana-multifield.md index 48c7d2ac00..17ba3c2a19 100644 --- a/docs/regressions-beir-v1.0.0-arguana-multifield.md +++ b/docs/regressions-beir-v1.0.0-arguana-multifield.md @@ -1,6 +1,6 @@ -# Anserini Regressions: BEIR (v1.0.0) — arguana +# Anserini Regressions: BEIR (v1.0.0) — ArguAna -This page documents BM25 regression experiments for [BEIR (v1.0.0) — arguana](http://beir.ai/). +This page documents BM25 regression experiments for [BEIR (v1.0.0) — ArguAna](http://beir.ai/). These experiments index the "title" and "text" fields in corpus separately. At retrieval time, a query is issued across both fields (equally weighted). diff --git a/docs/regressions-beir-v1.0.0-bioasq-multifield.md b/docs/regressions-beir-v1.0.0-bioasq-multifield.md new file mode 100644 index 0000000000..f50837f0f2 --- /dev/null +++ b/docs/regressions-beir-v1.0.0-bioasq-multifield.md @@ -0,0 +1,69 @@ +# Anserini Regressions: BEIR (v1.0.0) — BioASQ + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — BioASQ](http://beir.ai/). 
+These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). + +The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-bioasq-multifield.yaml). +Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-bioasq-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. + +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-bioasq-multifield +``` + +## Indexing + +Typical indexing command: + +``` +target/appassembler/bin/IndexCollection \ + -collection BeirMultifieldCollection \ + -input /path/to/beir-v1.0.0-bioasq-multifield \ + -index indexes/lucene-index.beir-v1.0.0-bioasq-multifield/ \ + -generator DefaultLuceneDocumentGenerator \ + -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ + >& logs/log.beir-v1.0.0-bioasq-multifield & +``` + +For additional details, see explanation of [common indexing options](common-indexing-options.md). + +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +target/appassembler/bin/SearchCollection \ + -index indexes/lucene-index.beir-v1.0.0-bioasq-multifield/ \ + -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-bioasq.test.tsv.gz \ + -topicreader TsvString \ + -output runs/run.beir-v1.0.0-bioasq-multifield.bm25.topics.beir-v1.0.0-bioasq.test.txt \ + -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 & +``` + +Evaluation can be performed using `trec_eval`: + +``` +tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-bioasq.test.txt runs/run.beir-v1.0.0-bioasq-multifield.bm25.topics.beir-v1.0.0-bioasq.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-bioasq.test.txt runs/run.beir-v1.0.0-bioasq-multifield.bm25.topics.beir-v1.0.0-bioasq.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-bioasq.test.txt runs/run.beir-v1.0.0-bioasq-multifield.bm25.topics.beir-v1.0.0-bioasq.test.txt +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +| nDCG@10 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): bioasq | 0.4646 | + + +| R@100 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): bioasq | 0.7145 | + + +| R@1000 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): bioasq | 0.8428 | diff --git a/docs/regressions-beir-v1.0.0-climate-fever-multifield.md b/docs/regressions-beir-v1.0.0-climate-fever-multifield.md index a07eb935eb..0f84dd8b83 100644 --- a/docs/regressions-beir-v1.0.0-climate-fever-multifield.md +++ b/docs/regressions-beir-v1.0.0-climate-fever-multifield.md @@ -1,6 +1,6 @@ -# Anserini Regressions: BEIR (v1.0.0) — climate-fever +# Anserini Regressions: BEIR (v1.0.0) — 
Climate-FEVER -This page documents BM25 regression experiments for [BEIR (v1.0.0) — climate-fever](http://beir.ai/). +This page documents BM25 regression experiments for [BEIR (v1.0.0) — Climate-FEVER](http://beir.ai/). These experiments index the "title" and "text" fields in corpus separately. At retrieval time, a query is issued across both fields (equally weighted). @@ -21,7 +21,7 @@ Typical indexing command: target/appassembler/bin/IndexCollection \ -collection BeirMultifieldCollection \ -input /path/to/beir-v1.0.0-climate-fever-multifield \ - -index indexes/lucene-index.beir-v1.0.0-climate-multifield/ \ + -index indexes/lucene-index.beir-v1.0.0-climate-fever-multifield/ \ -generator DefaultLuceneDocumentGenerator \ -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ >& logs/log.beir-v1.0.0-climate-fever-multifield & @@ -35,7 +35,7 @@ After indexing has completed, you should be able to perform retrieval as follows ``` target/appassembler/bin/SearchCollection \ - -index indexes/lucene-index.beir-v1.0.0-climate-multifield/ \ + -index indexes/lucene-index.beir-v1.0.0-climate-fever-multifield/ \ -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-climate-fever.test.tsv.gz \ -topicreader TsvString \ -output runs/run.beir-v1.0.0-climate-fever-multifield.bm25.topics.beir-v1.0.0-climate-fever.test.txt \ diff --git a/docs/regressions-beir-v1.0.0-cqadupstack-android-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-android-multifield.md new file mode 100644 index 0000000000..bd7858b108 --- /dev/null +++ b/docs/regressions-beir-v1.0.0-cqadupstack-android-multifield.md @@ -0,0 +1,69 @@ +# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Android + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Android](http://beir.ai/). +These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). + +The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-android-multifield.yaml). +Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-android-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. + +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-android-multifield +``` + +## Indexing + +Typical indexing command: + +``` +target/appassembler/bin/IndexCollection \ + -collection BeirMultifieldCollection \ + -input /path/to/beir-v1.0.0-cqadupstack-android-multifield \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-android-multifield/ \ + -generator DefaultLuceneDocumentGenerator \ + -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ + >& logs/log.beir-v1.0.0-cqadupstack-android-multifield & +``` + +For additional details, see explanation of [common indexing options](common-indexing-options.md). 
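The `-input` directory above follows BEIR's standard corpus layout: a `corpus.jsonl` file holding one JSON object per line with `_id`, `title`, and `text` fields, the two fields this "multifield" condition indexes separately. A minimal sketch for sanity-checking the input before indexing; the path is a placeholder, and the field layout is assumed from the BEIR distribution:

```python
import json

# Peek at the first few corpus records to confirm the "title" and "text"
# fields that BeirMultifieldCollection indexes into separate Lucene fields.
# Path is a placeholder; point it at the actual corpus location.
with open("/path/to/beir-v1.0.0-cqadupstack-android-multifield/corpus.jsonl") as f:
    for i, line in enumerate(f):
        doc = json.loads(line)
        print(doc["_id"], "|", doc["title"][:60], "|", doc["text"][:60])
        if i == 2:
            break
```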
+ +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +target/appassembler/bin/SearchCollection \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-android-multifield/ \ + -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-android.test.tsv.gz \ + -topicreader TsvString \ + -output runs/run.beir-v1.0.0-cqadupstack-android-multifield.bm25.topics.beir-v1.0.0-cqadupstack-android.test.txt \ + -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 & +``` + +Evaluation can be performed using `trec_eval`: + +``` +tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-android.test.txt runs/run.beir-v1.0.0-cqadupstack-android-multifield.bm25.topics.beir-v1.0.0-cqadupstack-android.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-android.test.txt runs/run.beir-v1.0.0-cqadupstack-android-multifield.bm25.topics.beir-v1.0.0-cqadupstack-android.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-android.test.txt runs/run.beir-v1.0.0-cqadupstack-android-multifield.bm25.topics.beir-v1.0.0-cqadupstack-android.test.txt +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +| nDCG@10 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-android | 0.3709 | + + +| R@100 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-android | 0.6889 | + + +| R@1000 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-android | 0.8712 | diff --git a/docs/regressions-beir-v1.0.0-cqadupstack-english-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-english-multifield.md new file mode 100644 index 0000000000..105cf98036 --- /dev/null +++ b/docs/regressions-beir-v1.0.0-cqadupstack-english-multifield.md @@ -0,0 +1,69 @@ +# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-English + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-English](http://beir.ai/). +These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). + +The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-english-multifield.yaml). +Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-english-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. 
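The `-output` flag in the retrieval commands on these pages writes a standard TREC run file, one result per line: query id, the literal `Q0`, document id, rank, score, and a run tag. A sketch that reads back the top-ranked document per query; the run path mirrors the command below:

```python
# Read a TREC-format run file (qid  Q0  docid  rank  score  tag)
# and keep the top-ranked document for each query.
top_hits = {}
run_path = ("runs/run.beir-v1.0.0-cqadupstack-english-multifield."
            "bm25.topics.beir-v1.0.0-cqadupstack-english.test.txt")
with open(run_path) as f:
    for line in f:
        qid, _, docid, rank, score, _ = line.split()
        if int(rank) == 1:
            top_hits[qid] = (docid, float(score))
print(len(top_hits), "queries")
```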
+ +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-english-multifield +``` + +## Indexing + +Typical indexing command: + +``` +target/appassembler/bin/IndexCollection \ + -collection BeirMultifieldCollection \ + -input /path/to/beir-v1.0.0-cqadupstack-english-multifield \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-english-multifield/ \ + -generator DefaultLuceneDocumentGenerator \ + -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ + >& logs/log.beir-v1.0.0-cqadupstack-english-multifield & +``` + +For additional details, see explanation of [common indexing options](common-indexing-options.md). + +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +target/appassembler/bin/SearchCollection \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-english-multifield/ \ + -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-english.test.tsv.gz \ + -topicreader TsvString \ + -output runs/run.beir-v1.0.0-cqadupstack-english-multifield.bm25.topics.beir-v1.0.0-cqadupstack-english.test.txt \ + -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 & +``` + +Evaluation can be performed using `trec_eval`: + +``` +tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-english.test.txt runs/run.beir-v1.0.0-cqadupstack-english-multifield.bm25.topics.beir-v1.0.0-cqadupstack-english.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-english.test.txt runs/run.beir-v1.0.0-cqadupstack-english-multifield.bm25.topics.beir-v1.0.0-cqadupstack-english.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-english.test.txt runs/run.beir-v1.0.0-cqadupstack-english-multifield.bm25.topics.beir-v1.0.0-cqadupstack-english.test.txt +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +| nDCG@10 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-english | 0.3321 | + + +| R@100 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-english | 0.5842 | + + +| R@1000 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-english | 0.7574 | diff --git a/docs/regressions-beir-v1.0.0-cqadupstack-gaming-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-gaming-multifield.md new file mode 100644 index 0000000000..29f58fa32a --- /dev/null +++ b/docs/regressions-beir-v1.0.0-cqadupstack-gaming-multifield.md @@ -0,0 +1,69 @@ +# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Gaming + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Gaming](http://beir.ai/). +These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). 
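Concretely, "equally weighted" means the final score is a weighted sum of per-field BM25 scores, mirroring the `-fields contents=1.0 title=1.0` flag passed to `SearchCollection` below. A conceptual sketch; the per-field scorers are stand-ins, not Anserini's actual API:

```python
# Per-field BM25 scores are combined as a weighted sum; with both weights
# at 1.0, title and contents contribute equally to the final ranking score.
def multifield_score(query, doc, bm25_title, bm25_contents,
                     w_title=1.0, w_contents=1.0):
    return w_title * bm25_title(query, doc) + w_contents * bm25_contents(query, doc)

# Toy scorers for illustration only.
print(multifield_score("q", "d",
                       bm25_title=lambda q, d: 1.2,
                       bm25_contents=lambda q, d: 0.8))  # 2.0
```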
+ +The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-gaming-multifield.yaml). +Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gaming-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. + +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-gaming-multifield +``` + +## Indexing + +Typical indexing command: + +``` +target/appassembler/bin/IndexCollection \ + -collection BeirMultifieldCollection \ + -input /path/to/beir-v1.0.0-cqadupstack-gaming-multifield \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-gaming-multifield/ \ + -generator DefaultLuceneDocumentGenerator \ + -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ + >& logs/log.beir-v1.0.0-cqadupstack-gaming-multifield & +``` + +For additional details, see explanation of [common indexing options](common-indexing-options.md). + +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +target/appassembler/bin/SearchCollection \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-gaming-multifield/ \ + -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-gaming.test.tsv.gz \ + -topicreader TsvString \ + -output runs/run.beir-v1.0.0-cqadupstack-gaming-multifield.bm25.topics.beir-v1.0.0-cqadupstack-gaming.test.txt \ + -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 & +``` + +Evaluation can be performed using `trec_eval`: + +``` +tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gaming.test.txt runs/run.beir-v1.0.0-cqadupstack-gaming-multifield.bm25.topics.beir-v1.0.0-cqadupstack-gaming.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gaming.test.txt runs/run.beir-v1.0.0-cqadupstack-gaming-multifield.bm25.topics.beir-v1.0.0-cqadupstack-gaming.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gaming.test.txt runs/run.beir-v1.0.0-cqadupstack-gaming-multifield.bm25.topics.beir-v1.0.0-cqadupstack-gaming.test.txt +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +| nDCG@10 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-gaming | 0.4418 | + + +| R@100 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-gaming | 0.7571 | + + +| R@1000 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-gaming | 0.8882 | diff --git a/docs/regressions-beir-v1.0.0-cqadupstack-gis-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-gis-multifield.md new file mode 100644 index 0000000000..5ab9562161 --- /dev/null +++ b/docs/regressions-beir-v1.0.0-cqadupstack-gis-multifield.md @@ -0,0 +1,69 @@ +# Anserini 
Regressions: BEIR (v1.0.0) — CQADupStack-Gis + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Gis](http://beir.ai/). +These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). + +The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-gis-multifield.yaml). +Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gis-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. + +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-gis-multifield +``` + +## Indexing + +Typical indexing command: + +``` +target/appassembler/bin/IndexCollection \ + -collection BeirMultifieldCollection \ + -input /path/to/beir-v1.0.0-cqadupstack-gis-multifield \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-gis-multifield/ \ + -generator DefaultLuceneDocumentGenerator \ + -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ + >& logs/log.beir-v1.0.0-cqadupstack-gis-multifield & +``` + +For additional details, see explanation of [common indexing options](common-indexing-options.md). + +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +target/appassembler/bin/SearchCollection \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-gis-multifield/ \ + -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-gis.test.tsv.gz \ + -topicreader TsvString \ + -output runs/run.beir-v1.0.0-cqadupstack-gis-multifield.bm25.topics.beir-v1.0.0-cqadupstack-gis.test.txt \ + -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 & +``` + +Evaluation can be performed using `trec_eval`: + +``` +tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gis.test.txt runs/run.beir-v1.0.0-cqadupstack-gis-multifield.bm25.topics.beir-v1.0.0-cqadupstack-gis.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gis.test.txt runs/run.beir-v1.0.0-cqadupstack-gis-multifield.bm25.topics.beir-v1.0.0-cqadupstack-gis.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gis.test.txt runs/run.beir-v1.0.0-cqadupstack-gis-multifield.bm25.topics.beir-v1.0.0-cqadupstack-gis.test.txt +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +| nDCG@10 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-gis | 0.2904 | + + +| R@100 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-gis | 0.6458 | + + +| R@1000 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-gis | 0.8248 | diff --git 
a/docs/regressions-beir-v1.0.0-cqadupstack-mathematica-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-mathematica-multifield.md new file mode 100644 index 0000000000..187ff25f01 --- /dev/null +++ b/docs/regressions-beir-v1.0.0-cqadupstack-mathematica-multifield.md @@ -0,0 +1,69 @@ +# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Mathematica + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Mathematica](http://beir.ai/). +These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). + +The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-mathematica-multifield.yaml). +Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-mathematica-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. + +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-mathematica-multifield +``` + +## Indexing + +Typical indexing command: + +``` +target/appassembler/bin/IndexCollection \ + -collection BeirMultifieldCollection \ + -input /path/to/beir-v1.0.0-cqadupstack-mathematica-multifield \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-mathematica-multifield/ \ + -generator DefaultLuceneDocumentGenerator \ + -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ + >& logs/log.beir-v1.0.0-cqadupstack-mathematica-multifield & +``` + +For additional details, see explanation of [common indexing options](common-indexing-options.md). 
+ +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +target/appassembler/bin/SearchCollection \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-mathematica-multifield/ \ + -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-mathematica.test.tsv.gz \ + -topicreader TsvString \ + -output runs/run.beir-v1.0.0-cqadupstack-mathematica-multifield.bm25.topics.beir-v1.0.0-cqadupstack-mathematica.test.txt \ + -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 & +``` + +Evaluation can be performed using `trec_eval`: + +``` +tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-mathematica.test.txt runs/run.beir-v1.0.0-cqadupstack-mathematica-multifield.bm25.topics.beir-v1.0.0-cqadupstack-mathematica.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-mathematica.test.txt runs/run.beir-v1.0.0-cqadupstack-mathematica-multifield.bm25.topics.beir-v1.0.0-cqadupstack-mathematica.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-mathematica.test.txt runs/run.beir-v1.0.0-cqadupstack-mathematica-multifield.bm25.topics.beir-v1.0.0-cqadupstack-mathematica.test.txt +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +| nDCG@10 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-mathematica | 0.2046 | + + +| R@100 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-mathematica | 0.5215 | + + +| R@1000 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-mathematica | 0.7559 | diff --git a/docs/regressions-beir-v1.0.0-cqadupstack-physics-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-physics-multifield.md new file mode 100644 index 0000000000..5496569e1e --- /dev/null +++ b/docs/regressions-beir-v1.0.0-cqadupstack-physics-multifield.md @@ -0,0 +1,69 @@ +# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Physics + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Physics](http://beir.ai/). +These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). + +The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-physics-multifield.yaml). +Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-physics-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. 
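Since the CQADupStack pages differ only in the subforum name, all twelve end-to-end regressions can be driven from a single loop. A sketch, assuming only the `run_regression.py` invocation documented on each of these pages:

```python
import subprocess

# Run the complete regression (index, verify, search) for every
# CQADupStack subforum in turn, failing fast if any step errors out.
SUBFORUMS = ["android", "english", "gaming", "gis", "mathematica", "physics",
             "programmers", "stats", "tex", "unix", "webmasters", "wordpress"]

for forum in SUBFORUMS:
    subprocess.run(
        ["python", "src/main/python/run_regression.py",
         "--index", "--verify", "--search",
         "--regression", f"beir-v1.0.0-cqadupstack-{forum}-multifield"],
        check=True,
    )
```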
+ +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-physics-multifield +``` + +## Indexing + +Typical indexing command: + +``` +target/appassembler/bin/IndexCollection \ + -collection BeirMultifieldCollection \ + -input /path/to/beir-v1.0.0-cqadupstack-physics-multifield \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-physics-multifield/ \ + -generator DefaultLuceneDocumentGenerator \ + -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ + >& logs/log.beir-v1.0.0-cqadupstack-physics-multifield & +``` + +For additional details, see explanation of [common indexing options](common-indexing-options.md). + +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +target/appassembler/bin/SearchCollection \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-physics-multifield/ \ + -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-physics.test.tsv.gz \ + -topicreader TsvString \ + -output runs/run.beir-v1.0.0-cqadupstack-physics-multifield.bm25.topics.beir-v1.0.0-cqadupstack-physics.test.txt \ + -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 & +``` + +Evaluation can be performed using `trec_eval`: + +``` +tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-physics.test.txt runs/run.beir-v1.0.0-cqadupstack-physics-multifield.bm25.topics.beir-v1.0.0-cqadupstack-physics.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-physics.test.txt runs/run.beir-v1.0.0-cqadupstack-physics-multifield.bm25.topics.beir-v1.0.0-cqadupstack-physics.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-physics.test.txt runs/run.beir-v1.0.0-cqadupstack-physics-multifield.bm25.topics.beir-v1.0.0-cqadupstack-physics.test.txt +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +| nDCG@10 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-physics | 0.3248 | + + +| R@100 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-physics | 0.6486 | + + +| R@1000 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-physics | 0.8506 | diff --git a/docs/regressions-beir-v1.0.0-cqadupstack-programmers-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-programmers-multifield.md new file mode 100644 index 0000000000..6063e9e244 --- /dev/null +++ b/docs/regressions-beir-v1.0.0-cqadupstack-programmers-multifield.md @@ -0,0 +1,69 @@ +# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Programmers + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Programmers](http://beir.ai/). +These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). 
+ +The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-programmers-multifield.yaml). +Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-programmers-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. + +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-programmers-multifield +``` + +## Indexing + +Typical indexing command: + +``` +target/appassembler/bin/IndexCollection \ + -collection BeirMultifieldCollection \ + -input /path/to/beir-v1.0.0-cqadupstack-programmers-multifield \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-programmers-multifield/ \ + -generator DefaultLuceneDocumentGenerator \ + -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ + >& logs/log.beir-v1.0.0-cqadupstack-programmers-multifield & +``` + +For additional details, see explanation of [common indexing options](common-indexing-options.md). + +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +target/appassembler/bin/SearchCollection \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-programmers-multifield/ \ + -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-programmers.test.tsv.gz \ + -topicreader TsvString \ + -output runs/run.beir-v1.0.0-cqadupstack-programmers-multifield.bm25.topics.beir-v1.0.0-cqadupstack-programmers.test.txt \ + -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 & +``` + +Evaluation can be performed using `trec_eval`: + +``` +tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-programmers.test.txt runs/run.beir-v1.0.0-cqadupstack-programmers-multifield.bm25.topics.beir-v1.0.0-cqadupstack-programmers.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-programmers.test.txt runs/run.beir-v1.0.0-cqadupstack-programmers-multifield.bm25.topics.beir-v1.0.0-cqadupstack-programmers.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-programmers.test.txt runs/run.beir-v1.0.0-cqadupstack-programmers-multifield.bm25.topics.beir-v1.0.0-cqadupstack-programmers.test.txt +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +| nDCG@10 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-programmers | 0.2963 | + + +| R@100 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-programmers | 0.6194 | + + +| R@1000 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-programmers | 0.8096 | diff --git a/docs/regressions-beir-v1.0.0-cqadupstack-stats-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-stats-multifield.md new file mode 100644 index 0000000000..84ac20e5d8 
--- /dev/null +++ b/docs/regressions-beir-v1.0.0-cqadupstack-stats-multifield.md @@ -0,0 +1,69 @@ +# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Stats + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Stats](http://beir.ai/). +These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). + +The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-stats-multifield.yaml). +Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-stats-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. + +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-stats-multifield +``` + +## Indexing + +Typical indexing command: + +``` +target/appassembler/bin/IndexCollection \ + -collection BeirMultifieldCollection \ + -input /path/to/beir-v1.0.0-cqadupstack-stats-multifield \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-stats-multifield/ \ + -generator DefaultLuceneDocumentGenerator \ + -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ + >& logs/log.beir-v1.0.0-cqadupstack-stats-multifield & +``` + +For additional details, see explanation of [common indexing options](common-indexing-options.md). + +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +target/appassembler/bin/SearchCollection \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-stats-multifield/ \ + -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-stats.test.tsv.gz \ + -topicreader TsvString \ + -output runs/run.beir-v1.0.0-cqadupstack-stats-multifield.bm25.topics.beir-v1.0.0-cqadupstack-stats.test.txt \ + -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 & +``` + +Evaluation can be performed using `trec_eval`: + +``` +tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-stats.test.txt runs/run.beir-v1.0.0-cqadupstack-stats-multifield.bm25.topics.beir-v1.0.0-cqadupstack-stats.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-stats.test.txt runs/run.beir-v1.0.0-cqadupstack-stats-multifield.bm25.topics.beir-v1.0.0-cqadupstack-stats.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-stats.test.txt runs/run.beir-v1.0.0-cqadupstack-stats-multifield.bm25.topics.beir-v1.0.0-cqadupstack-stats.test.txt +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +| nDCG@10 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-stats | 0.2790 | + + +| R@100 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-stats | 0.5719 | + + +| R@1000 | BM25 | 
+|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-stats | 0.7619 | diff --git a/docs/regressions-beir-v1.0.0-cqadupstack-tex-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-tex-multifield.md new file mode 100644 index 0000000000..d2fae2c955 --- /dev/null +++ b/docs/regressions-beir-v1.0.0-cqadupstack-tex-multifield.md @@ -0,0 +1,69 @@ +# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Tex + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Tex](http://beir.ai/). +These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). + +The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-tex-multifield.yaml). +Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-tex-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. + +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-tex-multifield +``` + +## Indexing + +Typical indexing command: + +``` +target/appassembler/bin/IndexCollection \ + -collection BeirMultifieldCollection \ + -input /path/to/beir-v1.0.0-cqadupstack-tex-multifield \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-tex-multifield/ \ + -generator DefaultLuceneDocumentGenerator \ + -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ + >& logs/log.beir-v1.0.0-cqadupstack-tex-multifield & +``` + +For additional details, see explanation of [common indexing options](common-indexing-options.md). 
+ +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +target/appassembler/bin/SearchCollection \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-tex-multifield/ \ + -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-tex.test.tsv.gz \ + -topicreader TsvString \ + -output runs/run.beir-v1.0.0-cqadupstack-tex-multifield.bm25.topics.beir-v1.0.0-cqadupstack-tex.test.txt \ + -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 & +``` + +Evaluation can be performed using `trec_eval`: + +``` +tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-tex.test.txt runs/run.beir-v1.0.0-cqadupstack-tex-multifield.bm25.topics.beir-v1.0.0-cqadupstack-tex.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-tex.test.txt runs/run.beir-v1.0.0-cqadupstack-tex-multifield.bm25.topics.beir-v1.0.0-cqadupstack-tex.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-tex.test.txt runs/run.beir-v1.0.0-cqadupstack-tex-multifield.bm25.topics.beir-v1.0.0-cqadupstack-tex.test.txt +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +| nDCG@10 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-tex | 0.2086 | + + +| R@100 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-tex | 0.4954 | + + +| R@1000 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-tex | 0.7222 | diff --git a/docs/regressions-beir-v1.0.0-cqadupstack-unix-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-unix-multifield.md new file mode 100644 index 0000000000..ab0440d077 --- /dev/null +++ b/docs/regressions-beir-v1.0.0-cqadupstack-unix-multifield.md @@ -0,0 +1,69 @@ +# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Unix + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Unix](http://beir.ai/). +These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). + +The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-unix-multifield.yaml). +Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-unix-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. 
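The evaluation blocks on these pages invoke `trec_eval` once per metric; its default output is whitespace-separated rows of metric name, query id, and value, where the row with query id `all` carries the collection-level score. A small helper can collect those aggregates programmatically; a minimal sketch, with paths mirroring the evaluation commands on this page:

```python
import subprocess

# Run trec_eval for one metric and return the aggregate ("all") value.
def trec_eval_metric(metric, qrels, run,
                     binary="tools/eval/trec_eval.9.0.4/trec_eval"):
    out = subprocess.run([binary, "-c", "-m", metric, qrels, run],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        name, qid, value = line.split()
        if qid == "all":
            return float(value)

ndcg = trec_eval_metric(
    "ndcg_cut.10",
    "src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-unix.test.txt",
    "runs/run.beir-v1.0.0-cqadupstack-unix-multifield.bm25.topics.beir-v1.0.0-cqadupstack-unix.test.txt")
print(ndcg)
```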
+ +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-unix-multifield +``` + +## Indexing + +Typical indexing command: + +``` +target/appassembler/bin/IndexCollection \ + -collection BeirMultifieldCollection \ + -input /path/to/beir-v1.0.0-cqadupstack-unix-multifield \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-unix-multifield/ \ + -generator DefaultLuceneDocumentGenerator \ + -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \ + >& logs/log.beir-v1.0.0-cqadupstack-unix-multifield & +``` + +For additional details, see explanation of [common indexing options](common-indexing-options.md). + +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +target/appassembler/bin/SearchCollection \ + -index indexes/lucene-index.beir-v1.0.0-cqadupstack-unix-multifield/ \ + -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-unix.test.tsv.gz \ + -topicreader TsvString \ + -output runs/run.beir-v1.0.0-cqadupstack-unix-multifield.bm25.topics.beir-v1.0.0-cqadupstack-unix.test.txt \ + -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 & +``` + +Evaluation can be performed using `trec_eval`: + +``` +tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-unix.test.txt runs/run.beir-v1.0.0-cqadupstack-unix-multifield.bm25.topics.beir-v1.0.0-cqadupstack-unix.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-unix.test.txt runs/run.beir-v1.0.0-cqadupstack-unix-multifield.bm25.topics.beir-v1.0.0-cqadupstack-unix.test.txt +tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-unix.test.txt runs/run.beir-v1.0.0-cqadupstack-unix-multifield.bm25.topics.beir-v1.0.0-cqadupstack-unix.test.txt +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +| nDCG@10 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-unix | 0.2788 | + + +| R@100 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-unix | 0.5721 | + + +| R@1000 | BM25 | +|:-------------------------------------------------------------------------------------------------------------|-----------| +| BEIR (v1.0.0): cqadupstack-unix | 0.7783 | diff --git a/docs/regressions-beir-v1.0.0-cqadupstack-webmasters-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-webmasters-multifield.md new file mode 100644 index 0000000000..42c269e29d --- /dev/null +++ b/docs/regressions-beir-v1.0.0-cqadupstack-webmasters-multifield.md @@ -0,0 +1,69 @@ +# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Webmasters + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Webmasters](http://beir.ai/). +These experiments index the "title" and "text" fields in corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). 
+
+The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-webmasters-multifield.yaml).
+Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-webmasters-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-webmasters-multifield
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+target/appassembler/bin/IndexCollection \
+  -collection BeirMultifieldCollection \
+  -input /path/to/beir-v1.0.0-cqadupstack-webmasters-multifield \
+  -index indexes/lucene-index.beir-v1.0.0-cqadupstack-webmasters-multifield/ \
+  -generator DefaultLuceneDocumentGenerator \
+  -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \
+  >& logs/log.beir-v1.0.0-cqadupstack-webmasters-multifield &
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+target/appassembler/bin/SearchCollection \
+  -index indexes/lucene-index.beir-v1.0.0-cqadupstack-webmasters-multifield/ \
+  -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-webmasters.test.tsv.gz \
+  -topicreader TsvString \
+  -output runs/run.beir-v1.0.0-cqadupstack-webmasters-multifield.bm25.topics.beir-v1.0.0-cqadupstack-webmasters.test.txt \
+  -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 &
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-webmasters.test.txt runs/run.beir-v1.0.0-cqadupstack-webmasters-multifield.bm25.topics.beir-v1.0.0-cqadupstack-webmasters.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-webmasters.test.txt runs/run.beir-v1.0.0-cqadupstack-webmasters-multifield.bm25.topics.beir-v1.0.0-cqadupstack-webmasters.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-webmasters.test.txt runs/run.beir-v1.0.0-cqadupstack-webmasters-multifield.bm25.topics.beir-v1.0.0-cqadupstack-webmasters.test.txt
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+| nDCG@10 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): cqadupstack-webmasters | 0.3008 |
+
+
+| R@100 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): cqadupstack-webmasters | 0.6100 |
+
+
+| R@1000 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): cqadupstack-webmasters | 0.8226 |
diff --git a/docs/regressions-beir-v1.0.0-cqadupstack-wordpress-multifield.md b/docs/regressions-beir-v1.0.0-cqadupstack-wordpress-multifield.md
new file mode 100644
index 0000000000..9924970762
--- /dev/null
+++ b/docs/regressions-beir-v1.0.0-cqadupstack-wordpress-multifield.md
@@ -0,0 +1,69 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Wordpress
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Wordpress](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-cqadupstack-wordpress-multifield.yaml).
+Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-wordpress-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-wordpress-multifield
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+target/appassembler/bin/IndexCollection \
+  -collection BeirMultifieldCollection \
+  -input /path/to/beir-v1.0.0-cqadupstack-wordpress-multifield \
+  -index indexes/lucene-index.beir-v1.0.0-cqadupstack-wordpress-multifield/ \
+  -generator DefaultLuceneDocumentGenerator \
+  -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \
+  >& logs/log.beir-v1.0.0-cqadupstack-wordpress-multifield &
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+target/appassembler/bin/SearchCollection \
+  -index indexes/lucene-index.beir-v1.0.0-cqadupstack-wordpress-multifield/ \
+  -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-wordpress.test.tsv.gz \
+  -topicreader TsvString \
+  -output runs/run.beir-v1.0.0-cqadupstack-wordpress-multifield.bm25.topics.beir-v1.0.0-cqadupstack-wordpress.test.txt \
+  -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 &
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-wordpress.test.txt runs/run.beir-v1.0.0-cqadupstack-wordpress-multifield.bm25.topics.beir-v1.0.0-cqadupstack-wordpress.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-wordpress.test.txt runs/run.beir-v1.0.0-cqadupstack-wordpress-multifield.bm25.topics.beir-v1.0.0-cqadupstack-wordpress.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-wordpress.test.txt runs/run.beir-v1.0.0-cqadupstack-wordpress-multifield.bm25.topics.beir-v1.0.0-cqadupstack-wordpress.test.txt
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+| nDCG@10 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): cqadupstack-wordpress | 0.2562 |
+
+
+| R@100 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): cqadupstack-wordpress | 0.5526 |
+
+
+| R@1000 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): cqadupstack-wordpress | 0.7848 |
diff --git a/docs/regressions-beir-v1.0.0-dbpedia-entity-multifield.md b/docs/regressions-beir-v1.0.0-dbpedia-entity-multifield.md
new file mode 100644
index 0000000000..8696a3c278
--- /dev/null
+++ b/docs/regressions-beir-v1.0.0-dbpedia-entity-multifield.md
@@ -0,0 +1,69 @@
+# Anserini Regressions: BEIR (v1.0.0) — DBPedia
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — DBPedia](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-dbpedia-entity-multifield.yaml).
+Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-dbpedia-entity-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-dbpedia-entity-multifield
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+target/appassembler/bin/IndexCollection \
+  -collection BeirMultifieldCollection \
+  -input /path/to/beir-v1.0.0-dbpedia-entity-multifield \
+  -index indexes/lucene-index.beir-v1.0.0-dbpedia-entity-multifield/ \
+  -generator DefaultLuceneDocumentGenerator \
+  -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \
+  >& logs/log.beir-v1.0.0-dbpedia-entity-multifield &
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
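+
+The regression configuration pins indexing to a single thread (`-threads 1`). For ad hoc experimentation outside the regression framework, a higher thread count can speed up indexing; a minimal sketch (the choice of 8 threads is an arbitrary illustration, and regression verification still assumes the pinned configuration above):
+
+```
+target/appassembler/bin/IndexCollection \
+  -collection BeirMultifieldCollection \
+  -input /path/to/beir-v1.0.0-dbpedia-entity-multifield \
+  -index indexes/lucene-index.beir-v1.0.0-dbpedia-entity-multifield/ \
+  -generator DefaultLuceneDocumentGenerator \
+  -threads 8 -storePositions -storeDocvectors -storeRaw -fields title \
+  >& logs/log.beir-v1.0.0-dbpedia-entity-multifield &
+```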
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+target/appassembler/bin/SearchCollection \
+  -index indexes/lucene-index.beir-v1.0.0-dbpedia-entity-multifield/ \
+  -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-dbpedia-entity.test.tsv.gz \
+  -topicreader TsvString \
+  -output runs/run.beir-v1.0.0-dbpedia-entity-multifield.bm25.topics.beir-v1.0.0-dbpedia-entity.test.txt \
+  -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 &
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-dbpedia-entity.test.txt runs/run.beir-v1.0.0-dbpedia-entity-multifield.bm25.topics.beir-v1.0.0-dbpedia-entity.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-dbpedia-entity.test.txt runs/run.beir-v1.0.0-dbpedia-entity-multifield.bm25.topics.beir-v1.0.0-dbpedia-entity.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-dbpedia-entity.test.txt runs/run.beir-v1.0.0-dbpedia-entity-multifield.bm25.topics.beir-v1.0.0-dbpedia-entity.test.txt
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+| nDCG@10 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): dbpedia-entity | 0.3128 |
+
+
+| R@100 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): dbpedia-entity | 0.3981 |
+
+
+| R@1000 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): dbpedia-entity | 0.5848 |
diff --git a/docs/regressions-beir-v1.0.0-fever-multifield.md b/docs/regressions-beir-v1.0.0-fever-multifield.md
new file mode 100644
index 0000000000..afd71e95fe
--- /dev/null
+++ b/docs/regressions-beir-v1.0.0-fever-multifield.md
@@ -0,0 +1,69 @@
+# Anserini Regressions: BEIR (v1.0.0) — FEVER
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — FEVER](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-fever-multifield.yaml).
+Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-fever-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
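+
+The three `trec_eval` invocations in the Evaluation section below differ only in the metric flag, so, if convenient, they can be collapsed into a loop; a minimal sketch using the qrels and run paths from this page:
+
+```
+# equivalent to the three separate trec_eval commands shown below
+for metric in ndcg_cut.10 recall.100 recall.1000; do
+  tools/eval/trec_eval.9.0.4/trec_eval -c -m ${metric} \
+    src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-fever.test.txt \
+    runs/run.beir-v1.0.0-fever-multifield.bm25.topics.beir-v1.0.0-fever.test.txt
+done
+```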
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-fever-multifield
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+target/appassembler/bin/IndexCollection \
+  -collection BeirMultifieldCollection \
+  -input /path/to/beir-v1.0.0-fever-multifield \
+  -index indexes/lucene-index.beir-v1.0.0-fever-multifield/ \
+  -generator DefaultLuceneDocumentGenerator \
+  -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \
+  >& logs/log.beir-v1.0.0-fever-multifield &
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+target/appassembler/bin/SearchCollection \
+  -index indexes/lucene-index.beir-v1.0.0-fever-multifield/ \
+  -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-fever.test.tsv.gz \
+  -topicreader TsvString \
+  -output runs/run.beir-v1.0.0-fever-multifield.bm25.topics.beir-v1.0.0-fever.test.txt \
+  -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 &
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-fever.test.txt runs/run.beir-v1.0.0-fever-multifield.bm25.topics.beir-v1.0.0-fever.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-fever.test.txt runs/run.beir-v1.0.0-fever-multifield.bm25.topics.beir-v1.0.0-fever.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-fever.test.txt runs/run.beir-v1.0.0-fever-multifield.bm25.topics.beir-v1.0.0-fever.test.txt
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+| nDCG@10 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): fever | 0.7530 |
+
+
+| R@100 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): fever | 0.9309 |
+
+
+| R@1000 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): fever | 0.9599 |
diff --git a/docs/regressions-beir-v1.0.0-hotpotqa-multifield.md b/docs/regressions-beir-v1.0.0-hotpotqa-multifield.md
new file mode 100644
index 0000000000..42c847387a
--- /dev/null
+++ b/docs/regressions-beir-v1.0.0-hotpotqa-multifield.md
@@ -0,0 +1,69 @@
+# Anserini Regressions: BEIR (v1.0.0) — HotpotQA
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — HotpotQA](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-hotpotqa-multifield.yaml).
+Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-hotpotqa-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-hotpotqa-multifield
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+target/appassembler/bin/IndexCollection \
+  -collection BeirMultifieldCollection \
+  -input /path/to/beir-v1.0.0-hotpotqa-multifield \
+  -index indexes/lucene-index.beir-v1.0.0-hotpotqa-multifield/ \
+  -generator DefaultLuceneDocumentGenerator \
+  -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \
+  >& logs/log.beir-v1.0.0-hotpotqa-multifield &
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+target/appassembler/bin/SearchCollection \
+  -index indexes/lucene-index.beir-v1.0.0-hotpotqa-multifield/ \
+  -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-hotpotqa.test.tsv.gz \
+  -topicreader TsvString \
+  -output runs/run.beir-v1.0.0-hotpotqa-multifield.bm25.topics.beir-v1.0.0-hotpotqa.test.txt \
+  -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 &
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-hotpotqa.test.txt runs/run.beir-v1.0.0-hotpotqa-multifield.bm25.topics.beir-v1.0.0-hotpotqa.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-hotpotqa.test.txt runs/run.beir-v1.0.0-hotpotqa-multifield.bm25.topics.beir-v1.0.0-hotpotqa.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-hotpotqa.test.txt runs/run.beir-v1.0.0-hotpotqa-multifield.bm25.topics.beir-v1.0.0-hotpotqa.test.txt
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+| nDCG@10 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): hotpotqa | 0.6027 |
+
+
+| R@100 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): hotpotqa | 0.7400 |
+
+
+| R@1000 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): hotpotqa | 0.8405 |
diff --git a/docs/regressions-beir-v1.0.0-quora-multifield.md b/docs/regressions-beir-v1.0.0-quora-multifield.md
new file mode 100644
index 0000000000..ab7e3a4105
--- /dev/null
+++ b/docs/regressions-beir-v1.0.0-quora-multifield.md
@@ -0,0 +1,69 @@
+# Anserini Regressions: BEIR (v1.0.0) — Quora
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — Quora](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-quora-multifield.yaml).
+Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-quora-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-quora-multifield
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+target/appassembler/bin/IndexCollection \
+  -collection BeirMultifieldCollection \
+  -input /path/to/beir-v1.0.0-quora-multifield \
+  -index indexes/lucene-index.beir-v1.0.0-quora-multifield/ \
+  -generator DefaultLuceneDocumentGenerator \
+  -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \
+  >& logs/log.beir-v1.0.0-quora-multifield &
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+target/appassembler/bin/SearchCollection \
+  -index indexes/lucene-index.beir-v1.0.0-quora-multifield/ \
+  -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-quora.test.tsv.gz \
+  -topicreader TsvString \
+  -output runs/run.beir-v1.0.0-quora-multifield.bm25.topics.beir-v1.0.0-quora.test.txt \
+  -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 &
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-quora.test.txt runs/run.beir-v1.0.0-quora-multifield.bm25.topics.beir-v1.0.0-quora.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-quora.test.txt runs/run.beir-v1.0.0-quora-multifield.bm25.topics.beir-v1.0.0-quora.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-quora.test.txt runs/run.beir-v1.0.0-quora-multifield.bm25.topics.beir-v1.0.0-quora.test.txt
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+| nDCG@10 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): quora | 0.7886 |
+
+
+| R@100 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): quora | 0.9733 |
+
+
+| R@1000 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): quora | 0.9950 |
diff --git a/docs/regressions-beir-v1.0.0-signal1m-multifield.md b/docs/regressions-beir-v1.0.0-signal1m-multifield.md
new file mode 100644
index 0000000000..250448bf20
--- /dev/null
+++ b/docs/regressions-beir-v1.0.0-signal1m-multifield.md
@@ -0,0 +1,69 @@
+# Anserini Regressions: BEIR (v1.0.0) — Signal-1M
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — Signal-1M](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-signal1m-multifield.yaml).
+Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-signal1m-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-signal1m-multifield
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+target/appassembler/bin/IndexCollection \
+  -collection BeirMultifieldCollection \
+  -input /path/to/beir-v1.0.0-signal1m-multifield \
+  -index indexes/lucene-index.beir-v1.0.0-signal1m-multifield/ \
+  -generator DefaultLuceneDocumentGenerator \
+  -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \
+  >& logs/log.beir-v1.0.0-signal1m-multifield &
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+target/appassembler/bin/SearchCollection \
+  -index indexes/lucene-index.beir-v1.0.0-signal1m-multifield/ \
+  -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-signal1m.test.tsv.gz \
+  -topicreader TsvString \
+  -output runs/run.beir-v1.0.0-signal1m-multifield.bm25.topics.beir-v1.0.0-signal1m.test.txt \
+  -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 &
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-signal1m.test.txt runs/run.beir-v1.0.0-signal1m-multifield.bm25.topics.beir-v1.0.0-signal1m.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-signal1m.test.txt runs/run.beir-v1.0.0-signal1m-multifield.bm25.topics.beir-v1.0.0-signal1m.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-signal1m.test.txt runs/run.beir-v1.0.0-signal1m-multifield.bm25.topics.beir-v1.0.0-signal1m.test.txt
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+| nDCG@10 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): signal1m | 0.3304 |
+
+
+| R@100 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): signal1m | 0.3703 |
+
+
+| R@1000 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): signal1m | 0.5642 |
diff --git a/docs/regressions-beir-v1.0.0-webis-touche2020-multifield.md b/docs/regressions-beir-v1.0.0-webis-touche2020-multifield.md
new file mode 100644
index 0000000000..b17bec814e
--- /dev/null
+++ b/docs/regressions-beir-v1.0.0-webis-touche2020-multifield.md
@@ -0,0 +1,69 @@
+# Anserini Regressions: BEIR (v1.0.0) — Webis-Touche2020
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — Webis-Touche2020](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
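+
+Note that the retrieval command shown below launches `SearchCollection` in the background (the trailing `&`), so the run file may still be in progress when the shell prompt returns. Before invoking `trec_eval`, make sure the job has finished; a minimal sketch:
+
+```
+# block until background jobs (here, SearchCollection) have completed
+wait
+```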
+
+The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/beir-v1.0.0-webis-touche2020-multifield.yaml).
+Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/beir-v1.0.0-webis-touche2020-multifield.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-webis-touche2020-multifield
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+target/appassembler/bin/IndexCollection \
+  -collection BeirMultifieldCollection \
+  -input /path/to/beir-v1.0.0-webis-touche2020-multifield \
+  -index indexes/lucene-index.beir-v1.0.0-webis-touche2020-multifield/ \
+  -generator DefaultLuceneDocumentGenerator \
+  -threads 1 -storePositions -storeDocvectors -storeRaw -fields title \
+  >& logs/log.beir-v1.0.0-webis-touche2020-multifield &
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+target/appassembler/bin/SearchCollection \
+  -index indexes/lucene-index.beir-v1.0.0-webis-touche2020-multifield/ \
+  -topics src/main/resources/topics-and-qrels/topics.beir-v1.0.0-webis-touche2020.test.tsv.gz \
+  -topicreader TsvString \
+  -output runs/run.beir-v1.0.0-webis-touche2020-multifield.bm25.topics.beir-v1.0.0-webis-touche2020.test.txt \
+  -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 &
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+tools/eval/trec_eval.9.0.4/trec_eval -c -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-webis-touche2020.test.txt runs/run.beir-v1.0.0-webis-touche2020-multifield.bm25.topics.beir-v1.0.0-webis-touche2020.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-webis-touche2020.test.txt runs/run.beir-v1.0.0-webis-touche2020-multifield.bm25.topics.beir-v1.0.0-webis-touche2020.test.txt
+tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.beir-v1.0.0-webis-touche2020.test.txt runs/run.beir-v1.0.0-webis-touche2020-multifield.bm25.topics.beir-v1.0.0-webis-touche2020.test.txt
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+| nDCG@10 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): webis-touche2020 | 0.3673 |
+
+
+| R@100 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): webis-touche2020 | 0.5376 |
+
+
+| R@1000 | BM25 |
+|:-------------------------------------------------------------------------------------------------------------|-----------|
+| BEIR (v1.0.0): webis-touche2020 | 0.8668 |
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-arguana-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-arguana-multifield.template
index 16bf7b2142..db59c13677 100644
--- a/src/main/resources/docgen/templates/beir-v1.0.0-arguana-multifield.template
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-arguana-multifield.template
@@ -1,6 +1,6 @@
-# Anserini Regressions: BEIR (v1.0.0) — arguana
+# Anserini Regressions: BEIR (v1.0.0) — ArguAna
 
-This page documents BM25 regression experiments for [BEIR (v1.0.0) — arguana](http://beir.ai/).
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — ArguAna](http://beir.ai/).
 These experiments index the "title" and "text" fields in corpus separately.
 At retrieval time, a query is issued across both fields (equally weighted).
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-bioasq-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-bioasq-multifield.template
new file mode 100644
index 0000000000..911f2412b6
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-bioasq-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — BioASQ
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — BioASQ](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-climate-fever-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-climate-fever-multifield.template
index f4a902aede..4556f07c48 100644
--- a/src/main/resources/docgen/templates/beir-v1.0.0-climate-fever-multifield.template
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-climate-fever-multifield.template
@@ -1,6 +1,6 @@
-# Anserini Regressions: BEIR (v1.0.0) — climate-fever
+# Anserini Regressions: BEIR (v1.0.0) — Climate-FEVER
 
-This page documents BM25 regression experiments for [BEIR (v1.0.0) — climate-fever](http://beir.ai/).
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — Climate-FEVER](http://beir.ai/).
 These experiments index the "title" and "text" fields in corpus separately.
 At retrieval time, a query is issued across both fields (equally weighted).
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-android-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-android-multifield.template
new file mode 100644
index 0000000000..15bd50320b
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-android-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Android
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Android](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-english-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-english-multifield.template
new file mode 100644
index 0000000000..01db42ae38
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-english-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-English
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-English](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gaming-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gaming-multifield.template
new file mode 100644
index 0000000000..c07be33f38
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gaming-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Gaming
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Gaming](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gis-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gis-multifield.template
new file mode 100644
index 0000000000..67b5942ccb
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gis-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Gis
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Gis](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-mathematica-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-mathematica-multifield.template
new file mode 100644
index 0000000000..af5fa2f763
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-mathematica-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Mathematica
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Mathematica](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-physics-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-physics-multifield.template
new file mode 100644
index 0000000000..a603b29c04
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-physics-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Physics
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Physics](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-programmers-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-programmers-multifield.template
new file mode 100644
index 0000000000..bde98c1cf1
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-programmers-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Programmers
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Programmers](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-stats-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-stats-multifield.template
new file mode 100644
index 0000000000..f88eb32270
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-stats-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Stats
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Stats](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-tex-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-tex-multifield.template
new file mode 100644
index 0000000000..e7da41f700
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-tex-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Tex
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Tex](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-unix-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-unix-multifield.template
new file mode 100644
index 0000000000..d58a281229
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-unix-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Unix
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Unix](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-webmasters-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-webmasters-multifield.template
new file mode 100644
index 0000000000..02531f7ff9
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-webmasters-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Webmasters
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Webmasters](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-wordpress-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-wordpress-multifield.template
new file mode 100644
index 0000000000..bcd609ffc1
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-wordpress-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-Wordpress
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — CQADupStack-Wordpress](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-dbpedia-entity-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-dbpedia-entity-multifield.template
new file mode 100644
index 0000000000..abf5b50a3c
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-dbpedia-entity-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — DBPedia
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — DBPedia](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-fever-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-fever-multifield.template
new file mode 100644
index 0000000000..68b1fb8b4e
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-fever-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — FEVER
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — FEVER](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-hotpotqa-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-hotpotqa-multifield.template
new file mode 100644
index 0000000000..8ad8565e70
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-hotpotqa-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — HotpotQA
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — HotpotQA](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+
+## Retrieval
+
+After indexing has completed, you should be able to perform retrieval as follows:
+
+```
+${ranking_cmds}
+```
+
+Evaluation can be performed using `trec_eval`:
+
+```
+${eval_cmds}
+```
+
+## Effectiveness
+
+With the above commands, you should be able to reproduce the following results:
+
+${effectiveness}
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-quora-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-quora-multifield.template
new file mode 100644
index 0000000000..8324c6930d
--- /dev/null
+++ b/src/main/resources/docgen/templates/beir-v1.0.0-quora-multifield.template
@@ -0,0 +1,44 @@
+# Anserini Regressions: BEIR (v1.0.0) — Quora
+
+This page documents BM25 regression experiments for [BEIR (v1.0.0) — Quora](http://beir.ai/).
+These experiments index the "title" and "text" fields in the corpus separately.
+At retrieval time, a query is issued across both fields (equally weighted).
+
+The exact configurations for these regressions are stored in [this YAML file](${yaml}).
+Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
+
+From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
+
+```
+python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
+```
+
+## Indexing
+
+Typical indexing command:
+
+```
+${index_cmds}
+```
+
+For additional details, see the explanation of [common indexing options](common-indexing-options.md).
+ +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +${ranking_cmds} +``` + +Evaluation can be performed using `trec_eval`: + +``` +${eval_cmds} +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +${effectiveness} diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-signal1m-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-signal1m-multifield.template new file mode 100644 index 0000000000..a5d793a25f --- /dev/null +++ b/src/main/resources/docgen/templates/beir-v1.0.0-signal1m-multifield.template @@ -0,0 +1,44 @@ +# Anserini Regressions: BEIR (v1.0.0) — Signal-1M + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — Signal-1M](http://beir.ai/). +These experiments index the "title" and "text" fields in the corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). + +The exact configurations for these regressions are stored in [this YAML file](${yaml}). +Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. + +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression ${test_name} +``` + +## Indexing + +Typical indexing command: + +``` +${index_cmds} +``` + +For additional details, see the explanation of [common indexing options](common-indexing-options.md). + +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +${ranking_cmds} +``` + +Evaluation can be performed using `trec_eval`: + +``` +${eval_cmds} +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +${effectiveness} diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-webis-touche2020-multifield.template b/src/main/resources/docgen/templates/beir-v1.0.0-webis-touche2020-multifield.template new file mode 100644 index 0000000000..671f458138 --- /dev/null +++ b/src/main/resources/docgen/templates/beir-v1.0.0-webis-touche2020-multifield.template @@ -0,0 +1,44 @@ +# Anserini Regressions: BEIR (v1.0.0) — Webis-Touche2020 + +This page documents BM25 regression experiments for [BEIR (v1.0.0) — Webis-Touche2020](http://beir.ai/). +These experiments index the "title" and "text" fields in the corpus separately. +At retrieval time, a query is issued across both fields (equally weighted). + +The exact configurations for these regressions are stored in [this YAML file](${yaml}). +Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead. + +From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end: + +``` +python src/main/python/run_regression.py --index --verify --search --regression ${test_name} +``` + +## Indexing + +Typical indexing command: + +``` +${index_cmds} +``` + +For additional details, see the explanation of [common indexing options](common-indexing-options.md).
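Each metric block in the YAML configs added below specifies `command`, `params`, `separator`, `parse_index`, and `metric_precision`. A hedged sketch of how those fields could drive a `trec_eval` run and the parsing of its output (illustrative, not `run_regression.py`'s actual code):

```
# Hedged sketch: run trec_eval with the configured params, then pull the
# score from the tab-separated output at parse_index, rounded to
# metric_precision digits. Field names come from the YAML configs below;
# the evaluate() helper itself is hypothetical.
import subprocess

def evaluate(metric: dict, qrels: str, run: str) -> float:
    cmd = [metric["command"], *metric["params"].split(), qrels, run]
    output = subprocess.check_output(cmd, text=True)
    # trec_eval prints lines like: "ndcg_cut_10 <tab> all <tab> 0.3673"
    fields = output.strip().splitlines()[-1].split(metric["separator"])
    return round(float(fields[metric["parse_index"]]), metric["metric_precision"])

ndcg10 = {
    "command": "tools/eval/trec_eval.9.0.4/trec_eval",
    "params": "-c -m ndcg_cut.10",
    "separator": "\t",
    "parse_index": 2,
    "metric_precision": 4,
}
# evaluate(ndcg10, "qrels.beir-v1.0.0-webis-touche2020.test.txt", "run.txt")
```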
+ +## Retrieval + +After indexing has completed, you should be able to perform retrieval as follows: + +``` +${ranking_cmds} +``` + +Evaluation can be performed using `trec_eval`: + +``` +${eval_cmds} +``` + +## Effectiveness + +With the above commands, you should be able to reproduce the following results: + +${effectiveness} diff --git a/src/main/resources/regression/beir-v1.0.0-bioasq-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-bioasq-multifield.yaml new file mode 100644 index 0000000000..4d5de3073e --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-bioasq-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-bioasq-multifield +corpus_path: collections/beir-v1.0.0/corpus/bioasq/ + +index_path: indexes/lucene-index.beir-v1.0.0-bioasq-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 14914602 + documents (non-empty): 14914585 + total terms: 2099554317 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): bioasq" + id: test + path: topics.beir-v1.0.0-bioasq.test.tsv.gz + qrel: qrels.beir-v1.0.0-bioasq.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.4646 + R@100: + - 0.7145 + R@1000: + - 0.8428 diff --git a/src/main/resources/regression/beir-v1.0.0-climate-fever-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-climate-fever-multifield.yaml index 12c8d3e6f8..86b26031a6 100644 --- a/src/main/resources/regression/beir-v1.0.0-climate-fever-multifield.yaml +++ b/src/main/resources/regression/beir-v1.0.0-climate-fever-multifield.yaml @@ -2,7 +2,7 @@ corpus: beir-v1.0.0-climate-fever-multifield corpus_path: collections/beir-v1.0.0/corpus/climate-fever/ -index_path: indexes/lucene-index.beir-v1.0.0-climate-multifield/ +index_path: indexes/lucene-index.beir-v1.0.0-climate-fever-multifield/ collection_class: BeirMultifieldCollection generator_class: DefaultLuceneDocumentGenerator index_threads: 1 diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-android-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-android-multifield.yaml new file mode 100644 index 0000000000..a347a65dda --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-android-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-android-multifield +corpus_path: collections/beir-v1.0.0/corpus/cqadupstack-android/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-android-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 22998 + documents (non-empty): 
22998 + total terms: 1591284 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-android" + id: test + path: topics.beir-v1.0.0-cqadupstack-android.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-android.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.3709 + R@100: + - 0.6889 + R@1000: + - 0.8712 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-english-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-english-multifield.yaml new file mode 100644 index 0000000000..14213fefed --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-english-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-english-multifield +corpus_path: collections/beir-v1.0.0/corpus/cqadupstack-english/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-english-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 40221 + documents (non-empty): 40221 + total terms: 2006983 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-english" + id: test + path: topics.beir-v1.0.0-cqadupstack-english.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-english.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.3321 + R@100: + - 0.5842 + R@1000: + - 0.7574 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-gaming-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-gaming-multifield.yaml new file mode 100644 index 0000000000..6f98bb3992 --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-gaming-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-gaming-multifield +corpus_path: collections/beir-v1.0.0/corpus/cqadupstack-gaming/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-gaming-multifield/ +collection_class: BeirMultifieldCollection +generator_class: 
DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 45300 + documents (non-empty): 45300 + total terms: 2510477 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-gaming" + id: test + path: topics.beir-v1.0.0-cqadupstack-gaming.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-gaming.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.4418 + R@100: + - 0.7571 + R@1000: + - 0.8882 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-gis-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-gis-multifield.yaml new file mode 100644 index 0000000000..4762b867cb --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-gis-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-gis-multifield +corpus_path: collections/beir-v1.0.0/corpus/cqadupstack-gis/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-gis-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 37637 + documents (non-empty): 37637 + total terms: 3789161 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-gis" + id: test + path: topics.beir-v1.0.0-cqadupstack-gis.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-gis.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.2904 + R@100: + - 0.6458 + R@1000: + - 0.8248 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-mathematica-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-mathematica-multifield.yaml new file mode 100644 index 0000000000..f232579bc0 --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-mathematica-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-mathematica-multifield +corpus_path: 
collections/beir-v1.0.0/corpus/cqadupstack-mathematica/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-mathematica-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 16705 + documents (non-empty): 16705 + total terms: 2234369 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-mathematica" + id: test + path: topics.beir-v1.0.0-cqadupstack-mathematica.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-mathematica.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.2046 + R@100: + - 0.5215 + R@1000: + - 0.7559 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-physics-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-physics-multifield.yaml new file mode 100644 index 0000000000..b53f74ff88 --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-physics-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-physics-multifield +corpus_path: collections/beir-v1.0.0/corpus/cqadupstack-physics/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-physics-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 38316 + documents (non-empty): 38316 + total terms: 3542078 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-physics" + id: test + path: topics.beir-v1.0.0-cqadupstack-physics.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-physics.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.3248 + R@100: + - 0.6486 + R@1000: + - 0.8506 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-programmers-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-programmers-multifield.yaml new file mode 100644 index 
0000000000..9eca7ea870 --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-programmers-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-programmers-multifield +corpus_path: collections/beir-v1.0.0/corpus/cqadupstack-programmers/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-programmers-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 32176 + documents (non-empty): 32176 + total terms: 3905694 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-programmers" + id: test + path: topics.beir-v1.0.0-cqadupstack-programmers.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-programmers.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.2963 + R@100: + - 0.6194 + R@1000: + - 0.8096 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-stats-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-stats-multifield.yaml new file mode 100644 index 0000000000..1927c1ce74 --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-stats-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-stats-multifield +corpus_path: collections/beir-v1.0.0/corpus/cqadupstack-stats/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-stats-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 42269 + documents (non-empty): 42269 + total terms: 5073873 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-stats" + id: test + path: topics.beir-v1.0.0-cqadupstack-stats.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-stats.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.2790 + R@100: + - 0.5719 + R@1000: + - 0.7619 \ No newline at end of file diff --git 
a/src/main/resources/regression/beir-v1.0.0-cqadupstack-tex-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-tex-multifield.yaml new file mode 100644 index 0000000000..9e524a206f --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-tex-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-tex-multifield +corpus_path: collections/beir-v1.0.0/corpus/cqadupstack-tex/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-tex-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 68184 + documents (non-empty): 68184 + total terms: 9155404 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-tex" + id: test + path: topics.beir-v1.0.0-cqadupstack-tex.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-tex.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.2086 + R@100: + - 0.4954 + R@1000: + - 0.7222 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-unix-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-unix-multifield.yaml new file mode 100644 index 0000000000..0f66caa32c --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-unix-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-unix-multifield +corpus_path: collections/beir-v1.0.0/corpus/cqadupstack-unix/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-unix-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 47382 + documents (non-empty): 47382 + total terms: 5449726 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-unix" + id: test + path: topics.beir-v1.0.0-cqadupstack-unix.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-unix.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + 
results: + nDCG@10: + - 0.2788 + R@100: + - 0.5721 + R@1000: + - 0.7783 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-webmasters-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-webmasters-multifield.yaml new file mode 100644 index 0000000000..f25a364efb --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-webmasters-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-webmasters-multifield +corpus_path: collections/beir-v1.0.0/corpus/cqadupstack-webmasters/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-webmasters-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 17405 + documents (non-empty): 17405 + total terms: 1358292 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-webmasters" + id: test + path: topics.beir-v1.0.0-cqadupstack-webmasters.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-webmasters.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.3008 + R@100: + - 0.6100 + R@1000: + - 0.8226 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-wordpress-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-wordpress-multifield.yaml new file mode 100644 index 0000000000..80114e0d1e --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-cqadupstack-wordpress-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-cqadupstack-wordpress-multifield +corpus_path: collections/beir-v1.0.0/corpus/cqadupstack-wordpress/ + +index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-wordpress-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 48605 + documents (non-empty): 48605 + total terms: 5151575 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): cqadupstack-wordpress" + id: test + path: 
topics.beir-v1.0.0-cqadupstack-wordpress.test.tsv.gz + qrel: qrels.beir-v1.0.0-cqadupstack-wordpress.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.2562 + R@100: + - 0.5526 + R@1000: + - 0.7848 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-dbpedia-entity-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-dbpedia-entity-multifield.yaml new file mode 100644 index 0000000000..62884876ff --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-dbpedia-entity-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-dbpedia-entity-multifield +corpus_path: collections/beir-v1.0.0/corpus/dbpedia-entity/ + +index_path: indexes/lucene-index.beir-v1.0.0-dbpedia-entity-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 4635922 + documents (non-empty): 4635863 + total terms: 152205484 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): dbpedia-entity" + id: test + path: topics.beir-v1.0.0-dbpedia-entity.test.tsv.gz + qrel: qrels.beir-v1.0.0-dbpedia-entity.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.3128 + R@100: + - 0.3981 + R@1000: + - 0.5848 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-fever-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-fever-multifield.yaml new file mode 100644 index 0000000000..b8eb8edee7 --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-fever-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-fever-multifield +corpus_path: collections/beir-v1.0.0/corpus/fever/ + +index_path: indexes/lucene-index.beir-v1.0.0-fever-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 5396138 + documents (non-empty): 5396092 + total terms: 310655704 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - 
name: "BEIR (v1.0.0): fever" + id: test + path: topics.beir-v1.0.0-fever.test.tsv.gz + qrel: qrels.beir-v1.0.0-fever.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.7530 + R@100: + - 0.9309 + R@1000: + - 0.9599 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-hotpotqa-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-hotpotqa-multifield.yaml new file mode 100644 index 0000000000..92578accc8 --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-hotpotqa-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-hotpotqa-multifield +corpus_path: collections/beir-v1.0.0/corpus/hotpotqa/ + +index_path: indexes/lucene-index.beir-v1.0.0-hotpotqa-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 5233235 + documents (non-empty): 5233230 + total terms: 158180689 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): hotpotqa" + id: test + path: topics.beir-v1.0.0-hotpotqa.test.tsv.gz + qrel: qrels.beir-v1.0.0-hotpotqa.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.6027 + R@100: + - 0.7400 + R@1000: + - 0.8405 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-quora-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-quora-multifield.yaml new file mode 100644 index 0000000000..57554a8b90 --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-quora-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-quora-multifield +corpus_path: collections/beir-v1.0.0/corpus/quora/ + +index_path: indexes/lucene-index.beir-v1.0.0-quora-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 522931 + documents (non-empty): 522913 + total terms: 4390852 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): quora" + id: test + 
path: topics.beir-v1.0.0-quora.test.tsv.gz + qrel: qrels.beir-v1.0.0-quora.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.7886 + R@100: + - 0.9733 + R@1000: + - 0.9950 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-signal1m-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-signal1m-multifield.yaml new file mode 100644 index 0000000000..8a16fbd5e5 --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-signal1m-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-signal1m-multifield +corpus_path: collections/beir-v1.0.0/corpus/signal1m/ + +index_path: indexes/lucene-index.beir-v1.0.0-signal1m-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 2866315 + documents (non-empty): 2866094 + total terms: 32240067 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): signal1m" + id: test + path: topics.beir-v1.0.0-signal1m.test.tsv.gz + qrel: qrels.beir-v1.0.0-signal1m.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.3304 + R@100: + - 0.3703 + R@1000: + - 0.5642 \ No newline at end of file diff --git a/src/main/resources/regression/beir-v1.0.0-webis-touche2020-multifield.yaml b/src/main/resources/regression/beir-v1.0.0-webis-touche2020-multifield.yaml new file mode 100644 index 0000000000..e0d3b43598 --- /dev/null +++ b/src/main/resources/regression/beir-v1.0.0-webis-touche2020-multifield.yaml @@ -0,0 +1,57 @@ +--- +corpus: beir-v1.0.0-webis-touche2020-multifield +corpus_path: collections/beir-v1.0.0/corpus/webis-touche2020/ + +index_path: indexes/lucene-index.beir-v1.0.0-webis-touche2020-multifield/ +collection_class: BeirMultifieldCollection +generator_class: DefaultLuceneDocumentGenerator +index_threads: 1 +index_options: -storePositions -storeDocvectors -storeRaw -fields title +index_stats: + documents: 382545 + documents (non-empty): 381918 + total terms: 74066724 + +metrics: + - metric: nDCG@10 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m ndcg_cut.10 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@100 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.100 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + - metric: R@1000 + command: tools/eval/trec_eval.9.0.4/trec_eval + params: -c -m recall.1000 + separator: "\t" + parse_index: 2 + metric_precision: 4 + can_combine: false + +topic_reader: TsvString +topic_root: src/main/resources/topics-and-qrels/ +qrels_root: src/main/resources/topics-and-qrels/ +topics: + - name: "BEIR (v1.0.0): 
webis-touche2020" + id: test + path: topics.beir-v1.0.0-webis-touche2020.test.tsv.gz + qrel: qrels.beir-v1.0.0-webis-touche2020.test.txt + +models: + - name: bm25 + display: BM25 + params: -bm25 -removeQuery -hits 1000 -fields contents=1.0 title=1.0 + results: + nDCG@10: + - 0.3673 + R@100: + - 0.5376 + R@1000: + - 0.8668 \ No newline at end of file