[ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP'2021: Phrase Retrieval Learns Passage Retrieval, Too https://arxiv.org/abs/2012.12624
The official implementation of ICLR 2020, "Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering".
Code and Models for the paper "End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering" (NeurIPS 2021)
📔 notes for Multi-hop Reading Comprehension and open-domain question answering
This is the official repository for NAACL 2021, "XOR QA: Cross-lingual Open-Retrieval Question Answering".
Code and resources for papers "Generation-Augmented Retrieval for Open-Domain Question Answering" and "Reader-Guided Passage Reranking for Open-Domain Question Answering", ACL 2021
The official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies".
WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000+ "why" question-answer-rationale triplets.
ACL 2023: Evaluating Open-Domain Question Answering in the Era of Large Language Models
Code for the ACL 2023 long paper - Expand, Rerank, and Retrieve: Query Reranking for Open-Domain Question Answering
Code Repo for "Differentiable Open-Ended Commonsense Reasoning" (NAACL 2021)
Awesome Question Answering
Evaluation framework for open-domain question answering.
Open-WikiTable: Dataset for Open-Domain Question Answering with Complex Reasoning over Tables
Open-Retrieval Conversational Machine Reading: A new setting & OR-ShARC dataset
A Turkish question answering system made by fine-tuning BERTurk and XLM-Roberta models.
IIT Guwahati's gold-medal-winning solution to DevRev's "Expert Answers in a Flash: Improving Domain-Specific QA" challenge
An enhanced open-dialogue context generator built on General Language Model (GLM) pretraining with autoregressive blank infilling
Code for the ACL2022 paper "C-MORE: Pretraining to Answer Open-Domain Questions by Consulting Millions of References"