lzhou1998/scrolls-for-longtext-models

Do experiments on SCROLLS benchmark for my own long text models
SCROLLS

This repository contains the official code of the paper: "SCROLLS: Standardized CompaRison Over Long Language Sequences".

Setup instructions are in the baselines and evaluator folders.

For the live leaderboard, check out the official website.


Loading the SCROLLS Benchmark Datasets
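A minimal sketch of loading the benchmark with the Hugging Face `datasets` library, assuming the datasets are hosted on the Hub under the `tau/scrolls` namespace (the task/config names below are the seven SCROLLS tasks from the paper):

```python
# Sketch: load one SCROLLS task via Hugging Face `datasets`.
# Assumes the benchmark is available on the Hub as "tau/scrolls".

# The seven SCROLLS tasks, each exposed as a separate dataset configuration.
SCROLLS_TASKS = [
    "gov_report",
    "summ_screen_fd",
    "qmsum",
    "narrative_qa",
    "qasper",
    "quality",
    "contract_nli",
]

def load_scrolls_task(task: str):
    """Download one SCROLLS task and return its DatasetDict
    (train/validation/test splits)."""
    if task not in SCROLLS_TASKS:
        raise ValueError(f"Unknown SCROLLS task: {task!r}")
    # Lazy import so the task list can be used without `datasets` installed.
    from datasets import load_dataset
    return load_dataset("tau/scrolls", task)

if __name__ == "__main__":
    dataset = load_scrolls_task("gov_report")
    # Each example pairs a long "input" document with an "output" target.
    print(dataset["train"][0]["input"][:200])
```

Each task ships with `train`, `validation`, and (unlabeled) `test` splits; see the evaluator folder for scoring test-set predictions.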

Citation

@inproceedings{shaham-etal-2022-scrolls,
    title = "{SCROLLS}: Standardized {C}ompa{R}ison Over Long Language Sequences",
    author = "Shaham, Uri  and
      Segal, Elad  and
      Ivgi, Maor  and
      Efrat, Avia  and
      Yoran, Ori  and
      Haviv, Adi  and
      Gupta, Ankit  and
      Xiong, Wenhan  and
      Geva, Mor  and
      Berant, Jonathan  and
      Levy, Omer",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.823",
    pages = "12007--12021",
}

When citing SCROLLS, please make sure to cite all of the original dataset papers. [bibtex]
