Multilingual Hellaswag tasks #332

Merged · 60 commits · Oct 1, 2024
Commits
5c69eb0
add multilingual dynamic generative metrics
hynky1999 Sep 5, 2024
39c4220
Merge branch 'main' into geneartive_dynamic_metrics
hynky1999 Sep 5, 2024
2a5cdca
Merge branch 'geneartive_dynamic_metrics' into config_templates
hynky1999 Sep 5, 2024
2df9a08
draft
hynky1999 Sep 6, 2024
95729ee
finish multichoice config
hynky1999 Sep 9, 2024
3aa0579
Merge branch 'main' into geneartive_dynamic_metrics
hynky1999 Sep 9, 2024
b8f90a9
update tokenizers + install nltk reqs
hynky1999 Sep 9, 2024
f5a8717
use punkt tab
hynky1999 Sep 9, 2024
227f572
Update src/lighteval/utils/imports.py
hynky1999 Sep 13, 2024
d80b3ba
Update src/lighteval/metrics/normalizations.py
hynky1999 Sep 13, 2024
532bdad
fix imports
Sep 13, 2024
75f7ac5
remove unused import
Sep 13, 2024
f99e330
Merge branch 'main' into geneartive_dynamic_metrics
NathanHB Sep 13, 2024
92daf90
Merge branch 'main' into geneartive_dynamic_metrics
clefourrier Sep 14, 2024
f2a801d
Merge branch 'main' into geneartive_dynamic_metrics
NathanHB Sep 17, 2024
91d9d4f
finish implementation of templates + move stuff around
Sep 23, 2024
9356cc6
resolve nits
Sep 23, 2024
0fbc731
when in rome do as romans do (handle error messages the same way)
Sep 23, 2024
fa1fa83
fix utils
hynky1999 Sep 23, 2024
db36e16
Merge branch 'geneartive_dynamic_metrics' into config_templates
hynky1999 Sep 23, 2024
44aeecf
nicer tests + fix them
hynky1999 Sep 23, 2024
2bff963
nicer todo
hynky1999 Sep 23, 2024
3c9eb21
add nice docstrings 📃
hynky1999 Sep 23, 2024
4216ae2
add even more docstring
hynky1999 Sep 23, 2024
d8f56b8
nit
hynky1999 Sep 23, 2024
f26e88c
fix test
hynky1999 Sep 23, 2024
111d615
add multilingual to dev group
hynky1999 Sep 24, 2024
7ca4239
merge nli, add languages to literals
hynky1999 Sep 25, 2024
22eeddb
translation literals
hynky1999 Sep 25, 2024
7faaa8a
add nli
hynky1999 Sep 25, 2024
865bcbc
add copa tasks + fix translation literals
hynky1999 Sep 25, 2024
856e758
add hellaswag tasks
hynky1999 Sep 25, 2024
0c7987f
remove custom telugu hellaswag
hynky1999 Sep 25, 2024
4165fc2
remove hindi hellaswag
hynky1999 Sep 25, 2024
ba44fe9
add rcb + chinese nli
hynky1999 Sep 26, 2024
2d09256
Merge branch 'geneartive_dynamic_metrics' into config_templates
hynky1999 Sep 26, 2024
7324e89
Merge branch 'config_templates' into multilnag_nli_tasks
hynky1999 Sep 26, 2024
01161b2
Merge branch 'multilnag_nli_tasks' into multilang_copa_task
hynky1999 Sep 26, 2024
680124e
Merge branch 'multilang_copa_task' into hellaswag_tasks
hynky1999 Sep 26, 2024
ca865bd
Update src/lighteval/tasks/multilingual/tasks.py
hynky1999 Sep 30, 2024
1cc1187
Update src/lighteval/tasks/multilingual/tasks.py
hynky1999 Sep 30, 2024
d64251f
Update src/lighteval/tasks/multilingual/tasks.py
hynky1999 Sep 30, 2024
9806fab
Update src/lighteval/tasks/multilingual/tasks.py
hynky1999 Sep 30, 2024
35d7e6d
Update src/lighteval/tasks/multilingual/tasks.py
hynky1999 Sep 30, 2024
e560738
Update src/lighteval/tasks/multilingual/tasks.py
hynky1999 Sep 30, 2024
99524c5
Update src/lighteval/tasks/multilingual/tasks.py
hynky1999 Sep 30, 2024
150c76f
add two new tasks + docs
hynky1999 Sep 30, 2024
4e6100d
Merge branch 'multilnag_nli_tasks' of github.com:huggingface/lighteva…
hynky1999 Sep 30, 2024
7b561fe
Merge remote-tracking branch 'origin/main' into multilnag_nli_tasks
hynky1999 Sep 30, 2024
f9c7134
Merge branch 'multilnag_nli_tasks' into multilang_copa_task
hynky1999 Sep 30, 2024
b219e07
add nice docs
hynky1999 Sep 30, 2024
8899a1c
Merge branch 'multilang_copa_task' into hellaswag_tasks
hynky1999 Sep 30, 2024
fcad0e8
update hellaswag with docs
hynky1999 Sep 30, 2024
43809ab
move hellaswag to lighteval suite
hynky1999 Sep 30, 2024
a673d65
Update src/lighteval/tasks/multilingual/tasks.py
hynky1999 Sep 30, 2024
3c453f7
Merge remote-tracking branch 'origin/main' into hellaswag_tasks
hynky1999 Sep 30, 2024
815e897
enable returning none from templates + better typing
hynky1999 Sep 30, 2024
eec2444
change unofficial hellaswag names to have community_prefix + unify hel…
hynky1999 Oct 1, 2024
6284fe5
let strip be optional in hellaswag
hynky1999 Oct 1, 2024
9bf53e9
Merge branch 'main' into hellaswag_tasks
clefourrier Oct 1, 2024
Files changed
30 changes: 19 additions & 11 deletions src/lighteval/tasks/default_prompts.py
@@ -755,21 +755,29 @@ def headqa(line, task_name: str = None):
     )
 
 
-def hellaswag_harness(line, task_name: str = None):
-    def preprocess(text):
-        """Comes from AiHarness"""
-        # text = text.strip()
-        # NOTE: Brackets are artifacts of the WikiHow dataset portion of HellaSwag.
-        text = text.replace(" [title]", ". ")
-        text = re.sub("\\[.*?\\]", "", text)
-        text = text.replace("  ", " ")
-        return text
+def hellaswag_preprocess(
+    text: str, wikihow_artifacts: list[str] = [" [title]"], truncate_dots: bool = False, strip_text: bool = False
+):
+    """Comes from AiHarness"""
+    # text = text.strip()
+    # NOTE: Brackets are artifacts of the WikiHow dataset portion of HellaSwag.
+    for dot_repl in wikihow_artifacts:
+        text = text.replace(dot_repl, ". ")
+    text = re.sub("\\[.*?\\]", "", text)
+    text = text.replace("  ", " ")
+    if truncate_dots:
+        text = text.replace(r"\.+", r"\.")
+    if strip_text:
+        text = text.strip()
+    return text
 
+
+def hellaswag_harness(line, task_name: str = None):
     ctx = f"{line['ctx_a']} {line['ctx_b'].capitalize()} "
     return Doc(
         task_name=task_name,
-        query=preprocess(line["activity_label"] + ": " + ctx),
-        choices=[preprocess(ending) for ending in line["endings"]],
+        query=hellaswag_preprocess(line["activity_label"] + ": " + ctx),
+        choices=[hellaswag_preprocess(ending) for ending in line["endings"]],
         gold_index=int(line["label"]) if line["label"] != "" else -1,  # -1 for test
         # "metric": "choices_loglikelihood",
     )
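The refactor above pulls the preprocessing out of hellaswag_harness so that other tasks can pass language-specific WikiHow artifacts. A minimal sketch of calling the new helper (the sample string and artifact list are illustrative only, borrowed from the Turkish configuration further down):

```python
# Sketch, not part of the PR: exercising hellaswag_preprocess as defined above.
from lighteval.tasks.default_prompts import hellaswag_preprocess

raw = "Kadın bahçededir. [başlık] Çiçekleri sular. [adım] Sonra içeri girer."
# Each listed artifact is replaced with ". "; any remaining "[...]" spans are
# removed by the re.sub call, and double spaces are collapsed to one.
print(hellaswag_preprocess(raw, wikihow_artifacts=[" [title]", " [başlık]", " [adım]"]))
```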
2 changes: 1 addition & 1 deletion src/lighteval/tasks/lighteval_task.py
@@ -89,7 +89,7 @@ class LightevalTaskConfig:
     """
 
     name: str
-    prompt_function: Callable[[dict, str], Doc]
+    prompt_function: Callable[[dict, str], Doc | None]
     hf_repo: str
     hf_subset: str
     metric: ListLike[Metric | Metrics]
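The widened return type (Doc | None) lets a prompt function drop a sample instead of raising. A hedged sketch of what this enables; the filtering condition and field names are invented, and Doc is assumed importable from lighteval.tasks.requests as in this tree:

```python
# Sketch: a prompt function that skips malformed rows by returning None.
from lighteval.tasks.requests import Doc


def my_prompt_fn(line: dict, task_name: str = None) -> Doc | None:
    if line["label"] == "":  # hypothetical: unlabeled row, skip it
        return None
    return Doc(
        task_name=task_name,
        query=line["ctx_a"],
        choices=line["endings"],
        gold_index=int(line["label"]),
    )
```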
142 changes: 142 additions & 0 deletions src/lighteval/tasks/multilingual/tasks.py
@@ -27,6 +27,7 @@
 from lighteval.metrics.normalizations import LogProbTokenNorm
 from lighteval.tasks.lighteval_task import LightevalTaskConfig
 from lighteval.tasks.templates.copa import get_copa_prompt_function
+from lighteval.tasks.templates.hellaswag import get_hellaswag_prompt_function
 from lighteval.tasks.templates.nli import get_nli_prompt_function
 from lighteval.tasks.templates.utils.formulation import (
     CFFormulation,
@@ -386,6 +387,9 @@
         ),
         hf_repo="ai4bharat/IndicCOPA",
         hf_subset=f"translation-{standardize_tag(language.value)}",
+        # Since we use trust_dataset, we have to be careful about what is inside the dataset
+        # script. We thus lock the revision to ensure that the script doesn't change
+        hf_revision="d356ef19a4eb287e88a51d07a56b73ba88c7f188",
         evaluation_splits=["test"],
         metric=[
             loglikelihood_acc_metric(normalization=LogProbTokenNorm()),
@@ -443,6 +447,141 @@
 ]
 
 
+# ------------------------------- Hellaswag Tasks ------------------------------- #
+# Hellaswag is a commonsense reasoning task that requires models to complete a given scenario
+# with the most plausible ending. It tests the model's ability to understand and reason about
+# everyday situations and human behavior.
+
+# MLMM-Hellaswag: Multilingual adaptation of Hellaswag
+# Paper: https://arxiv.org/abs/2306.07610
+# This is a multilingual version of Hellaswag, part of the MLMM (Massive Language Model Meta-Evaluation) benchmark.
+# It evaluates commonsense reasoning abilities across multiple languages.
+mlmm_hellaswag_tasks = [
+    LightevalTaskConfig(
+        name=f"hellaswag_{lang.value}_{formulation.name.lower()}",
+        suite=["lighteval"],
+        prompt_function=get_hellaswag_prompt_function(
+            language=lang,
+            adapter=lambda line: {
+                # We don't use activity_label as they are not available
+                "ctx_a": line["ctx_a"],
+                "ctx_b": line["ctx_b"],
+                "continuations": line["endings"],
+                "gold_idx": int(line["label"]),
+            },
+            formulation=formulation,
+        ),
+        hf_repo="jon-tow/okapi_hellaswag",
+        hf_subset=standardize_tag(lang.value),
+        # Since we use trust_dataset, we have to be careful about what is inside the dataset
+        # script. We thus lock the revision to ensure that the script doesn't change
+        hf_revision="96ed8e0dfc6172dad1d3df338d7b8ba6c1ff9d83",
+        evaluation_splits=["validation"],
+        metric=[
+            loglikelihood_acc_metric(normalization=LogProbTokenNorm()),
+        ],
+        trust_dataset=True,
+    )
+    for lang in [
+        Language.ARABIC,
+        Language.BENGALI,
+        Language.CATALAN,
+        Language.DANISH,
+        Language.GERMAN,
+        Language.SPANISH,
+        Language.BASQUE,
+        Language.FRENCH,
+        Language.GUJARATI,
+        Language.HINDI,
+        Language.CROATIAN,
+        Language.HUNGARIAN,
+        Language.ARMENIAN,
+        Language.INDONESIAN,
+        Language.ICELANDIC,
+        Language.ITALIAN,
+        Language.KANNADA,
+        Language.MALAYALAM,
+        Language.MARATHI,
+        Language.NORWEGIAN,
+        Language.NEPALI,
+        Language.DUTCH,
+        Language.PORTUGUESE,
+        Language.ROMANIAN,
+        Language.RUSSIAN,
+        Language.SLOVAK,
+        Language.SERBIAN,
+        Language.SWEDISH,
+        Language.TAMIL,
+        Language.TELUGU,
+        Language.UKRAINIAN,
+        Language.VIETNAMESE,
+        Language.CHINESE,
+    ]
+    for formulation in [MCFFormulation(), CFFormulation(), HybridFormulation()]
+]
+
+# Hellaswag Turkish
+# This is a Turkish adaptation of the Hellaswag task.
+# While there's no specific paper for this version, it has been found to work well for evaluating
+# Turkish language models on commonsense reasoning tasks.
+
+# We don't handle them in single task as there is quite a lot of differences (dataset/subset, dot replacement, etc.)
+# which would make it hard to read
+hellaswag_tur_tasks = [
+    LightevalTaskConfig(
+        name=f"community_hellaswag_{Language.TURKISH.value}_{formulation.name.lower()}",
+        suite=["lighteval"],
+        prompt_function=get_hellaswag_prompt_function(
+            language=Language.TURKISH,
+            adapter=lambda line: {
+                "ctx_a": line["ctx_a"],
+                "ctx_b": line["ctx_b"],
+                "continuations": line["endings"],
+                "gold_idx": int(line["label"]),
+            },
+            formulation=formulation,
+            # https://github.com/malhajar17/lm-evaluation-harness_turkish/blob/main/lm_eval/tasks/hellaswag_tr-v0.2/utils.py
+            wikihow_artifacts=[" [title]", " [başlık]", " [adım]", " [header]"],
+        ),
+        hf_repo="malhajar/hellaswag_tr-v0.2",
+        hf_subset="default",
+        evaluation_splits=["validation"],
+        metric=[
+            loglikelihood_acc_metric(normalization=LogProbTokenNorm()),
+        ],
+    )
+    for formulation in [MCFFormulation(), CFFormulation(), HybridFormulation()]
+]
+
+# Hellaswag Thai
+# This is a Thai adaptation of the Hellaswag task.
+# Similar to the Turkish version, there's no specific paper, but it has been found to be effective
+# for evaluating Thai language models on commonsense reasoning tasks.
+hellaswag_tha_tasks = [
+    LightevalTaskConfig(
+        name=f"community_hellaswag_{Language.THAI.value}_{formulation.name.lower()}",
+        suite=["lighteval"],
+        prompt_function=get_hellaswag_prompt_function(
+            language=Language.THAI,
+            adapter=lambda line: {
+                "ctx_a": line["ctx_a"],
+                "ctx_b": line["ctx_b"],
+                "continuations": line["endings"],
+                "gold_idx": int(line["label"]),
+            },
+            formulation=formulation,
+        ),
+        hf_repo="HuggingFaceFW-Dev/hellaswag_thai",
+        hf_subset="default",
+        evaluation_splits=["validation"],
+        few_shots_split="train",
+        metric=[
+            loglikelihood_acc_metric(normalization=LogProbTokenNorm()),
+        ],
+    )
+    for formulation in [MCFFormulation(), CFFormulation(), HybridFormulation()]
+]
+
 TASKS_TABLE = [
     *xnli_tasks,
     *xnli2_tasks,
@@ -454,4 +593,7 @@
     *xcopa_tasks,
     *copa_indic_tasks,
     *parus_tasks,
+    *mlmm_hellaswag_tasks,
+    *hellaswag_tur_tasks,
+    *hellaswag_tha_tasks,
 ]
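As a quick sanity check, the registered configs can be enumerated from TASKS_TABLE. The exact names depend on the Language enum values and formulation names; assuming ISO 639-3 codes (e.g. Language.TURKISH.value == "tur") and formulation names mcf/cf/hybrid, the f-strings above resolve to names like community_hellaswag_tur_mcf:

```python
# Sketch: list the Hellaswag task names generated by this module.
from lighteval.tasks.multilingual.tasks import TASKS_TABLE

for task in TASKS_TABLE:
    if "hellaswag" in task.name:
        print(task.name)  # e.g. "hellaswag_ara_mcf", "community_hellaswag_tur_cf", ...
```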
18 changes: 14 additions & 4 deletions src/lighteval/tasks/templates/continuation.py
@@ -84,7 +84,7 @@ class ContinuationDictAdapter(TypedDict):
 
 def get_continuation_prompt_function(
     language: Language,
-    adapter: Callable[[dict], ContinuationInput] | ContinuationDictAdapter,
+    adapter: Callable[[dict], ContinuationInput | None] | ContinuationDictAdapter,
     formulation: Formulation = MCFFormulation(),
 ):
     """
@@ -121,11 +121,13 @@ def get_continuation_prompt_function(
     Returns:
         Callable: A function that generates Continuation prompt based on the given parameters.
     """
-    adapter_fn: Callable[[dict], ContinuationInput] = create_adapter_from_dict(adapter)  # type: ignore
+    adapter_fn = create_adapter_from_dict(adapter)
     translation_literals = TRANSLATION_LITERALS[language]
 
     def prepare_prompt(line: dict):
         cont_input = adapter_fn(line)
+        if cont_input is None:
+            return None
 
         instruction_val = cont_input.get("instruction")
         instruction = f"{instruction_val}\n" if instruction_val else ""
@@ -140,7 +142,11 @@ def prepare_prompt(line: dict):
         return cont_input, instruction, context, continuations
 
     def prompt_fn_cf(line, task_name: str):
-        cont_input, instruction, context, continuations = prepare_prompt(line)
+        prepared_prompt = prepare_prompt(line)
+        if prepared_prompt is None:
+            return None
+
+        cont_input, instruction, context, continuations = prepared_prompt
 
         context_follows_sentence_space = punctuation_ends_sentence(context, translation_literals)
         answers = build_answers(continuations, formulation, translation_literals, context_follows_sentence_space)
@@ -160,7 +166,11 @@ def prompt_fn_cf(line, task_name: str):
         )
 
     def prompt_fn_mcf(line, task_name: str):
-        cont_input, instruction, context, continuations = prepare_prompt(line)
+        prepared_prompt = prepare_prompt(line)
+        if prepared_prompt is None:
+            return None
+
+        cont_input, instruction, context, continuations = prepared_prompt
 
         options = build_choices(continuations, formulation, translation_literals)
         options = f"{options}\n" if options else ""
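Since prepare_prompt now propagates None from the adapter, row filtering can live in the adapter itself. A hypothetical adapter in that style; the field names are invented and the import paths are assumed from this PR's tree:

```python
# Sketch: an adapter that returns None for rows without a usable gold label;
# prompt_fn_cf / prompt_fn_mcf above then return None and the sample is skipped.
from lighteval.tasks.templates.continuation import get_continuation_prompt_function
from lighteval.utils.language import Language


def filtering_adapter(line: dict):
    if line["label"] == "":  # hypothetical field names
        return None
    return {
        "context": line["premise"],
        "continuations": line["endings"],
        "gold_idx": int(line["label"]),
    }


prompt_fn = get_continuation_prompt_function(Language.ENGLISH, filtering_adapter)
```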
9 changes: 7 additions & 2 deletions src/lighteval/tasks/templates/copa.py
@@ -74,7 +74,9 @@ class COPAAdapter(TypedDict):
 
 
 def get_copa_prompt_function(
-    language: Language, adapter: Callable[[dict], COPAInput] | COPAAdapter, formulation: Formulation = MCFFormulation()
+    language: Language,
+    adapter: Callable[[dict], COPAInput | None] | COPAAdapter,
+    formulation: Formulation = MCFFormulation(),
 ):
     """
     Create a templated prompt function for a COPA task.
@@ -109,7 +111,7 @@ def get_copa_prompt_function(
     Returns:
         Callable: A function that generates COPA prompts based on the given parameters.
     """
-    adapter_fn: Callable[[dict], COPAInput] = create_adapter_from_dict(adapter)  # type: ignore
+    adapter_fn = create_adapter_from_dict(adapter)
     continuation_prompt_fn = get_continuation_prompt_function(
         language, {"context": "context", "continuations": "continuations", "gold_idx": "gold_idx"}, formulation
     )
@@ -120,6 +122,9 @@ def copa_prompt(
         task_name: str,
     ):
         input_data = adapter_fn(line)
+        if input_data is None:
+            return None
+
         context = capitalize(input_data["context"].rstrip(PUNCT))
         cause_or_effect_trans = (
             translation_literals.cause_word