Cannot use f1/recall/precision arguments in CombinedEvaluations.compute #234

Open · fcakyon opened this issue Aug 5, 2022 · 6 comments

fcakyon (Contributor) commented Aug 5, 2022

This works:

import evaluate

metric = evaluate.load("f1")
metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], average=None)

This won't work:

metric = evaluate.combine(["f1"])
metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], average=None)

Reason:
The average argument is not included in the f1 metric's features:

features=datasets.Features(

and CombinedEvaluations.compute ignores any argument that is not listed in those features:

batch = {input_name: batch[input_name] for input_name in evaluation_module._feature_names()}
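
A self-contained sketch of what that filter does to the call above (the dict literal and variable names are illustrative, not the library's internals beyond the quoted comprehension):

# f1 only declares "predictions" and "references" as features, so the
# comprehension silently drops "average" before the metric's compute is reached.
feature_names = ["predictions", "references"]
batch = {"predictions": [0, 0, 1, 1, 0], "references": [0, 1, 0, 1, 0], "average": None}
batch = {input_name: batch[input_name] for input_name in feature_names}
print(batch)  # {'predictions': [0, 0, 1, 1, 0], 'references': [0, 1, 0, 1, 0]}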

Is this expected or a bug?

fcakyon changed the title from "Cannot use average argument of f1 score in CombinedEvaluations" to "Cannot use average argument of f1 score in CombinedEvaluations.compute" on Aug 5, 2022
fcakyon changed the title from "Cannot use average argument of f1 score in CombinedEvaluations.compute" to "Cannot use f1/recall/precision arguments in CombinedEvaluations.compute" on Aug 5, 2022
fcakyon (Contributor) commented Aug 11, 2022

@lvwerra do you have any opinion on that?

lvwerra (Member) commented Aug 15, 2022

Yes, this is a current limitation of combine: you can't pass any settings to compute, only the features. Rather than fixing this in combine, we aim to enable changing the settings when the metric is loaded in #169. This should be coming in the next few weeks.
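
A purely hypothetical sketch of how that could look once load-time settings land; passing average to evaluate.load is the proposed behaviour described above, not an API available in the released library at the time of this thread:

import evaluate

# Hypothetical (pending #169): fix `average` when each metric is loaded, so
# combine()/compute() need no extra keyword arguments afterwards.
f1 = evaluate.load("f1", average=None)          # assumed load-time setting
recall = evaluate.load("recall", average=None)  # assumed load-time setting
clf_metrics = evaluate.combine([f1, recall])
clf_metrics.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])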

m-movahhedinia commented

@lvwerra Do you mind if I ask whether you have an estimate of when this issue will be closed? I reviewed #188 and noticed that it seems to be passing for the pip release. Is there a chance we can get it in the next couple of days?

falcaopetri commented

@lvwerra I also need to pass custom kwargs to my metrics. My current workaround is to override CombinedEvaluations.compute in a child class and remove

batch = {input_name: batch[input_name] for input_name in evaluation_module._feature_names()}

This works in my specific use case because all my (custom) metrics accept **kwargs in their compute method, which means they will just ignore extra args.
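
A minimal sketch of that workaround, assuming CombinedEvaluations can be imported from evaluate.module and stores its metrics in an evaluation_modules list (both assumptions; adjust to your evaluate version). It forwards every keyword argument instead of filtering by feature names, and omits the prefixing/deduplication logic of the original compute:

from evaluate.module import CombinedEvaluations  # assumed import path

class PermissiveCombinedEvaluations(CombinedEvaluations):
    """Forward all kwargs to every metric instead of filtering by feature names."""

    def compute(self, predictions=None, references=None, **kwargs):
        results = {}
        for module in self.evaluation_modules:  # assumed attribute name
            # Metrics whose compute/_compute accepts **kwargs simply ignore
            # any keyword argument they do not use.
            results.update(
                module.compute(predictions=predictions, references=references, **kwargs)
            )
        return results

Instantiating the subclass directly with a list of already-loaded modules (rather than via evaluate.combine) should keep the rest of the calling code unchanged.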

ernestchu commented

Has this been resolved?

lvwerra (Member) commented Nov 17, 2022

Not yet. There is an issue with the sync mechanism between the Hub and the library, which is why we had to roll back #169. Merging it would break pre-0.3.0 installs, so we need to wait for sufficient adoption.
