
Adds average argument to AveragePrecision metric #477

Merged
merged 15 commits into master from average_precision_reduction
Sep 6, 2021

Conversation

SkafteNicki
Member

@SkafteNicki SkafteNicki commented Aug 24, 2021

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

What does this PR do?

Fixes #471
Adds an average argument to the AveragePrecision metric as requested by users. This changes the default output for multiclass and multilabel input from a list of per-class scores to the macro average.
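A rough usage sketch of the new behaviour (assuming the torchmetrics API of that release: the AveragePrecision module with num_classes and an average argument that takes "macro" for the new default and None for the old per-class output; the exact accepted values should be checked against the released docs):

```python
import torch
from torchmetrics import AveragePrecision

preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
                      [0.05, 0.75, 0.05, 0.05, 0.05],
                      [0.05, 0.05, 0.75, 0.05, 0.05],
                      [0.05, 0.05, 0.05, 0.75, 0.05]])
target = torch.tensor([0, 1, 3, 2])

# New default: a single macro-averaged score across classes
ap_macro = AveragePrecision(num_classes=5, average="macro")
print(ap_macro(preds, target))

# Previous behaviour, now opt-in: one score per class
ap_per_class = AveragePrecision(num_classes=5, average=None)
print(ap_per_class(preds, target))
```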

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@Borda
Member

Borda commented Aug 26, 2021

@SkafteNicki it seems average_precision now returns a single number instead of a tuple

@codecov

codecov bot commented Aug 26, 2021

Codecov Report

Merging #477 (6e41b91) into master (043714e) will decrease coverage by 0%.
The diff coverage is 94%.

@@          Coverage Diff          @@
##           master   #477   +/-   ##
=====================================
- Coverage      95%    95%   -0%     
=====================================
  Files         132    132           
  Lines        4652   4681   +29     
=====================================
+ Hits         4435   4459   +24     
- Misses        217    222    +5     

@SkafteNicki SkafteNicki force-pushed the average_precision_reduction branch from de7bfa2 to f9e3a17 on August 28, 2021 09:08
@SkafteNicki SkafteNicki enabled auto-merge (squash) September 3, 2021 11:45
@Borda Borda force-pushed the average_precision_reduction branch from c0192f0 to fb7e88c on September 3, 2021 12:21
Borda and others added 2 commits September 3, 2021 14:27
@mergify mergify bot added the ready label Sep 6, 2021
Contributor

@SeanNaren SeanNaren left a comment


nice!

@SeanNaren
Contributor

Fine with the wording choice here, but isn't "mean" usually the term? i.e. mAP?

@SkafteNicki SkafteNicki merged commit f96c717 into master Sep 6, 2021
@SkafteNicki SkafteNicki deleted the average_precision_reduction branch September 6, 2021 20:04
@Borda
Member

Borda commented Sep 6, 2021

Fine with the wording choice here, but isn't "mean" usually the term? i.e. mAP?

Good point, mAP is mean Average Precision, but it is not a simple statistical mean, rather some form of aggregated mean if I am correct 🐰
@SeanNaren please also have a look at #467

@SkafteNicki
Member Author

This is the confusing part with the naming of these metrics:

  • AveragePrecision (the metric in this PR) refers to the area under the precision-recall curve evaluated at different thresholds. So the name basically means: the average value of precision over the full range of recall values (see the sketch below).
  • MeanAveragePrecision or mAP most commonly refers to the metric used in object detection. It is also related to the average precision, but there the mean is calculated over different thresholds of intersection over union between objects (as far as I understand).
    So "same same but different".

@SeanNaren
Contributor

Makes sense, thanks for the clarification guys :)

Labels: enhancement (New feature or request), ready
Linked issue: AveragePrecision returns list of metrics per class
4 participants