
[Segmentation] Add mean IoU #1236

Merged: 30 commits merged into Lightning-AI:master on Apr 23, 2024

Conversation

NielsRogge
Contributor

What does this PR do?

This PR adds the mean Intersection over Union (mIoU) metric, especially useful for semantic segmentation (where the goal is to label each pixel of an image with a certain class).

I first tried to use the existing Jaccard index metric for this, but it isn't ideal: one needs to set average=None, and even then the mIoU can't be computed directly, since you need to take the union of the labels present in the predicted segmentation map and the ground-truth segmentation map.

Hence, this PR proposes to add a new "Segmentation" section, to which metrics like mIoU and panoptic quality (PQ) can be added.

Fixes #1124

The implementation is based on the one in mmsegmentation by OpenMMLab.

For now I've only created a functional variant; if this gets approved I can proceed with a module variant, as well as implementing the tests.
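To make the point above about absent labels concrete, here is a minimal, hedged NumPy sketch (not the torchmetrics implementation; `mean_iou_sketch` is a hypothetical helper name): per-class IoU is computed, and classes absent from both the prediction and the ground truth yield NaN so they drop out of the mean.

```python
# Hedged sketch of per-class IoU for semantic segmentation: classes absent
# from both the prediction and the target stay NaN and are excluded from
# the mean via nanmean. This is illustrative, not the PR's actual code.
import numpy as np

def mean_iou_sketch(pred, target, num_classes):
    """pred, target: integer arrays of shape (H, W) with class ids."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:          # class absent everywhere -> leave NaN
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious[c] = intersection / union
    return ious, np.nanmean(ious)

pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 0], [1, 2]])
ious, miou = mean_iou_sketch(pred, target, num_classes=4)
# class 3 is absent in both maps, so ious[3] is NaN and does not
# drag the mean down
```

With 0.0 in place of NaN for class 3, the mean would be pulled toward zero by a class that was never supposed to appear, which is exactly the problem with reusing the existing Jaccard index here.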

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

Member

@justusschock left a comment


Hi, thanks for opening this PR. I have left a few comments:

1.) Why didn't you rely on the jaccard index as pointed out in the issue you linked?
2.) You only implemented a multiclass case. Could you also implement a binary case (see our recent classification refactor) and comment on a multilabel case?

(Review comments on docs/source/index.rst and src/torchmetrics/functional/segmentation/mean_iou.py, all marked outdated/resolved.)
@NielsRogge
Contributor Author

NielsRogge commented Sep 26, 2022

Thanks for the quick review.

1.) Why didn't you rely on the jaccard index as pointed out in the issue you linked?

I made a quick Colab notebook to showcase what I'd like to achieve: https://colab.research.google.com/drive/1O8KlOdiz7JXAIKLh2TsKs11cwH5B4ZDK?usp=sharing.

I implemented the mIoU metric in HF evaluate, but as it's in NumPy it's rather slow. As can be seen, the per_category_iou returns nan for categories which aren't present in the preds and targets, taking into account the ignore_index.

Could we achieve this using the existing implementation? I think the current implementation doesn't take ignore_index into account in the same way as the implementation in evaluate does.
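The ignore_index behaviour being discussed can be sketched as follows. This is a hedged toy modeled on HF evaluate's mean_iou semantics as described above, not on torchmetrics internals; `per_category_iou` here is a hypothetical standalone function, and pixels whose target equals ignore_index are dropped before any IoU is computed.

```python
# Hedged sketch: ignore_index masks pixels out of the computation entirely,
# and classes never observed in the kept pixels remain NaN.
import numpy as np

def per_category_iou(pred, target, num_classes, ignore_index=255):
    keep = target != ignore_index           # drop ignored pixels first
    pred, target = pred[keep], target[keep]
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        union = ((pred == c) | (target == c)).sum()
        if union:                           # class present in kept pixels
            ious[c] = ((pred == c) & (target == c)).sum() / union
    return ious

pred = np.array([0, 0, 1, 1])
target = np.array([0, 255, 1, 255])         # two pixels are ignored
ious = per_category_iou(pred, target, num_classes=3)
# only the two kept pixels count; class 2 stays NaN
```

The key point is the order of operations: masking happens before the per-class union/intersection, so an ignored pixel can neither create a false union entry nor count as an error.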

@mergify mergify bot removed the has conflicts label Sep 29, 2022
@justusschock
Member

@NielsRogge fair enough. Just wanted to get some confirmation that we actually need this and aren't re-implementing something we already have under a different name :)

@Borda Borda added this to the v0.11 milestone Oct 4, 2022
@NielsRogge NielsRogge closed this Nov 17, 2022
@robmarkcole

robmarkcole commented Mar 23, 2024

@NielsRogge why close this?
As can be seen, the per_category_iou returns nan for categories which aren't present in the preds and targets, taking into account the ignore_index. This is essential IMO, as demonstrated in the linked colab:

Below, returning 0 rather than nan results in a clearly 'wrong' result:
[screenshot omitted]

Evaluate handles this:
[screenshot omitted]
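The numerical effect shown in those screenshots can be reproduced with a hedged toy example (arbitrary made-up per-class scores, not the values from the colab): averaging with 0.0 substituted for absent classes drags the score down, while NaN plus nanmean excludes them.

```python
# Toy illustration: 0.0 for absent classes vs NaN + nanmean.
import numpy as np

per_class = np.array([0.9, 0.8, np.nan, np.nan])   # classes 2, 3 absent
wrong = np.nan_to_num(per_class, nan=0.0).mean()   # absences count as failures
right = np.nanmean(per_class)                      # absences are excluded
# wrong ≈ 0.425, right ≈ 0.85
```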

@lantiga
Contributor

lantiga commented Mar 29, 2024

Let's reopen this one. I agree that our Jaccard implementation does not handle the output correctly when ignore_index is provided.

@lantiga lantiga reopened this Mar 29, 2024
@lantiga lantiga requested review from lantiga and stancld as code owners March 29, 2024 17:30
@github-actions github-actions bot added the documentation Improvements or additions to documentation label Mar 29, 2024
@mergify mergify bot removed the has conflicts label Mar 29, 2024
@SkafteNicki SkafteNicki self-assigned this Apr 12, 2024
@SkafteNicki SkafteNicki added the Priority Critical task/issue label Apr 12, 2024
@mergify mergify bot removed the has conflicts label Apr 19, 2024

codecov bot commented Apr 22, 2024

Codecov Report

Merging #1236 (1f5820a) into master (745c471) will decrease coverage by 35%.
Report is 1 commit behind head on master.
The diff coverage is 89%.

Additional details and impacted files
@@           Coverage Diff            @@
##           master   #1236     +/-   ##
========================================
- Coverage      69%     34%    -35%     
========================================
  Files         311     313      +2     
  Lines       17527   17596     +69     
========================================
- Hits        12085    5915   -6170     
- Misses       5442   11681   +6239     

@mergify mergify bot added the ready label Apr 23, 2024
src/torchmetrics/functional/segmentation/utils.py Outdated Show resolved Hide resolved
SkafteNicki and others added 2 commits April 23, 2024 13:31
Co-authored-by: Jirka Borovec <6035284+Borda@users.noreply.github.com>
@mergify mergify bot removed the has conflicts label Apr 23, 2024
@Borda Borda enabled auto-merge (squash) April 23, 2024 11:59
@Borda Borda merged commit af32fd0 into Lightning-AI:master Apr 23, 2024
62 checks passed
Labels
documentation (Improvements or additions to documentation), New metric, Priority (Critical task/issue), ready
Projects
None yet
Development

Successfully merging this pull request may close these issues.

mIoU
7 participants