
IoU metric returns 0 score for classes not present in prediction or target #3097

Closed
abrahambotros opened this issue Aug 21, 2020 · 1 comment · Fixed by #3098
Labels: bug (Something isn't working), help wanted (Open to be worked on)

Comments


abrahambotros commented Aug 21, 2020

🐛 Bug

The iou metric implementation always returns a score of 0 for a class that is not present in either the prediction or the target. This can lead to a deflated score even for perfectly-predicted examples.

Case 1: one affected case is multi-class semantic segmentation of an image that does not contain one of the classes. This can be outlined as follows:

  • We have 3 possible classes in this dataset (0, 1, and 2, where 0 can optionally be the background class).
  • Ground-truth target for an image consists only of classes 0 and 2.
  • Model perfectly predicts the target.
  • The IoU score should be 1.0 (perfect), but the actual score will be deflated (0.67) since there will be an unnecessary penalty for class 1.
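
(Concretely: class 0 scores 1/1 and class 2 scores 1/1, but class 1 has intersection 0 and union 0, which the implementation scores as 0, so the mean is (1 + 0 + 1) / 3 ≈ 0.67.)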

Case 2: another example, which is a bit more implementation-dependent:

  • Target contains only 1's.
  • Prediction perfectly assigns all 1's.
  • The IoU score should be 1.0 (perfect), but the actual score will be deflated (0.5) since there will be an unnecessary penalty for class 0.
  • This only occurs when a higher-numbered class is present and lower-numbered classes are not.

Case 3: all of the above are also affected by the num_classes parameter passed to the functional iou implementation. If num_classes=N is given, then every class with id < N that does not appear in the target or prediction will always be assigned an IoU score of 0. For example, if N=10, and only classes 0 and 1 are present and correctly predicted, then classes 2-9 will all have an IoU score of 0.0.

In aggregate, for a dataset where many examples do not contain every class (e.g., a semantic segmentation dataset with many images in which not all classes are present), this can significantly deflate the (m)IoU score(s). It can also interact undesirably with checkpointing that monitors IoU-based metrics.
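
For context, per-class IoU is intersection over union, computed per class. Here is a minimal sketch of that computation (an assumed illustration, not Lightning's actual implementation), showing why a class absent from both tensors comes out as 0:

import torch

def iou_per_class(pred: torch.Tensor, target: torch.Tensor, num_classes: int) -> torch.Tensor:
    scores = torch.zeros(num_classes)
    for c in range(num_classes):
        intersection = ((pred == c) & (target == c)).sum().float()  # true positives
        union = ((pred == c) | (target == c)).sum().float()         # TP + FP + FN
        # A class absent from both pred and target has union == 0; mapping
        # the 0/0 case to 0 produces the deflated scores described above.
        scores[c] = intersection / union if union > 0 else 0.0
    return scores

iou_per_class(torch.tensor([0, 2]), torch.tensor([0, 2]), num_classes=3)
# tensor([1., 0., 1.]) -> mean 0.6667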

To Reproduce / Code sample

Case 1 above:

import torch
from pytorch_lightning.metrics.functional.classification import iou

target = torch.tensor([0, 2])
pred = torch.tensor([0, 2])

iou(pred, target) # Returns tensor(0.6667)
# Same computation, but with 'none' reduction to illustrate what score each class gets:
iou(pred, target, reduction='none') # Returns tensor([1., 0., 1.])

Case 2 above:

target = torch.tensor([1])
pred = torch.tensor([1])

iou(pred, target) # Returns tensor(0.5)
iou(pred, target, reduction='none') # Returns tensor([0., 1.])

Case 3 above:

target = torch.tensor([0, 1])
pred = torch.tensor([0, 1])

iou(pred, target, num_classes=10) # Returns tensor(0.2), or 2/10
iou(pred, target, num_classes=10, reduction='none') # Returns tensor([1., 1., 0., 0., 0., 0., 0., 0., 0., 0.])

Expected behavior

The fallback IoU score used for classes that are absent from the target and correctly absent from the prediction should be configurable. It should probably default to 1.0, which seems like the more expected behavior to me.

Case 1:

target = torch.tensor([0, 2])
pred = torch.tensor([0, 2])
iou(pred, target) # Should return tensor(1.)
iou(pred, target, reduction='none') # Should return tensor([1., 1., 1.])

Case 2:

target = torch.tensor([1])
pred = torch.tensor([1])
iou(pred, target) # Should return tensor(1.)
iou(pred, target, reduction='none') # Should return tensor([1., 1.])

Case 3:

target = torch.tensor([0, 1])
pred = torch.tensor([0, 1])
iou(pred, target, num_classes=10) # Should return tensor(1.)
iou(pred, target, num_classes=10, reduction='none') # Should return tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
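
A sketch of the configurable fallback described above (a hypothetical helper; the parameter name not_present_score follows the draft PR #3098, which later renamed it absent_score):

import torch

def iou_with_fallback(pred: torch.Tensor, target: torch.Tensor,
                      num_classes: int, not_present_score: float = 1.0) -> torch.Tensor:
    scores = torch.empty(num_classes)
    for c in range(num_classes):
        intersection = ((pred == c) & (target == c)).sum().float()
        union = ((pred == c) | (target == c)).sum().float()
        if union == 0:
            # Class absent from both pred and target: use the configurable
            # fallback instead of an unconditional 0.
            scores[c] = not_present_score
        else:
            scores[c] = intersection / union
    return scores

iou_with_fallback(torch.tensor([0, 1]), torch.tensor([0, 1]), num_classes=10).mean()
# tensor(1.)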

Environment

* CUDA:
	- GPU:
		- GeForce RTX 2070 with Max-Q Design
	- available:         True
	- version:           10.2
* Packages:
	- numpy:             1.19.1
	- pyTorch_debug:     False
	- pyTorch_version:   1.5.1
	- pytorch-lightning: 0.9.0rc18
	- tensorboard:       2.2.0
	- tqdm:              4.48.0
* System:
	- OS:                Linux
	- architecture:
		- 64bit
	- processor:         x86_64
	- python:            3.7.8
	- version:           #38~1596560323~20.04~7719dbd-Ubuntu SMP Tue Aug 4 19:12:34 UTC 2

Additional context

I have a draft PR open at #3098 that attempts to implement the expected behavior described above, and adds some tests for this. Any feedback welcome!


abrahambotros added the bug (Something isn't working) and help wanted (Open to be worked on) labels Aug 21, 2020
github-actions commented

Hi! Thanks for your contribution, great first issue!

abrahambotros added a commit to abrahambotros/pytorch-lightning that referenced this issue Aug 21, 2020
Fixes Lightning-AI#3097

- Allow configurable not_present_score for IoU for classes
  not present in target or pred. Defaults to 1.0.
- Also allow passing `num_classes` parameter through from iou
  metric class down to its underlying functional iou
  call.
Borda pushed a commit that referenced this issue Sep 17, 2020
* Fix IoU score for classes not present in target or pred

Fixes #3097

- Allow configurable not_present_score for IoU for classes
  not present in target or pred. Defaults to 1.0.
- Also allow passing `num_classes` parameter through from iou
  metric class down to its underlying functional iou
  call.

* Changelog: move IoU not-present score fix to [unreleased]

* IoU: avoid recomputing class presence in target and pred

Use already-computed support, true positives, and false positives to
determine if a class is not present in either target or pred.

* Test IoU against sklearn jaccard_score

Also add TODO to test our IoU's not_present_score against sklearn's
jaccard_score's zero_division when it becomes available.

* IoU: remove_bg -> ignore_index

Fixes #2736

- Rename IoU metric argument from `remove_bg` -> `ignore_index`.
- Accept an optional int class index to ignore, instead of a bool and
  instead of always assuming the background class has index 0.
- If given, ignore the class index when computing the IoU output,
  regardless of reduction method.

* Improve documentation for IoU not_present_score

* Update default IoU not_present_score to 0.0

* Add note about IoU division by zero

* Rename IoU not_present_score -> absent_score

* Update IoU absent score changelog wording

* Condense IoU absent_score argument docstring

* Remove unnecessary IoU ignore_index comment

* docstrings

* isort

* flake8

* Fix test of IoU against sklearn jaccard

Use macro instead of micro averaging in sklearn's jaccard score, to
match multi-class IoU, which conventionally takes per-class scores
before averaging.

Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
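
For reference, the merged behavior can be cross-checked against scikit-learn as the last commit above describes (a hedged sketch; assumes sklearn >= 0.21, where jaccard_score supports multi-class input):

import torch
from sklearn.metrics import jaccard_score

target = torch.tensor([0, 0, 1, 1, 2])
pred = torch.tensor([0, 0, 1, 2, 2])

# 'macro' averages per-class Jaccard scores, matching the convention of
# computing per-class IoU before averaging; 'micro' would pool all classes.
jaccard_score(target.numpy(), pred.numpy(), average='macro')
# 0.6666... (per-class scores: 1.0, 0.5, 0.5)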