
[Metrics] Panoptic Quality #50

Closed
justusschock opened this issue Mar 30, 2020 · 15 comments · Fixed by #929
Labels: enhancement (New feature or request), good first issue (Good for newcomers), help wanted (Extra attention is needed), New metric
Milestone

Comments

@justusschock
Member

🚀 Feature

Implement Panoptic Quality

@Borda changed the title from "[Metrics Package] Panoptic Quality" to "Metrics: Panoptic Quality" on Mar 30, 2020
@edenlightning changed the title from "Metrics: Panoptic Quality" to "[Metrics] Panoptic Quality" on Jun 18, 2020
@stale

stale bot commented Aug 17, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@ddrevicky
Contributor

Hi @ananyahjha93, are you working on this? If you're busy with other things, I could take a look at this metric :).

@justusschock
Member Author

@ddrevicky I think you can give it a shot, thanks!

cc @teddykoker

@ddrevicky
Contributor

I will most likely not have time to look at this now; if anyone else would like to take a look, feel free to do so :)

@ddrevicky removed their assignment on Nov 10, 2020
@Borda transferred this issue from Lightning-AI/pytorch-lightning on Mar 12, 2021
@github-actions

Hi! Thanks for your contribution, great first issue!

@SkafteNicki added the good first issue (Good for newcomers) label on Mar 15, 2021
@Borda added the enhancement (New feature or request) and help wanted (Extra attention is needed) labels on Mar 17, 2021
@stale

stale bot commented May 22, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label on May 22, 2021
stale bot closed this as completed on Jun 1, 2021
@InCogNiTo124
Contributor

A polite request for reopening the issue: PQ is an important metric and it would be great to have it.

@niberger
Contributor

Hello, I will give it a try this week, inspired by the COCO implementation: https://github.com/cocodataset/panopticapi/blob/master/panopticapi/evaluation.py

Any preliminary comments are most welcome, especially on the signature that the methods should have.
In any case I will submit a draft PR soon.

@Borda reopened this on Apr 4, 2022
@niberger
Contributor

niberger commented Apr 4, 2022

Regarding the spirit of the implementation to adopt, I do have a few questions since this is my first contribution to PL.

  • Should the metric return a single float (the actual panoptic quality), or should it return a dict of detailed intermediate results like the reference implementation in the COCO API does?
  • If I see small bugs/differences between the reference implementation and the reference paper, which one should I follow?

Reference paper
Reference implementation
My work so far

@niberger
Contributor

niberger commented Apr 4, 2022

Answer from @justusschock on Discord, transcribed here for visibility:
Regarding your questions:

  • Metrics (after the full computation, i.e. after compute has been called) usually return a single float/scalar tensor so that these values can easily be logged to a logger of your choice. Sometimes (like for a PR curve) this is not feasible because you can’t reduce it to a single scalar, but if possible we should try to get it like that. Note that if reduction is None, we should get a scalar per sample of the current batch.
  • That’s a very good question. Not sure how much this potential difference impacts the overall value. Usually I’d go with the paper, but in your specific case I’d opt for the reference implementation, since COCO is the established de facto standard, and for comparability and consistency I feel like we should match them.
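For context on the first point, below is a minimal, hedged sketch of what a scalar-returning PQ metric could look like in the torchmetrics `Metric` style. It assumes the per-segment matching (same class, IoU > 0.5, which makes the match unique per the reference paper) has already been done upstream; the class name, update signature, and state names are illustrative only and not the API that was eventually merged in #929.

```python
import torch
from torchmetrics import Metric


class PanopticQualitySketch(Metric):
    """Accumulates matched-segment statistics and reduces them to one scalar.

    PQ = sum of IoUs over true-positive matches / (TP + 0.5 * FP + 0.5 * FN),
    where a predicted segment matches a ground-truth segment of the same class
    iff their IoU exceeds 0.5.
    """

    def __init__(self) -> None:
        super().__init__()
        # running sums, synchronized across processes by torchmetrics
        self.add_state("iou_sum", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("true_positives", default=torch.tensor(0), dist_reduce_fx="sum")
        self.add_state("false_positives", default=torch.tensor(0), dist_reduce_fx="sum")
        self.add_state("false_negatives", default=torch.tensor(0), dist_reduce_fx="sum")

    def update(self, matched_ious: torch.Tensor, fp: int, fn: int) -> None:
        # `matched_ious` holds one IoU per matched (pred, gt) pair in this batch;
        # the segment matching itself is assumed to happen upstream.
        self.iou_sum += matched_ious.sum()
        self.true_positives += matched_ious.numel()
        self.false_positives += fp
        self.false_negatives += fn

    def compute(self) -> torch.Tensor:
        # a single scalar tensor, ready to be logged
        denom = self.true_positives + 0.5 * self.false_positives + 0.5 * self.false_negatives
        return self.iou_sum / denom if denom > 0 else torch.tensor(0.0)
```

The compute step is just the PQ definition from the reference paper, reduced to the single scalar discussed above.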

@bhack

bhack commented Feb 13, 2023

For Boundary PQ (Panoptic Quality) see:
#1500 (comment)
https://bowenc0221.github.io/boundary-iou/
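For readers unfamiliar with the linked work: Boundary PQ keeps the PQ formula but replaces mask IoU with Boundary IoU, the IoU of the two masks' boundary regions. Below is a rough sketch of that measure, assuming a fixed erosion width in pixels rather than the paper's image-diagonal-relative default; the function name and the SciPy-based erosion are illustrative, not the reference implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion


def boundary_iou(gt: np.ndarray, pred: np.ndarray, dilation: int = 2) -> float:
    """IoU of the boundary regions of two binary masks.

    The boundary region of a mask is approximated here as the mask minus its
    erosion, i.e. the mask pixels within `dilation` pixels of the contour.
    Boundary PQ swaps this measure in for plain mask IoU when matching and
    scoring segments.
    """
    gt, pred = gt.astype(bool), pred.astype(bool)
    struct = np.ones((3, 3), dtype=bool)
    gt_boundary = gt & ~binary_erosion(gt, structure=struct, iterations=dilation)
    pred_boundary = pred & ~binary_erosion(pred, structure=struct, iterations=dilation)
    union = np.logical_or(gt_boundary, pred_boundary).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(gt_boundary, pred_boundary).sum()) / float(union)
```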

@ananyahjha93 removed their assignment on Feb 13, 2023
@justusschock
Member Author

Reopening for missing things:

  • better test coverage
  • batched support
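As a hedged illustration of what the batched case could look like with the metric from #929 (the import path, the `things`/`stuffs` arguments, and the `(batch, H, W, 2)` layout of `(category_id, instance_id)` pairs are assumptions here and should be checked against the released torchmetrics API):

```python
import torch
from torchmetrics.detection import PanopticQuality  # import path is an assumption

# toy setup: category ids 0 and 1 are "things", 6 and 7 are "stuffs"
metric = PanopticQuality(things={0, 1}, stuffs={6, 7})

# batched inputs: (batch, height, width, 2), last dim = (category_id, instance_id)
preds = torch.randint(0, 2, (4, 32, 32, 2))
target = torch.randint(0, 2, (4, 32, 32, 2))

metric.update(preds, target)
print(metric.compute())  # a single scalar tensor
```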

@marcocaccin
Contributor

Ready to close this issue now? 😉

@Borda closed this as completed on Feb 24, 2023
@Borda modified the milestones: future, v1.0.0 on Jun 16, 2023
@tommiekerssies

So this does not include the matching of the predicted and ground truth thing segments?

@aymuos15

This comment was marked as resolved.
