Wrong results #169

Open
PasaImage opened this issue Sep 3, 2022 · 1 comment

Comments


PasaImage commented Sep 3, 2022

For testing, I ran the metrics with the MOT17 train labels used as both the predictions and the ground truth, but the result is below. This particular case uses the MOT17-02-DPM GT labels as both predictions and GT:

          Rcll  Prcn MT PT ML    FP FN IDs  FM  MOTA  MOTP IDt
MOT17   100.0% 61.9% 62  0  0 11422  0   0   0 38.5% 0.000   0
OVERALL 100.0% 61.9% 62  0  0 11422  0   0   0 38.5% 0.000   0

Why do we have 11422 FP?

  • (I expected Rcll = 100, Prcn = 100, FP = 0, MOTA = 100, MOTP = 100.)
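
For reference, the run was essentially the following (a sketch; directory names are placeholders, and the predictions directory simply contains a copy of each sequence's gt/gt.txt renamed to <SEQUENCE>.txt):

python -m motmetrics.apps.eval_motchallenge MOT17/train gt_as_predictions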
jvlmdr (Collaborator) commented Sep 4, 2022

This is probably caused by the non-pedestrian objects that are present in the MOT17 ground truth, which have their confidence set to 0. The apps.eval_motchallenge script filters the ground truth by min_confidence, while no such filtering is applied to the predicted tracks. See the difference here:

gt = OrderedDict([(Path(f).parts[-3], mm.io.loadtxt(f, fmt=args.fmt, min_confidence=1)) for f in gtfiles])
ts = OrderedDict([(os.path.splitext(Path(f).parts[-1])[0], mm.io.loadtxt(f, fmt=args.fmt)) for f in tsfiles])

As a result, the non-pedestrian objects will be present in the predictions but not the ground-truth (100% recall but < 100% precision, as you are seeing).
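
One way to confirm this (a minimal sketch, not an official recipe; the path is a placeholder and assumes the standard MOT17 layout) is to apply the same min_confidence filter when loading the file used as predictions:

import motmetrics as mm

gt_file = 'MOT17/train/MOT17-02-DPM/gt/gt.txt'  # placeholder path

# Load the same file twice, filtering conf < 1 rows from both sides,
# so the ignored (conf = 0) entries are dropped from the predictions too.
gt = mm.io.loadtxt(gt_file, fmt='mot16', min_confidence=1)
ts = mm.io.loadtxt(gt_file, fmt='mot16', min_confidence=1)

acc = mm.utils.compare_to_groundtruth(gt, ts, 'iou', distth=0.5)
mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['recall', 'precision', 'num_false_positives', 'mota'],
                     name='MOT17-02-DPM')
print(mm.io.render_summary(summary))  # should now report FP = 0 and perfect recall/precision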

This implementation matches the MOT challenge documentation: https://motchallenge.net/instructions/

The conf value contains the detection confidence in the det.txt files. For the ground truth, it acts as a flag whether the entry is to be considered. A value of 0 means that this particular instance is ignored in the evaluation, while any other value can be used to mark it as active. For submitted results, all lines in the .txt file are considered.
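
For illustration (values invented; the column layout is the standard MOTChallenge one: frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility), two gt.txt rows might look like:

1,1,912,484,97,109,0,7,1
1,2,1338,418,167,379,1,1,1

The first row has conf = 0 and is ignored by the evaluation (its class id also marks it as a non-pedestrian annotation); the second, with conf = 1, is an active entry. Result files get no such treatment: every row counts.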
