
nuScenes inference visualization result bad #197

Closed
lucasjinreal opened this issue May 14, 2019 · 23 comments
@lucasjinreal

Has anybody been able to get reasonable results on nuScenes? I trained for 98,750 steps on nuScenes and just got this result:

image

Have tried both newest config and previous, just as bad as above

@lucasjinreal
Author

About loss convergence:

runtime.step=98600, runtime.steptime=1.06, runtime.voxel_gene_time=0.01199, runtime.prep_time=0.1279, loss.cls_loss=0.245, loss.cls_loss_rt=0.1969, loss.loc_loss=0.5946, loss.loc_loss_rt=0.4983, loss.loc_elem=[0.02254, 0.03984, 0.03145, 0.02951, 0.03548, 0.04808, 0.04226], loss.cls_pos_rt=0.07668, loss.cls_neg_rt=0.1202, loss.dir_rt=0.3968, rpn_acc=0.9993, pr.prec@10=0.1292, pr.rec@10=0.8945, pr.prec@30=0.6781, pr.rec@30=0.6485, pr.prec@50=0.9225, pr.rec@50=0.4074, pr.prec@70=0.9897, pr.rec@70=0.1646, pr.prec@80=0.9974, pr.rec@80=0.05506, pr.prec@90=1.0, pr.rec@90=0.003025, pr.prec@95=1.0, pr.rec@95=6.747e-05, misc.num_vox=16232, misc.num_pos=338, misc.num_neg=199477, misc.num_anchors=200000, misc.lr=3.55e-08, misc.mem_usage=29.4
runtime.step=98650, runtime.steptime=1.002, runtime.voxel_gene_time=0.0337, runtime.prep_time=0.326, loss.cls_loss=0.2453, loss.cls_loss_rt=0.2832, loss.loc_loss=0.5964, loss.loc_loss_rt=0.7215, loss.loc_elem=[0.04749, 0.04151, 0.03852, 0.03103, 0.03609, 0.05135, 0.1148], loss.cls_pos_rt=0.1407, loss.cls_neg_rt=0.1424, loss.dir_rt=0.6351, rpn_acc=0.9993, pr.prec@10=0.1292, pr.rec@10=0.8943, pr.prec@30=0.6779, pr.rec@30=0.6478, pr.prec@50=0.9226, pr.rec@50=0.4068, pr.prec@70=0.9898, pr.rec@70=0.1644, pr.prec@80=0.9975, pr.rec@80=0.055, pr.prec@90=1.0, pr.rec@90=0.002998, pr.prec@95=1.0, pr.rec@95=6.651e-05, misc.num_vox=16637, misc.num_pos=204, misc.num_neg=199691, misc.num_anchors=200000, misc.lr=3e-08, misc.mem_usage=27.1

@lucasjinreal
Author

The loss seems low and the accuracy high, but the visualization is totally bad.

@poodarchu

Debugging via GitHub issues works great.

@lucasjinreal
Author

@poodarchu Put it this way: I am just testing on behalf of the repo owner... and it's taking time...

@forvd

forvd commented May 15, 2019

I tried the kitti viewer; it gave much better results than the simple inference.py.

@lucasjinreal
Author

@forvd nuScenes or KITTI? I don't think there is any difference between the inference visualization and kittiviewer. BTW, the KITTI model I trained works pretty well, but not the nuScenes one.

@forvd

forvd commented May 15, 2019

I trained on nuScenes and used the simple inference script. I got wrong results just like you; switching to the kitti viewer works fine for me.

@lucasjinreal
Author

@forvd That's weird... I think I need to change the inference script... Do you have any idea why the results differ? (Or did you just visualize the ground truth in the kitti viewer rather than the predictions?)

@forvd

forvd commented May 16, 2019

@jinfagang I use the kitti viewer on the predictions. The reason for the poor results may be a large number of low-score (0.1~0.3) bboxes.

@lucasjinreal
Author

@forvd Obviously... even obvious objects above have no boxes around them. Are you sure your visualization shows predictions, not ground truth? Can you give me a screenshot of what you got?

@forvd

forvd commented May 16, 2019

2019-05-16 15-16-35 screenshot

@lucasjinreal
Author

@forvd This lidar point cloud seems to overlap with another. Are you using the nuScenes lidar data or your own fusion of two 16-beam lidars?

@forvd

forvd commented May 16, 2019

Original nuScenes lidar data

@poodarchu

poodarchu commented May 16, 2019

Original nuScenes lidar data

@forvd

So what's the NDS score or AP of the model shown in the screenshot above?
It looks OK, so what do you mean by poor results?

@lucasjinreal
Author

@forvd You were right... in kittiviewer the results seem reasonable... the model still needs more fine-tuning...

image

@traveller59
Owner

@jinfagang I have tested simple-inference.ipynb. You need to use the dataset instance to get the correct point cloud (with 10 sweeps) and increase the score threshold to get reasonable results on the BEV image. You don't need to modify other parts of simple-inference.ipynb.
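The score-threshold part can be sketched roughly like this (the `filter_by_score` helper, its 0.3 default, and the toy arrays are my own illustration, not part of the repo's code):

```python
import numpy as np

def filter_by_score(box3d, scores, labels, threshold=0.3):
    # keep only detections whose confidence exceeds the threshold;
    # the helper name and 0.3 default are illustrative, not from the repo
    keep = scores > threshold
    return box3d[keep], scores[keep], labels[keep]

# toy example: three detections, one below the threshold
box3d = np.zeros((3, 7))            # (x, y, z, w, l, h, yaw) per box
scores = np.array([0.9, 0.2, 0.5])
labels = np.array([0, 1, 0])
box3d, scores, labels = filter_by_score(box3d, scores, labels)
print(len(scores))  # 2
```

Raising `threshold` is what removes the flood of low-score boxes that clutters the BEV image.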

@lucasjinreal
Author

@traveller59 Thanks, that's what I thought. Closing since we have a solution.

@mayanks888

Hi,
Which is the best config file for training on the nuScenes dataset for multiclass detection with PointPillars?
I'm kind of confused by all the config files in "second.pytorch/second/configs/".

@kargarisaac

kargarisaac commented May 26, 2019

@forvd @jinfagang @traveller59
Is there any way to filter out detected boxes with low scores in the kitti web viewer? I can't find any option for that.

image

@lucasjinreal
Author

@kargarisaac You need to edit it yourself: simply skip any box whose score is less than a threshold.

@lucasjinreal
Author

Here is a snippet that can be used as a reference:

import numpy as np

box3d = pred["box3d_lidar"].detach().cpu().numpy()
scores = pred["scores"].detach().cpu().numpy()
labels = pred["label_preds"].detach().cpu().numpy()
# filter out low-score detections
idx = np.where(scores > 0.3)[0]
box3d = box3d[idx, :]
# labels and scores are one-dimensional
labels = np.take(labels, idx)
scores = np.take(scores, idx)

@vatsal-shah

Hi @jinfagang
I was trying to evaluate a pretrained model (not PointPillars, a different one) on the nuScenes val dataset, and was getting very low AP. I had used the nuScenes authors' script to convert their dataset into KITTI format. I visualised my inferences with kitti_object_vis, and noticed the inferences were not that bad, but the ground truths were. To be precise, the ground truth had many missing annotations, which the pretrained model was detecting. Since they were not present in the ground truth, they were being counted as false positives, resulting in low AP.
At first I thought maybe the ground truths were being corrupted during the conversion process. So I visualized the original nuScenes images with the nuscenes devkit, without any conversion, and even they had the same issue. I have created an issue regarding this, but have yet to receive a reply.

Here are some examples, where I have visualized the ground truth annotations of only the Car category:
000309
000329
001721
005835

Did you face this issue as well?

You can use the following code to visualize some errors:

from nuscenes.nuscenes import NuScenes
nusc = NuScenes(version='v1.0-trainval', dataroot='/path/to/nuscenes', verbose=True)
sensor = 'CAM_FRONT'

tokens = ['0d9c4c2de24b49758901191f623d426b','0ed1a404c1fb4f7a87a30d3ee45f7f97','139bce92199440ea8929d1a1bd10dbda','224d34c137b64e4f8012c7280b4b9089','3abf81a7c3894000a4c508e6ced0caca','4b5202b4625f48799812af5d212e68a4','4e56a7a63b984597844eb55df9a2ba21','74109c3e72b24fb48e2262dc869ba868','8d265c91cc944ba790c09e74d2811d08','9827d52b3aa2484c8901f67f89742e15','f9438d42bb944364b5a75d6c5d0bc758','fbbad6309f1543f78634e49c50dfb779']

for my_sample_token in tokens:
    print(my_sample_token)
    my_sample = nusc.get('sample', my_sample_token)
    cam_front_data = nusc.get('sample_data', my_sample['data'][sensor])
    nusc.render_sample_data(cam_front_data['token'], out_path='/path/to/out_file.png')

@ryontang

Hi,
Which is the best config file for training on the nuScenes dataset for multiclass detection with PointPillars?
I'm kind of confused by all the config files in "second.pytorch/second/configs/".

Hi, did you solve this problem?

8 participants