This fork has been adjusted to use only four features of the Waymo dataset (x, y, z, intensity). How to format custom data for use in place of the Waymo dataset is also explained in fork_specific.
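For intuition, the sketch below shows one way a single LiDAR sweep could be stored as an (N, 4) float32 array of [x, y, z, intensity], matching the four features named above. The directory layout, file naming, and intensity scaling are hypothetical assumptions, not this fork's official spec; consult fork_specific for the authoritative format.

```python
# Hedged sketch: one sweep as an (N, 4) float32 array of [x, y, z, intensity].
# The path and intensity range are assumptions, not this fork's official spec.
import os
import numpy as np

os.makedirs("custom_dataset/lidar", exist_ok=True)        # hypothetical layout

points = np.zeros((1000, 4), dtype=np.float32)
points[:, :3] = np.random.uniform(-50.0, 50.0, size=(1000, 3))  # x, y, z in meters
points[:, 3] = np.random.uniform(0.0, 1.0, size=1000)           # normalized intensity

np.save("custom_dataset/lidar/frame_000001.npy", points)
loaded = np.load("custom_dataset/lidar/frame_000001.npy")
assert loaded.shape[1] == 4 and loaded.dtype == np.float32
```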
3D Object Detection and Tracking using center points in the bird's-eye view.
Center-based 3D Object Detection and Tracking,
Tianwei Yin, Xingyi Zhou, Philipp Krähenbühl,
arXiv technical report (arXiv 2006.11275)
```
@article{yin2021center,
  title={Center-based 3D Object Detection and Tracking},
  author={Yin, Tianwei and Zhou, Xingyi and Kr{\"a}henb{\"u}hl, Philipp},
  journal={CVPR},
  year={2021},
}
```
[2021-12-27] We release a multimodal fusion approach for 3D detection MVP.
[2021-12-27] A TensorRT implementation (by Wang Hao) of CenterPoint-PointPillar is available at URL. It runs at ~60 FPS on the Waymo Open Dataset. There is also a nice ONNX conversion repo by CarkusL.
[2021-06-20] The real-time version of CenterPoint ranked 2nd in the Waymo Real-time 3D detection challenge (72.8 mAPH / 57.1 ms). The corresponding technical report is available at URL. Code is at URL.
[2021-04-13] Better nuScenes results by fixing a sync-bn bug and using stronger augmentations. Please refer to nuScenes.
[2021-02-28] CenterPoint is accepted at CVPR 2021 🔥
[2021-01-06] CenterPoint v0.1 is released. Without bells and whistles, we rank first among all Lidar-only methods on the Waymo Open Dataset with a single model. Check out CenterPoint's model zoo for Waymo and nuScenes.
Any questions or suggestions are welcome!
Tianwei Yin tianweiy@mit.edu Xingyi Zhou zhouxy@cs.utexas.edu
Three-dimensional objects are commonly represented as 3D boxes in a point cloud. This representation mimics the well-studied image-based 2D bounding-box detection but comes with additional challenges. Objects in a 3D world do not follow any particular orientation, and box-based detectors have difficulties enumerating all orientations or fitting an axis-aligned bounding box to rotated objects. In this paper, we instead propose to represent, detect, and track 3D objects as points. Our framework, CenterPoint, first detects centers of objects using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation, and velocity. In a second stage, it refines these estimates using additional point features on the object. In CenterPoint, 3D object tracking simplifies to greedy closest-point matching. The resulting detection and tracking algorithm is simple, efficient, and effective. CenterPoint achieved state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA for a single model. On the Waymo Open Dataset, CenterPoint outperforms all previous single-model methods by a large margin and ranks first among all Lidar-only submissions.
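To make the tracking step concrete, here is a minimal sketch of greedy closest-distance matching in the BEV plane. The function name, the `max_dist` threshold, and the use of plain Euclidean distance are illustrative assumptions, not the code released in this repo.

```python
# Minimal sketch (not this repo's tracker): greedy closest-distance matching
# between current detections and existing tracks in the bird's-eye-view plane.
import numpy as np

def greedy_match(det_centers, track_centers, max_dist=2.0):
    """det_centers: (N, 2), track_centers: (M, 2) BEV positions in meters.
    Returns (det_idx, track_idx) pairs; unmatched detections start new tracks."""
    if len(det_centers) == 0 or len(track_centers) == 0:
        return []
    # Pairwise Euclidean distances between every detection and every track.
    dists = np.linalg.norm(det_centers[:, None, :] - track_centers[None, :, :], axis=-1)
    matches, used_dets, used_tracks = [], set(), set()
    # Greedily take the globally closest remaining pair until the threshold is exceeded.
    for flat in np.argsort(dists, axis=None):
        d, t = np.unravel_index(flat, dists.shape)
        if d in used_dets or t in used_tracks:
            continue
        if dists[d, t] > max_dist:
            break
        matches.append((d, t))
        used_dets.add(d)
        used_tracks.add(t)
    return matches
```

In the paper's formulation, current-frame centers are first projected back to the previous frame using the predicted (negated) velocity before this matching, which keeps the distance threshold small even for fast objects.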
- **Simple:** Two-sentence method summary: we use a standard 3D point cloud encoder with a few convolutional layers in the head to produce a bird's-eye-view heatmap and other dense regression outputs, including the offset to centers in the previous frame. Detection is a simple local peak extraction with refinement (a sketch of the peak extraction follows this list), and tracking is closest-distance matching.
- **Fast and Accurate:** Our best single model achieves 71.9 mAPH on Waymo and 65.5 NDS on nuScenes while running at 11+ FPS.
- **Extensible:** A simple replacement for anchor-based detectors in your novel algorithms.
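The sketch referenced above illustrates the local peak extraction step: CenterNet-style max-pooling NMS applied to a BEV heatmap. `extract_peaks`, the kernel size, and `k` are hypothetical choices for illustration, not this repo's API.

```python
# Rough sketch (hypothetical helper, not this repo's API): CenterNet-style
# max-pooling NMS to read object centers off a bird's-eye-view heatmap.
import torch
import torch.nn.functional as F

def extract_peaks(heatmap: torch.Tensor, k: int = 100, kernel: int = 3):
    """heatmap: (B, C, H, W) per-class center scores. Returns top-k peaks."""
    pad = (kernel - 1) // 2
    # A cell is a peak if it equals the maximum of its local neighborhood.
    hmax = F.max_pool2d(heatmap, kernel, stride=1, padding=pad)
    peaks = heatmap * (hmax == heatmap).float()
    b, c, h, w = peaks.shape
    scores, inds = torch.topk(peaks.view(b, -1), k)
    classes = torch.div(inds, h * w, rounding_mode="floor")
    ys = torch.div(inds % (h * w), w, rounding_mode="floor")
    xs = inds % w
    return scores, classes, ys, xs

# Example: random scores for 3 classes on a 128x128 BEV grid.
scores, classes, ys, xs = extract_peaks(torch.rand(1, 3, 128, 128), k=50)
```

In the full pipeline, the recovered grid cells are combined with the regressed sub-voxel offset, height, size, rotation, and velocity maps at those locations, and the second stage refines the resulting boxes with point features.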
| | #Frame | Veh_L2 | Ped_L2 | Cyc_L2 | MAPH | FPS |
|---|---|---|---|---|---|---|
| VoxelNet | 1 | 71.9 | 67.0 | 68.2 | 69.0 | 13 |
| VoxelNet | 2 | 73.0 | 71.5 | 71.3 | 71.9 | 11 |
| | #Frame | Veh_L2 | Ped_L2 | Cyc_L2 | MAPH | FPS |
|---|---|---|---|---|---|---|
| VoxelNet | 2 | 56.1 | 47.8 | 65.2 | 56.3 | 11 |
| | MAP ↑ | NDS ↑ | PKL ↓ | FPS ↑ |
|---|---|---|---|---|
| VoxelNet | 58.0 | 65.5 | 0.69 | 11 |
| | #Frame | Veh_L2 | Ped_L2 | Cyc_L2 | MOTA | FPS |
|---|---|---|---|---|---|---|
| VoxelNet | 2 | 59.4 | 56.6 | 60.0 | 58.7 | 11 |
| | AMOTA ↑ | AMOTP ↓ |
|---|---|---|
| VoxelNet (flip test) | 63.8 | 0.555 |
All results are measured on a Titan RTX GPU with batch size 1.
- ONCE_Benchmark: Implementation of CenterPoint on the ONCE dataset
- CenterPoint-KITTI: Reimplementation of CenterPoint on the KITTI dataset
- OpenPCDet: Implementation of CenterPoint in OpenPCDet framework (with configs for Waymo/nuScenes dataset)
- AFDet: another work inspired by CenterNet that achieves good performance on the KITTI/Waymo datasets
- mmdetection3d: CenterPoint in mmdet framework
- CenterPointTensorRT: CenterPoint-PointPillar for accelerated inference with TensorRT
- CenterPoint-ONNX: Convert CenterPoint-Pillar to ONNX / TensorRT
Please refer to INSTALL to set up libraries needed for distributed training and sparse convolution.
Please refer to GETTING_START to prepare the data, then follow the instructions there to reproduce our detection and tracking results. All detection configurations are included in configs.
If you are interested in training CenterPoint on a new dataset, use CenterPoint in a new task, or use a new network architecture for CenterPoint, please refer to DEVELOP. Feel free to send us an email for discussions or suggestions.
- Support visualization with Open3D
- Colab demo
- Docker
CenterPoint is released under the MIT license (see LICENSE). It is developed based on a forked version of det3d. We also incorporate a large amount of code from CenterNet and CenterTrack. See the NOTICE for details. Note that both the nuScenes and Waymo datasets are under non-commercial licenses.
This project would not be possible without multiple great open-sourced codebases. We list some notable examples below.