DBNet is a large-scale driving behavior dataset that provides high-quality point clouds scanned by Velodyne lasers, high-resolution videos recorded by dashboard cameras, and standard driver behaviors (vehicle speed, steering angle) collected by real-time sensors.
Extensive experiments demonstrate that the extra depth information indeed helps networks determine driving policies. We hope DBNet will become a useful resource for the autonomous driving research community.
Created by Yiping Chen*, Jingkang Wang*, Jonathan Li, Cewu Lu, Zhipeng Luo, HanXue and Cheng Wang. (*equal contribution)
The resources of our work are available: [paper], [code], [video], [website], [challenge], [prepared data]
This work is based on our CVPR 2018 research paper, in which we propose DBNet, a large-scale dataset for driving behavior learning. You can also check our dataset webpage for a more detailed introduction.
In this repository, we release demo code and part of the prepared data for training with images only, as well as for leveraging feature maps or point clouds. The prepared data are accessible here. (More demo models and scripts will be released soon!)
- Tensorflow 1.2.0
- Python 2.7
- CUDA 8.0+ (For GPU)
- Python Libraries: numpy, scipy and laspy
The code has been tested with Python 2.7, TensorFlow 1.2.0, CUDA 8.0 and cuDNN 5.1 on Ubuntu 14.04. It may also work on other machines, directly or with minor modifications; pull requests and test reports are welcome.
To train a model to predict vehicle speeds and steering angles:
python train.py --model nvidia_pn --batch_size 16 --max_epoch 125 --gpu 0
The names of the models are consistent with our paper.
Log files and network parameters will be saved to the logs folder by default.
To see HELP for the training script:
python train.py -h
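For a rough picture of what the help output covers, the flags shown above suggest a command-line interface along these lines (a minimal sketch with argparse; the real train.py may define additional options and different defaults):

```python
import argparse

# Hedged sketch of the CLI implied by the flags in this README;
# option names beyond those shown above (e.g. --log_dir) are assumptions.
def build_parser():
    parser = argparse.ArgumentParser(description="Train a DBNet model.")
    parser.add_argument("--model", default="nvidia_pn",
                        help="model name, consistent with the paper")
    parser.add_argument("--batch_size", type=int, default=16)
    parser.add_argument("--max_epoch", type=int, default=125)
    parser.add_argument("--gpu", type=int, default=0,
                        help="index of the GPU to use")
    parser.add_argument("--log_dir", default="logs",
                        help="where logs and network parameters are written")
    return parser
```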
We can use TensorBoard to view the network architecture and monitor the training progress.
tensorboard --logdir logs
After training, you can evaluate the performance of models using evaluate.py. To plot the figures or calculate AUC, you may need to have the matplotlib library installed.
python evaluate.py --model_path logs/nvidia_pn/model.ckpt
To get the predictions of test data:
python predict.py
The results are saved in results/results (every segment) and results/behavior_pred.txt (merged) by default.
To change the storage location:
python predict.py --result_dir specified_dir
The result directory will be created automatically if it doesn't exist.
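The merged predictions can then be read back for downstream analysis. As a minimal sketch, assuming each line of behavior_pred.txt holds a whitespace-separated speed and steering-angle prediction (the actual column layout is not specified in this README):

```python
# Hedged loader for the merged prediction file; the two-column
# (speed, angle) layout assumed here is an illustration only.
def load_predictions(path="results/behavior_pred.txt"):
    preds = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2:
                # parse one (speed, steering angle) pair per line
                preds.append((float(fields[0]), float(fields[1])))
    return preds
```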
| Method | Setting | Target | Accuracy | AUC | ME | AE | AME |
|---|---|---|---|---|---|---|---|
| nvidia-pn | Videos + Laser Points | angle | 70.65% (<5) | 0.7799 | 29.46 | 4.23 | 20.88 |
| | | speed | 82.21% (<3) | 0.8701 | 18.56 | 1.80 | 9.68 |
This baseline is run on the dbnet-2018 challenge data and only nvidia_pn is tested. To evaluate different architectures comprehensively, several metrics are used: accuracy under different thresholds, area under curve (AUC), max error (ME), mean error (AE) and mean of max errors (AME).
The implementations of these metrics can be found in evaluate.py.
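As a rough illustration, these metrics can be sketched as follows. This is a minimal sketch, not the official implementation in evaluate.py; in particular, the exact AUC normalization and the per-segment definition of AME are assumptions:

```python
import numpy as np

# Hedged sketch of the evaluation metrics; definitions assumed here:
#   accuracy(t): fraction of predictions with absolute error below t
#   AUC:         normalized area under the accuracy-vs-threshold curve
#   ME / AE:     max / mean absolute error over all predictions
#   AME:         mean of the per-segment max errors

def accuracy(pred, gt, threshold):
    return float(np.mean(np.abs(pred - gt) < threshold))

def auc(pred, gt, thresholds):
    accs = np.array([accuracy(pred, gt, t) for t in thresholds])
    widths = np.diff(thresholds)
    area = np.sum((accs[:-1] + accs[1:]) / 2.0 * widths)  # trapezoidal rule
    return float(area / (thresholds[-1] - thresholds[0]))

def max_error(pred, gt):
    return float(np.max(np.abs(pred - gt)))

def mean_error(pred, gt):
    return float(np.mean(np.abs(pred - gt)))

def mean_of_max_errors(segments):
    # segments: list of (pred, gt) array pairs, one pair per test segment
    return float(np.mean([max_error(p, g) for p, g in segments]))
```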
DBNet was developed by MVIG, Shanghai Jiao Tong University* and SCSC Lab, Xiamen University* (alphabetical order).
If you find our work useful in your research, please consider citing:
@InProceedings{DBNet2018,
author = {Yiping Chen and Jingkang Wang and Jonathan Li and Cewu Lu and Zhipeng Luo and HanXue and Cheng Wang},
title = {LiDAR-Video Driving Dataset: Learning Driving Policies Effectively},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}
Our code is released under the Apache 2.0 License. The copyright information of DBNet can be checked here.