This repository contains the code used for our project on point cloud object tracking. The project was done as part of the course "Computer Vision spring 2023" at Aarhus University.
The repository can be cloned using the following command:
```shell
git clone https://github.com/Point-Cloud-Object-Tracking/point-cloud-object-tracking.git --recursive
```
SimpleTrack and OpenPCDet are included as submodules. To update the submodules, run the following command:
```shell
git submodule update --init --recursive
```
The code is written in Python 3.9.0 and uses conda to manage dependencies. To install conda, follow the instructions here. The dependencies are listed in the `environment.yaml` file.
The file was generated with `conda env export --from-history`:

```yaml
name: pcot_3.9
channels:
  - defaults
  - conda-forge
dependencies:
  - python==3.9
  - torchaudio
  - torchvision
  - pytorch
  - pytorch-cuda=11.8
  - tensorboardx
  - sharedarray
  - cudatoolkit
prefix: /home/cv08f23/.conda/envs/pcot_3.9
```
To create a new environment with the dependencies, run the following command:
```shell
conda env create -n <env_name> -f environment.yaml
```
Both SimpleTrack and OpenPCDet come with their own dependencies, listed in the `requirements.txt` file in each submodule's folder. To install SimpleTrack's dependencies, run the following command:
```shell
python -m pip install -r ./SimpleTrack/requirements.txt
```
OpenPCDet also depends on traveller59/spconv, a spatial sparse convolution library. spconv can be GPU-accelerated with Nvidia CUDA, but must be explicitly installed with CUDA support to do so. Our development environment has CUDA 12.0 installed, so we install spconv with CUDA 12.0 support. This can be done by running the following command:
```shell
python -m pip install spconv-cu120
```
If you have a different version of CUDA installed, you can find the version of spconv that matches your CUDA version here.
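The spconv wheels follow a `spconv-cuXYZ` naming scheme (e.g. `spconv-cu118` for CUDA 11.8, `spconv-cu120` for CUDA 12.0). As a small convenience sketch, assuming this naming scheme holds for your CUDA version, the wheel name can be derived mechanically:

```python
def spconv_wheel(cuda_version: str) -> str:
    """Map a CUDA version string like '12.0' to the matching spconv wheel name.

    Assumes the `spconv-cuXYZ` naming scheme; check that the wheel actually
    exists for your CUDA version before installing.
    """
    major, minor = cuda_version.split(".")[:2]
    return f"spconv-cu{major}{minor}"

print(spconv_wheel("12.0"))  # spconv-cu120
print(spconv_wheel("11.8"))  # spconv-cu118
```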
```shell
pushd OpenPCDet
python -m pip install -r ./requirements.txt
# Install this `pcdet` library and its dependent libraries:
python setup.py develop
popd
```
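To sanity-check the installation, you can verify that the libraries are importable from the active environment. A minimal sketch (the names below are import names, which can differ from the pip package names):

```python
import importlib.util

def is_importable(module_name: str) -> bool:
    """Return True if the module can be found in the current environment."""
    return importlib.util.find_spec(module_name) is not None

# After `python setup.py develop`, `pcdet` and `spconv` should both resolve.
for mod in ("pcdet", "spconv"):
    print(f"{mod}: {'OK' if is_importable(mod) else 'MISSING'}")
```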
The datasets used in the project are listed below. The datasets are not included in the repository and must be downloaded separately.
The KITTI dataset can be downloaded from here.
We extract the data from the training
and testing
folders and place them in the following structure:
```
~/datasets/kitti/
├── devkit
│   ├── cpp
│   ├── mapping
│   └── matlab
├── gt_database
├── ImageSets
├── testing
│   ├── calib
│   ├── image_2
│   ├── tmp
│   └── velodyne
└── training
    ├── calib
    ├── image_2
    ├── label_2
    └── velodyne
```
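Before running anything against the dataset, it can help to verify the layout on disk. The sketch below checks the tree above for the expected KITTI sub-directories; the layout dict simply mirrors the structure shown and is not part of OpenPCDet's API:

```python
from pathlib import Path

# Expected sub-directories per split, mirroring the tree above.
KITTI_LAYOUT = {
    "training": ("calib", "image_2", "label_2", "velodyne"),
    "testing": ("calib", "image_2", "velodyne"),
}

def missing_dirs(root: str, layout: dict) -> list:
    """Return the expected sub-directories that are missing under root."""
    base = Path(root).expanduser()
    return [
        f"{split}/{sub}"
        for split, subs in layout.items()
        for sub in subs
        if not (base / split / sub).is_dir()
    ]

missing = missing_dirs("~/datasets/kitti", KITTI_LAYOUT)
if missing:
    print("Missing directories:", ", ".join(missing))
```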
The nuScenes dataset can be downloaded from here.
We extract the data from the v1.0-trainval
and v1.0-test
folders and place them in the following structure:
```
~/datasets/nuScenes/
├── v1.0-test
│   ├── gt_database_10sweeps_withvelo
│   ├── maps
│   ├── samples
│   ├── sweeps
│   └── v1.0-test
└── v1.0-trainval
    ├── gt_database_10sweeps_withvelo
    ├── maps
    ├── samples
    ├── sweeps
    └── v1.0-trainval
```
To make it easier to work with the datasets, we create symbolic links to them inside the OpenPCDet data folder. This is done by running the following commands:
```shell
ln -sf ~/datasets/kitti $PWD/OpenPCDet/data/kitti
ln -sf ~/datasets/nuScenes $PWD/OpenPCDet/data/nuScenes
```
NOTE: if you have placed the datasets in a different location, you will need to change the paths in the above commands accordingly.
OpenPCDet provides pretrained models for the nuScenes dataset here and for KITTI here.
Here is how to train the PointPillars model on the nuScenes dataset:

```shell
cd OpenPCDet/tools
python train.py \
    --cfg_file cfgs/nuscenes_models/cbgs_pointpillar.yaml \
    --ckpt $PWD/OpenPCDet/output/nuscenes_models/cbgs_pointpillar/default/ckpt/latest_model.pth
```
Here is how to evaluate one of the 3D detection models, e.g. PointPillars, on the nuScenes dataset. The script first uses the model to infer 3D bounding boxes on the test set. It then uses SimpleTrack to track the objects in the test set. Finally, it evaluates the tracking results using the official nuScenes evaluation script, nuScenes-devkit/python-sdk/nuscenes/eval/detection/evaluate.py.
```shell
# --ckpt:       e.g. OpenPCDet/output/nuscenes_models/cbgs_pointpillar/default/ckpt/latest_model.pth
# --config:     e.g. OpenPCDet/output/nuscenes_models/cbgs_pointpillar/default/config.yaml
# --id:         e.g. pointpillar
# --batch-size: e.g. 16
./scripts/post_processing.sh \
    --ckpt <path_to_ckpt> \
    --config <path_to_config> \
    --id <experiment_id> \
    --batch-size <batch_size>
```
The different metrics are saved in the `results` folder and are also printed to stdout.
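The exact file layout under the results folder depends on post_processing.sh. Assuming the evaluation writes a JSON metrics summary (both the file name `metrics_summary.json` and the path below are assumptions to adjust to your setup), a sketch for pulling out the scalar summary values:

```python
import json
from pathlib import Path

def scalar_metrics(metrics: dict) -> dict:
    """Keep only scalar (numeric) entries, e.g. mAP/NDS-style summary values."""
    return {k: v for k, v in metrics.items() if isinstance(v, (int, float))}

# Hypothetical path: adjust to wherever post_processing.sh writes its output.
summary_path = Path("results/pointpillar/metrics_summary.json")
if summary_path.is_file():
    metrics = json.loads(summary_path.read_text())
    for name, value in sorted(scalar_metrics(metrics).items()):
        print(f"{name}: {value:.4f}")
```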
If you experience any issues with paths not being found, you can open the post_processing.sh script and change the paths to match the directory where you have put your copy of the datasets.
The `*.ipynb` files were used for testing purposes, and their results are described above.
The `slurm*.sh` files were used to execute training and scripts on Slurm node 5.