We use an LBC-style privileged distillation framework. Please follow the instructions below for the different training stages.
Make sure you have followed INSTALL.md before proceeding.
All training stages visualize progress and store weights to the wandb cloud (and locally as well), so make sure you have wandb set up already.
All of the following training stages assume a multi-GPU machine; on fewer GPUs, decrease the batch size accordingly.
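As a rough guide for adjusting the batch size (a linear-scaling rule of thumb, not a setting taken from this repo; the numbers below are hypothetical):

```python
def scaled_batch_size(base_batch: int, base_gpus: int, available_gpus: int) -> int:
    """Scale the batch size linearly with the number of available GPUs.

    A rule of thumb only: halve the GPUs, halve the batch. Always returns
    at least 1 so training can still run on a single small GPU.
    """
    return max(1, base_batch * available_gpus // base_gpus)

# e.g. a batch of 128 tuned for 4 GPUs becomes 32 on a single GPU:
# scaled_batch_size(128, 4, 1) -> 32
```

Remember that a smaller batch may also warrant a smaller learning rate.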
First, download the LAV dataset.
We have released the full 3425 trajectories. Each trajectory is self-contained, however, so you may download only a subset of them to run the training code.
You may also choose to download the split compressed files HERE.
After downloading the dataset, specify the dataset path in the following line of config.yaml:
data_dir: [PATH TO DATASET]
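Before launching training, it can save time to sanity-check that the configured path exists. A minimal sketch, assuming `data_dir` appears as a flat `key: value` line in config.yaml as shown above (the helper name is ours, not part of the repo):

```python
from pathlib import Path

def read_data_dir(config_path: str = "config.yaml") -> str:
    """Pull the data_dir value out of config.yaml.

    A deliberately simple line scan so no YAML library is needed;
    it only handles the flat `data_dir: <path>` form shown above.
    """
    for line in Path(config_path).read_text().splitlines():
        if line.strip().startswith("data_dir:"):
            return line.split(":", 1)[1].strip()
    raise KeyError(f"data_dir not found in {config_path}")

if __name__ == "__main__":
    data_dir = read_data_dir()
    # Fail fast with a clear message instead of deep inside a dataloader.
    assert Path(data_dir).is_dir(), f"dataset path does not exist: {data_dir}"
```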
python -m lav.train_bev
You can monitor the training and visualize the progress in your wandb page of project lav_bev.
Train the semantic segmentation model:
python -m lav.train_seg
Similarly, monitor the progress in the wandb page of project lav_seg.
Train the braking prediction model:
python -m lav.train_bra
You can monitor the training and visualize the progress in your wandb page of project lav_bra.
Write the painted lidar points to disk:
python -m lav.data_paint
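To make this step concrete, here is a minimal illustration of the point-painting idea — projecting lidar points into the image and appending the segmentation scores at each pixel. This is a sketch of the general technique, not the repo's implementation; the function name, array shapes, and intrinsics handling are assumptions:

```python
import numpy as np

def paint_points(points: np.ndarray, seg_probs: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Append per-class semantic scores to lidar points (point-painting style).

    points:    (N, 3) lidar points, assumed already in the camera frame.
    seg_probs: (C, H, W) softmax output of the segmentation model (assumed shape).
    K:         (3, 3) pinhole camera intrinsics.
    Returns (M, 3 + C) painted points for the points that land in the image.
    """
    C, H, W = seg_probs.shape
    uvw = points @ K.T                    # project with pinhole intrinsics
    z = uvw[:, 2]
    valid = z > 1e-6                      # keep only points in front of the camera
    z_safe = np.where(valid, z, 1.0)      # avoid division by zero for culled points
    u = np.round(uvw[:, 0] / z_safe).astype(int)
    v = np.round(uvw[:, 1] / z_safe).astype(int)
    valid &= (u >= 0) & (u < W) & (v >= 0) & (v < H)
    scores = seg_probs[:, v[valid], u[valid]].T   # (M, C) class scores per point
    return np.concatenate([points[valid], scores], axis=1)
```

Points behind the camera or outside the image are dropped; the rest carry their pixel's class distribution as extra channels for the downstream lidar model.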
The full training is divided into two steps. First, train the perception modules:
python -m lav.train_full --perceive-only
Once it is done, update the following line in config.yaml:
lidar_model_dir: [TRAINED MODEL PATH]
python -m lav.train_full
Visualize the progress in the wandb project page lav_full.