LAV

Learning from All Vehicles
Dian Chen, Philipp Krähenbühl
CVPR 2022 (also arXiv 2203.11934)

This repo contains code for the paper Learning from All Vehicles.

It distills a model that performs joint perception, multi-modal prediction, and planning, and we hope it serves as a great starter kit for end-to-end autonomous driving research.

Reference

If you find our repo, dataset or paper useful, please cite us as

@inproceedings{chen2022lav,
  title={Learning from all vehicles},
  author={Chen, Dian and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={CVPR},
  year={2022}
}

Updates

  • We have slightly updated the code, optimized for leaderboard inference speed. The full code will be released soon.

Demo Video

Also check out our website!

Getting Started

  • To run CARLA and train the models, make sure your machine has at least a mid-range GPU.
  • Please follow INSTALL.md to setup the environment.

Training

We adopt an LBC-style staged privileged distillation framework: a privileged model is first trained from ground-truth state, and a sensor-based model is then trained to imitate it. Please refer to TRAINING.md for more details.
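
To make the two-stage idea concrete, below is a minimal, illustrative sketch of one distillation step. The class names, tensor shapes, and L1 waypoint loss are our own assumptions for illustration, not the repo's actual modules; see TRAINING.md for the real training scripts.

import torch
import torch.nn as nn

class PrivilegedTeacher(nn.Module):
    """Hypothetical stage-1 model: maps ground-truth BEV state to future waypoints."""
    def __init__(self, state_dim=64, num_waypoints=10):
        super().__init__()
        self.num_waypoints = num_waypoints
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, num_waypoints * 2),
        )

    def forward(self, bev_state):
        return self.net(bev_state).view(-1, self.num_waypoints, 2)

class SensorStudent(nn.Module):
    """Hypothetical stage-2 model: maps raw sensor features to the same waypoints."""
    def __init__(self, feat_dim=512, num_waypoints=10):
        super().__init__()
        self.num_waypoints = num_waypoints
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_waypoints * 2),
        )

    def forward(self, sensor_feat):
        return self.net(sensor_feat).view(-1, self.num_waypoints, 2)

def distill_step(teacher, student, bev_state, sensor_feat, optimizer):
    # Stage 2: the privileged teacher is frozen and only supervises the student.
    with torch.no_grad():
        target_wp = teacher(bev_state)
    pred_wp = student(sensor_feat)
    loss = nn.functional.l1_loss(pred_wp, target_wp)  # imitate teacher waypoints
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors, batch size 8.
teacher, student = PrivilegedTeacher(), SensorStudent()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
distill_step(teacher, student, torch.randn(8, 64), torch.randn(8, 512), optimizer)

The key design point is that only the student receives gradients: the teacher, having already learned to drive from privileged information, serves purely as a supervision source for sensor inputs.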

Evaluation

We additionally provide example trained weights in the weights folder if you would like to evaluate directly. They are trained on Town01, 03, 04, and 06. Make sure you launch CARLA with the -vulkan flag.

Note: these are example weights intended for quick-start purposes only. Submitting them directly to the leaderboard will not reproduce the 61 DS result. The full leaderboard code will be released later.

Inside the root LAV repo, run

ROUTES=[PATH TO ROUTES] ./leaderboard/scripts/run_evaluation.sh

Use ROUTES=assets/routes_lav_valid.xml to run our ablation routes, or ROUTES=leaderboard/data/routes_valid.xml for the validation routes provided by the leaderboard.
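
If you prefer to launch the evaluation from Python, for example to sweep several route files, a minimal sketch is below. It is not part of the repo; it assumes you run it from the repo root and simply wraps the same shell script.

import os
import subprocess

# Hypothetical wrapper around the evaluation script; ROUTES is passed via the
# environment, exactly as in the shell invocation above.
for routes in ["assets/routes_lav_valid.xml", "leaderboard/data/routes_valid.xml"]:
    env = dict(os.environ, ROUTES=routes)
    subprocess.run(["./leaderboard/scripts/run_evaluation.sh"], env=env, check=True)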

Dataset

We also release our LAV dataset. Download the dataset HERE.

See TRAINING.md for more details.

Acknowledgements

We thank Tianwei Yin for the pillar generation code. The ERFNet code is taken from the official ERFNet repo.

License

This repo is released under the Apache 2.0 License (please refer to the LICENSE file for details).
