Justin Lidard¹, Haimin Hu¹, Asher Hancock², Zixu Zhang², Albert Gimó Contreras, Vikash Modi, Jonathan DeCastro, Deepak Gopinath, Guy Rosman, Naomi Ehrich Leonard, María Santos, Jaime Fernández Fisac

¹,² Equal contribution.
Published as a conference paper at RSS 2024.
This repository implements KL Game, an algorithm for solving non-cooperative dynamic games with Kullback-Leibler (KL) regularization with respect to a general, stochastic, and possibly multi-modal reference policy. The repository is primarily developed and maintained by Haimin Hu, Justin Lidard, and Zixu Zhang.
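For intuition, here is a minimal single-step sketch of the KL-regularized update underlying KL Game, for one player with a discrete action set: minimizing expected cost plus λ·KL(π‖ρ) has the closed-form softmax solution π(u) ∝ ρ(u)·exp(−Q(u)/λ). All names in the snippet (`q`, `rho`, `lam`) are illustrative stand-ins, not the repository's API.

```python
import numpy as np

def kl_blended_policy(q, rho, lam):
    """Minimize E_pi[q] + lam * KL(pi || rho) over a discrete action set.

    Closed form: pi(u) ∝ rho(u) * exp(-q(u) / lam). As lam -> 0 the policy
    becomes cost-greedy; as lam -> inf it follows the reference rho.
    """
    logits = np.log(rho) - q / lam
    logits -= logits.max()          # subtract max for numerical stability
    pi = np.exp(logits)
    return pi / pi.sum()

q = np.array([1.0, 0.2, 3.0])       # per-action cost-to-go for one player
rho = np.array([0.1, 0.3, 0.6])     # stochastic reference policy
print(kl_blended_policy(q, rho, lam=0.5))   # blends task cost and reference
```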
Click to watch our spotlight video:
We provide a car racing example in the Notebook to showcase the policy blending feature of KL Game.
This Notebook comprises three sections, each dedicated to a closed-loop simulation with a different configuration: the basic KL Game, the multi-modal KL Game, and KL Game with a multi-modal reference policy.
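As a rough illustration of what such a closed-loop simulation looks like, the hedged sketch below rolls out a KL-blended policy under a multi-modal (Gaussian-mixture) reference on a toy scalar system. The dynamics, cost, and action grid are stand-ins chosen for brevity, not the Notebook's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)
actions = np.linspace(-1.0, 1.0, 21)       # discretized control grid
goal, dt, lam = 5.0, 0.1, 1.0

def mixture_reference(a, weights, means, std=0.3):
    """Multi-modal reference: a Gaussian mixture over the action grid."""
    rho = sum(w * np.exp(-0.5 * ((a - m) / std) ** 2)
              for w, m in zip(weights, means))
    return rho / rho.sum()

x = 0.0                                    # toy scalar state (position)
for t in range(50):
    q = (x + dt * actions - goal) ** 2 + 0.1 * actions ** 2  # one-step lookahead cost
    rho = mixture_reference(actions, weights=[0.5, 0.5], means=[-0.6, 0.6])
    logits = np.log(rho) - q / lam         # pi(u) ∝ rho(u) * exp(-q(u)/lam)
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()
    u = rng.choice(actions, p=pi)          # sample from the blended policy
    x += dt * u                            # step the toy dynamics
print("final position:", x)
```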
Distributed under the BSD 3-Clause License. See LICENSE for more information.
- Justin Lidard - @justinlidard - jlidard@princeton.edu
- Haimin Hu - @HaiminHu - haiminh@princeton.edu
This research has been supported in part by an NSF Graduate Research Fellowship and in part by the Toyota Research Institute (TRI). This work reflects solely the opinions and conclusions of its authors, and not those of TRI or any other Toyota entity.
If you find this repository helpful, please consider citing our paper:
@inproceedings{lidard2024blending,
title={Blending Data-Driven Priors in Dynamic Games},
author={Lidard, Justin and Hu, Haimin and Hancock, Asher and Zhang, Zixu and Contreras, Albert Gim{\'o} and Modi, Vikash and DeCastro, Jonathan and Gopinath, Deepak and Rosman, Guy and Leonard, Naomi Ehrich and Santos, Mar{\'i}a and Fisac, Jaime Fern{\'a}ndez},
booktitle={Proceedings of Robotics: Science and Systems},
year={2024}
}