Accompanying code for the paper "Optimal distributed control with stability guarantees by training a network of neural closed-loop maps" by Danilo Saccani, Leonardo Massai, Luca Furieri, and Giancarlo Ferrari Trecate.
For inquiries about the code, please contact:
- Danilo Saccani: danilo.saccani@epfl.ch
- Leonardo Massai: l.massai@epfl.ch
- main.py: Entry point for training the distributed operator using neural networks.
- utils.py: Contains utility functions and main parameters for the codebase.
- models.py: Defines the models, including the system's dynamical model, the Recurrent Equilibrium Network (REN) model, and the interconnection model of RENs.
- plots.py: Includes functions for plotting and visualizing training and evaluation results.
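To give a feel for the REN model mentioned above, here is a minimal, illustrative sketch of a single REN state update in the standard implicit-layer form, solved by naive fixed-point iteration. All names and sizes are hypothetical and do not reflect the actual classes in models.py; a real REN also parameterizes the weights so that contractivity is guaranteed by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, n_w = 4, 2, 6   # state, input, and equilibrium-layer sizes (illustrative)

# Random weights; in a real REN these are parameterized to ensure contraction.
A   = rng.normal(scale=0.3, size=(n_x, n_x))
B1  = rng.normal(scale=0.3, size=(n_x, n_w))
B2  = rng.normal(scale=0.3, size=(n_x, n_u))
C1  = rng.normal(scale=0.3, size=(n_w, n_x))
D11 = rng.normal(scale=0.1, size=(n_w, n_w))  # small norm keeps the fixed point well-posed
D12 = rng.normal(scale=0.3, size=(n_w, n_u))

def ren_step(x, u, iters=50):
    """One REN state update: solve the equilibrium layer for w, then advance x."""
    w = np.zeros(n_w)
    for _ in range(iters):                      # fixed-point iteration on the implicit layer
        w = np.tanh(D11 @ w + C1 @ x + D12 @ u)
    return A @ x + B1 @ w + B2 @ u              # next state

x = ren_step(np.zeros(n_x), np.ones(n_u))
print(x.shape)  # (4,)
```

In the distributed setting of the paper, several such RENs are interconnected and trained jointly, which is what the interconnection model in models.py captures.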
- Dependencies listed in requirements.txt
- Clone the repository:
git clone https://github.com/DecodEPFL/Distributed_neurSLS.git
- Navigate to the cloned directory:
cd Distributed_neurSLS
- Install the required dependencies. We recommend using a virtual environment:
python -m venv venv
source venv/bin/activate # Activate the virtual environment (Linux/macOS)
pip install -r requirements.txt
- Adjust parameters in utils.py as needed.
- Run the main script to start training:
python main.py
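Conceptually, the training run launched above simulates the closed loop over a horizon, accumulates a trajectory cost, and updates the controller parameters by gradient descent. The toy sketch below illustrates only this workflow on a scalar system; all names are hypothetical, and the real parameters and models live in utils.py and models.py.

```python
import numpy as np

horizon, lr, epochs = 20, 0.1, 200
theta = np.array([0.0])            # stand-in for the neural controller parameters

def rollout_cost(theta):
    """Toy closed loop: scalar plant with feedback gain theta, quadratic stage cost."""
    x, cost = 1.0, 0.0
    for _ in range(horizon):
        u = -theta[0] * x          # controller action
        x = 0.9 * x + 0.5 * u      # plant update
        cost += x**2 + 0.1 * u**2  # quadratic stage cost
    return cost

eps = 1e-5
for _ in range(epochs):            # finite-difference gradient descent on the rollout cost
    grad = (rollout_cost(theta + eps) - rollout_cost(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(theta, rollout_cost(theta))
```

In the actual code, the controller is a network of RENs trained by backpropagation through the rollout, with closed-loop stability guaranteed by the parameterization rather than checked after training.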
The following GIFs show the trajectories of the vehicles before and after training of a distributed neurSLS controller. The agents must coordinate to pass through a narrow passage while maintaining a rectangular formation, starting from random initial positions marked with ○.
This work is licensed under a Creative Commons Attribution 4.0 International License.