PARCO


Code repository for "PARCO: Learning Parallel Autoregressive Policies for Efficient Multi-Agent Combinatorial Optimization"

Figure: Autoregressive (AR) policy vs. Parallel Autoregressive (PAR) decoding.

Figure: The PARCO model.

🚀 Usage

Installation

pip install -e .

Note: we recommend installing inside a virtual environment. For example, create and activate a Conda environment first:

conda create -n parco python
conda activate parco
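
To verify the installation, you can try importing the package (a minimal smoke test, assuming the installed package is named parco like the repository):

python -c "import parco; print('PARCO imported successfully')"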

Data generation

You can generate data using the generate_data.py script, which will automatically generate all the data we use for training and testing:

python generate_data.py
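
To sanity-check the generated data, you can inspect one of the output files. The path and .npz format below are assumptions based on typical RL4CO datasets; check generate_data.py for the actual layout:

import numpy as np

# Hypothetical path: adjust to wherever generate_data.py writes its files
data = np.load("data/hcvrp/test.npz")
for key, arr in data.items():
    print(key, arr.shape, arr.dtype)  # e.g. node coordinates, demands, capacities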

Quickstart Notebooks

We provide example notebooks for each problem, each of which can be trained in under two minutes on consumer hardware. You can find them in the examples/ folder; a rough sketch of the workflow they follow is shown below.
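
As orientation, the notebooks follow the usual RL4CO training pattern, roughly as in this minimal sketch; the parco import paths and class names here are assumptions, so check the notebooks for the actual entry points:

# Sketch only: HCVRPEnv and PARCOModule are hypothetical names, not the
# verified public API -- see the notebooks in examples/ for the real imports.
from rl4co.utils.trainer import RL4COTrainer  # RL4CO's wrapper around the Lightning Trainer

from parco.envs import HCVRPEnv       # hypothetical environment import
from parco.models import PARCOModule  # hypothetical module (policy + RL algorithm)

env = HCVRPEnv()                      # multi-agent HCVRP environment
model = PARCOModule(env)              # Lightning module wrapping the PARCO policy
trainer = RL4COTrainer(max_epochs=1, devices=1)
trainer.fit(model)                    # short training run, as in the notebooks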

Train your own model

You can train your own model using the train.py script. For example, to train a model for the HCVRP problem, run:

python train.py experiment=hcvrp

You can change the experiment parameter to omdcpdp or ffsp to train a model for the OMDCPDP or FFSP problem, respectively; a couple of further override examples are sketched below.
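
Since train.py appears to take Hydra-style overrides, other options can presumably be set from the command line in the same way. Apart from experiment, the keys below are assumptions based on typical RL4CO configs, so check the config files for the real names:

python train.py experiment=omdcpdp
python train.py experiment=ffsp seed=1234               # seed is a hypothetical key
python train.py experiment=hcvrp trainer.max_epochs=10  # trainer.max_epochs is a hypothetical key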

Note on legacy FFSP code: our initial FFSP implementation was not yet integrated into RL4CO, so we keep it in the parco/tasks/ffsp_old folder; you can still use it from there.

Testing

You can run the test.py script to evaluate a trained model, e.g. with:

python test.py --problem hcvrp --decode_type greedy --batch_size 128 --sample_size 1
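
For sampling-based evaluation, you can presumably switch the decode type and increase the sample size; the values below are illustrative, not tested defaults:

python test.py --problem hcvrp --decode_type sampling --batch_size 128 --sample_size 1280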

🤩 Citation

If you find PARCO valuable for your research or applied projects, please consider citing:

@article{berto2024parco,
    title={{PARCO: Learning Parallel Autoregressive Policies for Efficient Multi-Agent Combinatorial Optimization}},
    author={Federico Berto and Chuanbo Hua and Laurin Luttmann and Jiwoo Son and Junyoung Park and Kyuree Ahn and Changhyun Kwon and Lin Xie and Jinkyoo Park},
    year={2024},
    journal={arXiv preprint arXiv:2409.03811},
    note={\url{https://github.com/ai4co/parco}}
}

We would also be happy if you cite the RL4CO framework, which we used to build PARCO:

@article{berto2024rl4co,
    title={{RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark}},
    author={Federico Berto and Chuanbo Hua and Junyoung Park and Laurin Luttmann and Yining Ma and Fanchen Bu and Jiarui Wang and Haoran Ye and Minsu Kim and Sanghyeok Choi and Nayeli Gast Zepeda and Andr\'e Hottung and Jianan Zhou and Jieyi Bi and Yu Hu and Fei Liu and Hyeonah Kim and Jiwoo Son and Haeyeon Kim and Davide Angioni and Wouter Kool and Zhiguang Cao and Jie Zhang and Kijung Shin and Cathy Wu and Sungsoo Ahn and Guojie Song and Changhyun Kwon and Lin Xie and Jinkyoo Park},
    year={2024},
    journal={arXiv preprint arXiv:2306.17100},
    note={\url{https://github.com/ai4co/rl4co}}
}