Official implementation of Fair Resource Allocation in Multi-Task Learning.
The performance is evaluated under three scenarios:
- Image-level Classification. The CelebA dataset contains 40 tasks.
- Regression. The QM9 dataset contains 11 tasks and can be downloaded automatically from PyTorch Geometric.
- Dense Prediction. The NYU-v2 dataset contains 3 tasks and the Cityscapes dataset (UPDATE: the small version) contains 2 tasks.
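For instance, the QM9 download is triggered on first use. A minimal sketch, assuming the `experiments/quantum_chemistry/dataset/` layout described below (the `root` path is illustrative):

```python
from torch_geometric.datasets import QM9

# PyTorch Geometric downloads and preprocesses QM9 (~130k molecules)
# on first use; the root path here is illustrative.
dataset = QM9(root="experiments/quantum_chemistry/dataset")
print(len(dataset), dataset[0])
```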
Following Nash-MTL and FAMO, we implement our method with the MTL library.
First, create the virtual environment:

```bash
conda create -n mtl python=3.9.7
conda activate mtl
python -m pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
```
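Optionally, verify that the CUDA build was picked up (a quick check, not part of the repo's instructions):

```python
import torch

print(torch.__version__)          # expect 1.12.1+cu113
print(torch.cuda.is_available())  # True if the CUDA 11.3 wheel matches your driver
```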
Then, install the repo:

```bash
git clone https://github.com/OptMN-Lab/fairgrad.git
cd fairgrad
python -m pip install -e .
```
By default, each dataset should be placed under the `experiments/EXP_NAME/dataset/` folder, where `EXP_NAME` is chosen from {celeba, cityscapes, nyuv2, quantum_chemistry}. To run an experiment:
```bash
cd experiments/EXP_NAME
sh run.sh
```
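For intuition, the method chooses task weights via an alpha-fair allocation over per-task gradients. The sketch below is our simplified reading, not the repo's code: it solves the elementwise stationarity condition `(G @ G.T) @ w = w ** (-1/alpha)` with a generic least-squares solver, and the function name, solver choice, and default `alpha` are all illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def alpha_fair_weights(G, alpha=2.0, eps=1e-8):
    """Illustrative only -- not the repo's API.

    G: (num_tasks, dim) array whose rows are per-task gradients g_i.
    Solves (G @ G.T) @ w = w ** (-1/alpha) elementwise for weights w > 0,
    the stationarity condition of an alpha-fair utility over update
    directions of the form d = G.T @ w.
    """
    gram = G @ G.T  # pairwise gradient inner products, (num_tasks, num_tasks)

    def residual(w):
        return gram @ w - np.maximum(w, eps) ** (-1.0 / alpha)

    sol = least_squares(residual, x0=np.ones(G.shape[0]), bounds=(eps, np.inf))
    return sol.x

# Toy usage: 3 tasks sharing 5 parameters; larger alpha pushes the
# allocation toward max-min fairness across tasks.
rng = np.random.default_rng(0)
G = rng.normal(size=(3, 5))
w = alpha_fair_weights(G)
d = G.T @ w  # combined update direction for the shared parameters
```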
Cityscapes, NYU-v2, QM9. Please refer to Tables 2, 3, and 8 of our paper for more details.
CelebA. For detailed results of our method, please refer to issue1; for single-task results and other baselines (FAMO, CAGrad, etc.), please refer to issue2.
The experiments are conducted on the Meta-World benchmark. To run the experiments on MT10 and MT50 (the instructions below are partly borrowed from CAGrad):
- Create a Python 3.6 virtual environment.
- Install the MTRL codebase.
- Install the Meta-World environment at commit id `d9a75c451a15b0ba39d8b7a8b6d18d883b8655d8` (a quick sanity check is sketched after this list).
- Copy the `mtrl_files` folder to the `mtrl` folder in the installed mtrl repo, then run:
  ```bash
  cd PATH_TO_MTRL/mtrl_files/ && chmod +x mv.sh && ./mv.sh
  ```
- Follow `run.sh` to run the experiments.
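To verify the Meta-World install, a quick sanity check (a sketch assuming the v2 benchmark API is available at the pinned commit):

```python
import metaworld

# MT10 exposes a fixed set of 10 manipulation tasks (MT50 has 50).
mt10 = metaworld.MT10()
print(len(mt10.train_classes))  # expect 10
```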