This is the code for the project of the course Foundations of Reinforcement Learning (FoRL), Spring Semester 2023. The major contributions of this project are:
- A full implementation of the AMD algorithm [original paper] for arbitrary environments, using `ray==2.3.1`. Migrating to a higher version of `ray` requires additional effort.
- Two RL environments, Wolfpack and Gathering. [original paper]
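Since the implementation is pinned to `ray==2.3.1`, a quick version guard before launching experiments can save debugging time. The following sketch is not part of the repo, just an illustration:

```shell
# Warn if the installed ray does not match the version this code was written for.
required="2.3.1"
installed="$(python3 -c 'import ray; print(ray.__version__)' 2>/dev/null || echo "not installed")"
if [ "$installed" = "$required" ]; then
    echo "ray version OK ($installed)"
else
    echo "warning: expected ray==$required, found: $installed" >&2
fi
```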
For further information, please refer to our report.
I use Python 3.10.4 and CUDA 11.8 as the standard versions; these are the defaults on Euler. These versions can be modified depending on the repo being migrated.
On Euler, you need to load the required modules each time:

```shell
module load gcc/8.2.0 python_gpu/3.10.4 cuda/11.8.0 git-lfs/2.3.0 git/2.31.1 eth_proxy
```
The `gcc/8.2.0` module is necessary: only after it is loaded can the Python and CUDA modules be found. You can search for the version of python and cuda you want with `module avail ${package name: cuda, python, etc}`, e.g. `module avail cuda`.
Create a virtual environment with `venv`:

```shell
py_venv_dir="${SCRATCH}/.python_venv"
python -m venv ${py_venv_dir}/forl-proj --upgrade-deps

# to install python packages, run
${SCRATCH}/.python_venv/forl-proj/bin/pip install -r requirements.txt --cache-dir ${SCRATCH}/pip_cache

# activate
source "${SCRATCH}/.python_venv/forl-proj/bin/activate"

# deactivate
deactivate
```
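To confirm that a venv was created correctly, you can check that its interpreter resolves inside the venv directory. A self-contained sketch with a throwaway directory (the project's actual venv lives under `${SCRATCH}/.python_venv`):

```shell
# Create a throwaway venv and inspect its interpreter.
demo_dir="$(mktemp -d)"
python3 -m venv "$demo_dir/demo-venv"

# sys.prefix of the venv's interpreter points at the venv directory,
# confirming that python inside the venv is isolated from the system one.
"$demo_dir/demo-venv/bin/python" -c 'import sys; print(sys.prefix)'

rm -rf "$demo_dir"
```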
On a local machine, to install this exact Python version I use conda (you can also use venv):

```shell
conda create --name=forl-proj python=3.10

# activate
conda activate forl-proj

# deactivate
conda deactivate
```
You can first edit the resources requested in the `start-ray-nodes.sbatch` file, and submit jobs by:

```shell
# submit a command
echo "$(cat start-ray-nodes.sbatch; echo command_to_submit_jobs )" > temp && sbatch < temp && rm temp

# submit a command from a file
echo "$(cat start-ray-nodes.sbatch file_of_job )" > temp && sbatch < temp && rm temp
```

For example:

```shell
echo "$(cat start-ray-nodes.sbatch exp_scripts/wolfpack/amd-qadj-delay_model-conv_assump-neural.bash)" > temp && sbatch < temp && rm temp
```
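These submission commands work because `cat` simply concatenates the Slurm header script with the job body, and the combined text is then piped to `sbatch`. The pattern can be seen with toy files (the file names here are hypothetical):

```shell
# Toy stand-ins for the sbatch header and the job script.
printf '#!/bin/bash\n#SBATCH --ntasks=1\n' > toy-header.sbatch
printf 'echo "running my job"\n' > toy-job.bash

# cat joins the header and the job body; the result is the
# complete script that `sbatch < temp` would submit.
echo "$(cat toy-header.sbatch toy-job.bash)" > temp
cat temp

rm toy-header.sbatch toy-job.bash temp
```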
Some useful links are collected here. Most importantly, there is an interactive website that can generate sbatch scripts. See the following links:
- https://docs.ray.io/en/latest/cluster/vms/user-guides/community/slurm.html#slurm-network-ray
- https://docs.ray.io/en/latest/cluster/vms/user-guides/community/slurm-basic.html
- https://github.com/NERSC/slurm-ray-cluster/blob/master/submit-ray-cluster.sbatch
- https://github.com/pengzhenghao/use-ray-with-slurm
- https://github.com/klieret/ray-tune-slurm-demo