Source code for reproducing the simulation results of the proposed algorithm from the paper "Automating Reinforcement Learning with Example-based Resets".
The instructions below were tested on Ubuntu 18.04, but should work on other Linux distros as well.
Download the source code to the current user's home directory. The contents of this folder should be located under ~/autoreset_rl/.
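For example, if the code was downloaded as an archive (the archive name below is hypothetical), it can be placed and verified as follows:
unzip autoreset_rl.zip -d ~/
ls ~/autoreset_rl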
The conda package manager is required for installing the Python dependencies. Follow the link below to install conda:
https://docs.conda.io/projects/conda/en/latest/user-guide/install/
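For example, a Miniconda installation (one of the options covered at the link above) typically looks like:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh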
Follow the link below to set up the prerequisites for mujoco-py:
https://github.com/openai/mujoco-py#install-mujoco
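As a rough sketch only (the required MuJoCo release depends on the mujoco-py version pinned in conda_env.yml, so treat the filename and paths below as assumptions), the setup typically amounts to extracting MuJoCo into ~/.mujoco and exposing its libraries:
mkdir -p ~/.mujoco
tar -xzf mujoco210-linux-x86_64.tar.gz -C ~/.mujoco
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/.mujoco/mujoco210/bin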
Create the conda environment from the provided environment file:
cd ~/autoreset_rl
conda env create --file ./conda_env.yml
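To confirm that the environment was created, list the installed conda environments:
conda env list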
If you have any issues related to MuJoCo or OpenAI Gym when setting up the conda environment, please refer to the following links:
https://github.com/openai/mujoco-py#troubleshooting
https://github.com/openai/gym
Activate the conda environment and check that python resolves to the environment's interpreter:
cd ~/autoreset_rl
conda activate autoreset_rl
which python
Minimal (no logging) terminal commands to run the code:
python main.py --config_dir ./experiment_configs/cliff-cheetah.json
python main.py --config_dir ./experiment_configs/cliff-walker.json
python main.py --config_dir ./experiment_configs/peg-insertion_insert.json
python main.py --config_dir ./experiment_configs/peg-insertion_remove.json
Additional arguments are available (--logging, --record, --evaluation). Terminal command to view arguments:
python main.py --help
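For example, assuming the additional arguments are simple on/off switches (check the --help output for their exact usage), a run with logging and evaluation enabled might look like:
python main.py --config_dir ./experiment_configs/cliff-cheetah.json --logging --evaluation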
If the contents of this folder are not under ~/autoreset_rl/, please modify the paths in the experiment config files (JSON) accordingly.
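As a hypothetical sketch (assuming the configs contain absolute paths pointing at the old location; inspect the JSON files first to confirm what needs to change), the paths could be updated in one pass with sed:
sed -i 's|/home/olduser/autoreset_rl|/new/location/autoreset_rl|g' ./experiment_configs/*.json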
To cite this work:
@article{kim2022automating,
title={Automating Reinforcement Learning With Example-Based Resets},
author={Kim, Jigang and Park, J. hyeon and Cho, Daesol and Kim, H. Jin},
journal={IEEE Robotics and Automation Letters},
volume={7},
number={3},
pages={6606-6613},
year={2022},
publisher={IEEE}
}