To run experiments for the paper "Sub-policy Adaptation for Hierarchical Reinforcement Learning", see the instructions in sandbox/finetuning/README.md.
If you use our code for academic research, you are highly encouraged to cite the following paper:
- Alexander C. Li*, Carlos Florensa*, Ignasi Clavera, Pieter Abbeel. "Sub-policy Adaptation for Hierarchical Reinforcement Learning." Proceedings of the Eighth International Conference on Learning Representations (ICLR), 2020.
We built on the original rllab code as well as the SNN4HRL code developed by Carlos Florensa (UC Berkeley / Covariant). Alexander Li (UC Berkeley / CMU) was the main developer on this project.
rllab is a framework for developing and evaluating reinforcement learning algorithms. It includes a wide range of continuous control tasks plus implementations of the following algorithms (a usage sketch follows the list):
- REINFORCE
- Truncated Natural Policy Gradient
- Reward-Weighted Regression
- Relative Entropy Policy Search
- Trust Region Policy Optimization
- Cross Entropy Method
- Covariance Matrix Adaptation Evolution Strategy
- Deep Deterministic Policy Gradient
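As a minimal sketch of how these pieces fit together (module paths follow the standard rllab layout; hyperparameters are illustrative), training TRPO on one of the built-in continuous control tasks looks roughly like this:

```python
from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.box2d.cartpole_env import CartpoleEnv
from rllab.envs.normalized_env import normalize
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy

# Normalize observations/actions of a built-in continuous control task.
env = normalize(CartpoleEnv())

# Gaussian MLP policy with two hidden layers of 32 units each.
policy = GaussianMLPPolicy(env_spec=env.spec, hidden_sizes=(32, 32))

# Linear feature baseline for variance reduction.
baseline = LinearFeatureBaseline(env_spec=env.spec)

algo = TRPO(
    env=env,
    policy=policy,
    baseline=baseline,
    batch_size=4000,      # samples collected per iteration
    max_path_length=100,  # horizon of each rollout
    n_itr=40,             # number of TRPO iterations
    discount=0.99,
    step_size=0.01,       # KL constraint on each policy update
)
algo.train()
```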
rllab is fully compatible with OpenAI Gym; see the documentation for instructions and examples, and the sketch below for a minimal wrapping.
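For instance, a Gym environment can be wrapped into rllab's Env interface via GymEnv (the environment id below is just an example):

```python
from rllab.envs.gym_env import GymEnv
from rllab.envs.normalized_env import normalize

# Wrap a Gym environment so any rllab algorithm can train on it;
# "Pendulum-v0" is an illustrative id, and video recording is disabled
# here since no log directory is configured.
env = normalize(GymEnv("Pendulum-v0", record_video=False))
```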
rllab only officially supports Python 3.5+. For an older snapshot of rllab based on Python 2, please use the py2 branch.
rllab comes with support for running reinforcement learning experiments on an EC2 cluster, and with tools for visualizing the results; see the documentation for details, and the sketch below for how an experiment is launched.
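A rough sketch of launching an experiment through rllab's instrumentation layer (the run_task body is a placeholder; mode="ec2" additionally requires AWS credentials configured as described in the documentation):

```python
from rllab.misc.instrument import run_experiment_lite


def run_task(*_):
    # Build env/policy/baseline/algo as in the example above
    # and call algo.train() here.
    pass


run_experiment_lite(
    run_task,
    n_parallel=4,         # number of parallel sampler workers
    seed=1,
    snapshot_mode="last", # only keep the final snapshot
    mode="local",         # switch to "ec2" to launch on an EC2 cluster
)
```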
The main modules use Theano as the underlying framework, and we have support for TensorFlow under sandbox/rocky/tf; a minimal sketch of the TensorFlow variant follows.
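Assuming the standard sandbox layout, the TensorFlow variant mirrors the Theano API, with a TfEnv wrapper and a variable-scope name required by the TF policies:

```python
from sandbox.rocky.tf.algos.trpo import TRPO
from sandbox.rocky.tf.envs.base import TfEnv
from sandbox.rocky.tf.policies.gaussian_mlp_policy import GaussianMLPPolicy
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.box2d.cartpole_env import CartpoleEnv
from rllab.envs.normalized_env import normalize

# TfEnv adapts an rllab environment to the TensorFlow-based modules.
env = TfEnv(normalize(CartpoleEnv()))

# TF policies take a variable-scope name in addition to env_spec.
policy = GaussianMLPPolicy(name="policy", env_spec=env.spec,
                           hidden_sizes=(32, 32))
baseline = LinearFeatureBaseline(env_spec=env.spec)

algo = TRPO(env=env, policy=policy, baseline=baseline,
            batch_size=4000, max_path_length=100, n_itr=40)
algo.train()
```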
Documentation is available online: https://rllab.readthedocs.org/en/latest/.
If you use rllab for academic research, you are highly encouraged to cite the following paper:
- Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel. "Benchmarking Deep Reinforcement Learning for Continuous Control". Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.
rllab was originally developed by Rocky Duan (UC Berkeley / OpenAI), Peter Chen (UC Berkeley), Rein Houthooft (UC Berkeley / OpenAI), John Schulman (UC Berkeley / OpenAI), and Pieter Abbeel (UC Berkeley / OpenAI). The library continues to be jointly developed by people at OpenAI and UC Berkeley.
Slides presented at ICML 2016: https://www.dropbox.com/s/rqtpp1jv2jtzxeg/ICML2016_benchmarking_slides.pdf?dl=0