Reinforcement Learning with Model-Agnostic Meta-Learning (MAML)

[Animation: HalfCheetahDir environment]

Implementation of Model-Agnostic Meta-Learning (MAML) applied to reinforcement learning problems in PyTorch. This repository includes the environments introduced in (Duan et al., 2016; Finn et al., 2017): multi-armed bandits, tabular MDPs, continuous control with MuJoCo, and a 2D navigation task.
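At its core, MAML learns a parameter initialization that adapts to a new task in a few gradient steps. The sketch below illustrates this inner-loop/outer-loop structure on a toy regression problem; it is not this repository's RL pipeline (which optimizes policy-gradient objectives rather than a regression loss), and every name in it is illustrative.

import torch
import torch.nn.functional as F

def forward(params, x):
    # Tiny 2-layer MLP applied functionally, so adapted parameters
    # remain differentiable with respect to the meta-parameters.
    w1, b1, w2, b2 = params
    return torch.tanh(x @ w1 + b1) @ w2 + b2

# Meta-parameters: the shared initialization MAML learns.
params = [torch.randn(1, 40, requires_grad=True),
          torch.zeros(40, requires_grad=True),
          torch.randn(40, 1, requires_grad=True),
          torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.1

for step in range(1000):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):  # meta-batch of tasks: sine waves with random phase
        phase = torch.rand(1) * 3.14
        x_tr, x_val = torch.randn(10, 1), torch.randn(10, 1)
        y_tr, y_val = torch.sin(x_tr + phase), torch.sin(x_val + phase)

        # Inner loop: one gradient step on the task's training data.
        # create_graph=True keeps this update differentiable, so the
        # meta-gradient can flow back through it (second-order MAML).
        inner_loss = F.mse_loss(forward(params, x_tr), y_tr)
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]

        # Outer loop: evaluate the adapted parameters on held-out data.
        meta_loss = meta_loss + F.mse_loss(forward(adapted, x_val), y_val)

    meta_loss.backward()  # backpropagates through the inner update
    meta_opt.step()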

Getting started

To avoid conflicts with your existing Python setup, and to keep this project self-contained, we suggest working in a virtual environment created with virtualenv. To install virtualenv:

pip install --upgrade virtualenv

Create a virtual environment, activate it, and install the dependencies listed in requirements.txt:

virtualenv venv
source venv/bin/activate
pip install -r requirements.txt

Requirements

  • Python 3.5 or above
  • PyTorch 1.3
  • Gym 0.15
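You can quickly check that the expected versions are installed (a convenience snippet, not part of the repository):

import sys
import torch
import gym

# Print the versions the pinned requirements should provide.
print("Python :", sys.version.split()[0])   # expect 3.5 or above
print("PyTorch:", torch.__version__)        # expect 1.3.x
print("Gym    :", gym.__version__)          # expect 0.15.x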

Usage

Training

You can use the train.py script to run reinforcement learning experiments with MAML. Note that by default, train.py logs quantities such as the returns during meta-training, but does not save them. For example, to run the script on HalfCheetah-Vel:

python train.py --config configs/maml/halfcheetah-vel.yaml --output-folder maml-halfcheetah-vel --seed 1 --num-workers 8
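Training writes its artifacts to the folder given by --output-folder, including the resolved configuration (config.json) and the meta-trained policy weights (policy.th), both referenced by the testing command below. A sketch of how you might inspect them with the standard library and PyTorch (the exact structure of the saved objects is repository-specific and not assumed here):

import json
import torch

# File names match the training command above.
with open("maml-halfcheetah-vel/config.json") as f:
    config = json.load(f)  # resolved experiment configuration
print(config)

policy_state = torch.load("maml-halfcheetah-vel/policy.th")
print(type(policy_state))  # e.g. a state_dict-style mapping of tensors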

Testing

Once you have meta-trained the policy, you can test it on the same environment using test.py:

python test.py --config maml-halfcheetah-vel/config.json --policy maml-halfcheetah-vel/policy.th --output maml-halfcheetah-vel/results.npz --meta-batch-size 20 --num-batches 10  --num-workers 8
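The --output flag writes the evaluation results to a NumPy .npz archive, which you can load and inspect with numpy (the names of the stored arrays are repository-specific and not assumed here):

import numpy as np

# Load the archive produced by test.py.
results = np.load("maml-halfcheetah-vel/results.npz")
print(results.files)  # names of the arrays stored in the archive
for name in results.files:
    print(name, results[name].shape)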

References

This project is, for the most part, a reproduction of the original implementation cbfinn/maml_rl in PyTorch. These experiments are based on the paper:

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. International Conference on Machine Learning (ICML), 2017 [arXiv]

If you want to cite this paper:

@inproceedings{finn17maml,
  author    = {Chelsea Finn and Pieter Abbeel and Sergey Levine},
  title     = {{Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks}},
  booktitle = {International Conference on Machine Learning (ICML)},
  year      = {2017},
  url       = {http://arxiv.org/abs/1703.03400}
}

If you want to cite this implementation:

@misc{deleu2018mamlrl,
  author = {Tristan Deleu},
  title  = {{Model-Agnostic Meta-Learning for Reinforcement Learning in PyTorch}},
  note   = {Available at: https://github.com/tristandeleu/pytorch-maml-rl},
  year   = {2018}
}
