Wenlong Huang1, Pieter Abbeel1, Deepak Pathak*2, Igor Mordatch*3 (*equal advising)
1University of California, Berkeley, 2Carnegie Mellon University, 3Google Brain
This is the official demo code for our Language Models as Zero-Shot Planners paper. The code demonstrates how large language models, such as GPT-3 and Codex, can generate action plans for complex human activities (e.g. "make breakfast"), even without any further training. It works with any language model available through the OpenAI API or Hugging Face Transformers via a common interface.
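As a rough illustration of the underlying idea (not the repo's actual interface), the sketch below prompts a GPT-3 model with one in-context example plan followed by a query task and lets it complete the remaining steps. It assumes the 2022-era `openai` Python package; the example plan, engine name, and sampling values are illustrative placeholders:

```python
# Minimal sketch of zero-shot planning with GPT-3 (illustrative, not the repo's interface).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# One in-context example plan followed by the query task.
# The example steps below are placeholders, not entries from available_examples.json.
prompt = (
    "Task: Throw away paper\n"
    "Step 1: Walk to home office\n"
    "Step 2: Walk to desk\n"
    "Step 3: Find paper\n"
    "Step 4: Grab paper\n"
    "Step 5: Walk to trash can\n"
    "Step 6: Put paper on trash can\n"
    "\n"
    "Task: Make breakfast\n"
    "Step 1:"
)

response = openai.Completion.create(
    engine="davinci",   # any GPT-3/Codex engine; the choice and hyperparameters are illustrative
    prompt=prompt,
    max_tokens=128,
    temperature=0.3,
    stop="\n\n",        # stop once the generated plan ends
)
print("Step 1:" + response["choices"][0]["text"])
```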
If you find this work useful in your research, please cite using the following BibTeX:
```bibtex
@article{huang2022language,
  title={Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents},
  author={Huang, Wenlong and Abbeel, Pieter and Pathak, Deepak and Mordatch, Igor},
  journal={arXiv preprint arXiv:2201.07207},
  year={2022}
}
```
## Local Setup (or Open in Colab)
- Python=3.6.13
- CUDA=11.3
```bash
git clone https://github.com/huangwl18/language-planner.git
cd language-planner/
conda create --name language-planner-env python=3.6.13
conda activate language-planner-env
pip install --upgrade pip
pip install -r requirements.txt
```
See `demo.ipynb` (or open it in Colab) for a complete walk-through of our method. Feel free to experiment with any household tasks that you come up with (or tasks beyond the household domain, if you provide the necessary actions in `available_actions.json`)!
Note:
- We observe that larger language models give the best results. If you cannot run Hugging Face Transformers models locally or on Google Colab due to memory constraints, consider registering an OpenAI API account and using GPT-3 or Codex (as of 01/2022, new accounts receive $18 in free credits, and the Codex series is free once you are admitted from the waitlist).
- Language models are highly sensitive to sampling hyperparameters, so you may need to tune them for each model to obtain the best results (see the sampling sketch after these notes).
- The code uses the list of available actions supported by VirtualHome 1.0's Evolving Graph Simulator. The available actions are stored in `available_actions.json` and should cover a large variety of household tasks. However, you may modify or replace this file if you are interested in a different set of actions or a different domain of tasks (beyond the household domain).
- A subset of the manually annotated examples originally collected by the VirtualHome paper is used as the available examples in the prompt. They are converted to natural-language format and stored in `available_examples.json`. Feel free to change this file for a different set of available examples. (A sketch of loading these files appears after these notes.)
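For reference, here is a minimal sketch of setting sampling hyperparameters when generating with a Hugging Face Transformers model. The model name, prompt, and values are illustrative placeholders, not the settings used in the paper:

```python
# Minimal sketch of tuning sampling hyperparameters with Hugging Face Transformers.
# The model name, prompt, and hyperparameter values below are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-1.3B"  # any causal LM; larger models tend to plan better
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Task: Make breakfast\nStep 1:", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,        # sampled decoding; results are sensitive to the values below
    temperature=0.3,
    top_p=0.9,
    max_new_tokens=64,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```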
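Similarly, a minimal sketch of inspecting the two JSON files before editing them for a new domain. The exact schema is whatever the files in this repo define, so the snippet only loads and counts entries; adjust the paths if the files live in a subdirectory:

```python
# Minimal sketch: load the action and example files shipped with the repo.
# Adjust the paths to wherever the files live in your checkout.
import json

with open("available_actions.json") as f:
    actions = json.load(f)
with open("available_examples.json") as f:
    examples = json.load(f)

# Works whether the top-level structure is a list or a dict.
print(f"{len(actions)} available actions, {len(examples)} available examples")
```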