This repository has been archived by the owner on Nov 24, 2022. It is now read-only.


Solving Unity ML-Agents with PyTorch

In this series of tutorials, we'll solve Unity environments with deep reinforcement learning using PyTorch. The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents.

Agents can be trained using reinforcement learning, imitation learning, neuroevolution, or other machine learning methods through a simple-to-use Python API. Currently, Unity only supports TensorFlow for training models and has no built-in PyTorch support. To train these environments with PyTorch, we'll use the standalone builds of the environments.
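
The Python API exposes a gym-style reset/step loop. Here is a minimal sketch of that loop shape; the `DummyBasicEnv` class and its reward scheme are invented stand-ins for illustration, not the real ML-Agents API:

```python
import random

class DummyBasicEnv:
    """Stand-in for a Unity environment: a 1-D chain with a goal at each end.

    This mimics the reset/step interaction pattern used when training agents;
    it is NOT the real ML-Agents API, just an illustration of the loop shape.
    """
    def __init__(self, size=7):
        self.size = size
        self.pos = size // 2

    def reset(self):
        self.pos = self.size // 2
        return self.pos  # observation

    def step(self, action):  # action: 0 = move left, 1 = move right
        self.pos += 1 if action == 1 else -1
        done = self.pos in (0, self.size - 1)
        # larger reward at the right end, smaller at the left
        reward = (1.0 if self.pos == self.size - 1 else 0.1) if done else -0.01
        return self.pos, reward, done

env = DummyBasicEnv()
state = env.reset()
total, done = 0.0, False
while not done:
    action = random.choice([0, 1])          # a trained agent would act greedily
    state, reward, done = env.step(action)
    total += reward
```

The real training loops in this repository follow the same pattern, with the Unity standalone build supplying observations and rewards instead of this toy class.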

Index

Installation

To get started with the tutorial, download or clone the repository. Then create a new conda environment and install the required dependencies from requirements.txt.

  • Clone this repository locally.

    git clone https://github.com/deepanshut041/ml_agents-pytorch.git
    cd ml_agents-pytorch
  • Create a new Python 3.7 environment.

    conda create --name unityai python=3.7
    conda activate unityai
  • Install ml-agents and other dependencies.

    pip install -r requirements.txt

Now that our environment is ready, download the standalone environments and place them in the unity_envs folder. You can download the build for your operating system from the links below.
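
Each standalone build has a platform-specific file extension. A small helper like the following (the function name is hypothetical; the suffix mapping follows Unity's usual standalone build naming) can construct the path you pass to the environment loader:

```python
import os
import platform

def env_path(name, root="unity_envs"):
    """Build the expected path of a standalone Unity build for this OS.

    The unity_envs folder matches this README's layout; the suffix mapping
    follows Unity's usual standalone build naming. This helper is only an
    illustration, not part of the repository's code.
    """
    suffix = {"Windows": ".exe", "Darwin": ".app", "Linux": ".x86_64"}
    return os.path.join(root, name + suffix.get(platform.system(), ""))

print(env_path("Basic"))  # e.g. unity_envs/Basic.x86_64 on Linux
```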

Environments

Basic

A linear movement task where the agent must move left or right toward rewarding states. The goal is to reach the state with the highest reward.

📰 Article 📹 Video Tutorial
📁 Implementation 📃 DQN
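
Before the full DQN, the task's structure can be seen with a tabular Q-learning sketch on a toy linear chain. The chain layout, rewards, and hyperparameters below are assumptions modelled on the Basic environment's description, not the repository's code:

```python
import random

random.seed(0)

SIZE = 7                      # states 0..6; the agent starts in the middle
SMALL, LARGE = 0.1, 1.0       # small goal on the left, large goal on the right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(SIZE)]   # Q[state][action]; 0 = left, 1 = right

def step(pos, action):
    """Toy chain dynamics: episode ends at either end of the chain."""
    pos += 1 if action == 1 else -1
    if pos == 0:
        return pos, SMALL, True
    if pos == SIZE - 1:
        return pos, LARGE, True
    return pos, -0.01, False

for episode in range(500):
    pos, done = SIZE // 2, False
    while not done:
        if random.random() < EPS:
            a = random.choice([0, 1])          # explore
        else:
            a = Q[pos].index(max(Q[pos]))      # exploit
        nxt, r, done = step(pos, a)
        target = r + (0.0 if done else GAMMA * max(Q[nxt]))
        Q[pos][a] += ALPHA * (target - Q[pos][a])
        pos = nxt

# After training, the greedy policy from the start state heads right,
# toward the larger reward — exactly the behavior the DQN must learn.
```

The DQN implementation linked above replaces the table `Q` with a neural network and learns from the Unity environment's observations instead of this toy chain.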

Any questions

If you have any questions, feel free to ask me:

Don't forget to follow me on Twitter, GitHub, and Medium to be alerted when I publish new articles.

How to help

  • Clap on articles: Clapping on Medium means you like my articles, and the more claps an article gets, the more it is shared and the more visible it becomes to the deep learning community.
  • Improve our notebooks: If you find a bug or have a better implementation, you can send a pull request.