Implementation of a neural network (MLP) and a Deep Q-Network (DQN) using only the numpy library. The DQN is trained to play the CartPole game.
Neural Network Construction: Notebook
This notebook provides a step-by-step procedure for constructing a multilayered perceptron.
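For orientation before opening the notebook, here is a minimal sketch of a one-hidden-layer forward pass in NumPy. The layer sizes, sigmoid activation, and variable names are illustrative assumptions, not the notebook's actual code.

```python
import numpy as np

# Illustrative only: a one-hidden-layer forward pass in NumPy.
# Layer sizes, the sigmoid activation, and the variable names are
# assumptions, not the notebook's actual code.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 16))   # input (4) -> hidden (16)
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 2))   # hidden (16) -> output (2)
b2 = np.zeros(2)

x = rng.normal(size=(1, 4))                # a single input example
hidden = sigmoid(x @ W1 + b1)              # hidden-layer activations
output = hidden @ W2 + b2                  # raw output scores
```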
Neural Network Implementation: NeuralNetwork
This file contains the full implementation of the neural network, with momentum added to the weight-update step. To save and load the NeuralNetwork, use save_network and load_network from saveload.py.
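Momentum keeps a running velocity for each weight matrix and folds it into the update, so steps keep moving in directions that have been consistently downhill. Below is a minimal sketch of that idea in NumPy; the hyperparameter values and variable names are assumptions, not the NeuralNetwork's actual attributes.

```python
import numpy as np

# Illustrative momentum update for a single weight matrix.
# Hyperparameter values and names are assumptions, not the
# NeuralNetwork's actual attributes.
learning_rate = 0.01
momentum = 0.9

W = np.zeros((16, 2))           # one weight matrix
velocity = np.zeros_like(W)     # persists across training steps

def momentum_step(W, velocity, grad_W):
    # velocity is a decaying sum of past gradients
    velocity = momentum * velocity - learning_rate * grad_W
    return W + velocity, velocity

grad_W = np.ones_like(W)        # stand-in gradient from backpropagation
W, velocity = momentum_step(W, velocity, grad_W)
```

Saving and loading is likely along the lines of save_network(network, path) and network = load_network(path), but check saveload.py for the exact signatures.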
Train DQN to Play Cartpole: Notebook
This notebook demonstrates how to use the NeuralNetwork to implement the DQN algorithm.
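The heart of the DQN update is the Bellman target for each sampled transition: r + gamma * max_a' Q(s', a'), with the bootstrap term dropped at terminal states. A minimal sketch of computing those targets with NumPy is shown below; the batch layout, discount factor, and the (1 - done) masking convention are assumptions, not the notebook's actual code.

```python
import numpy as np

# Illustrative DQN target computation for a sampled minibatch.
# The array layout, gamma, and the done-masking convention are
# assumptions, not the notebook's actual code.
gamma = 0.99

def dqn_targets(q_values, next_q_values, actions, rewards, dones):
    """q_values, next_q_values: (batch, n_actions); actions: (batch,) ints;
    rewards, dones: (batch,) floats, with dones == 1.0 at episode end."""
    targets = q_values.copy()
    best_next = next_q_values.max(axis=1)
    # Bellman target: r + gamma * max_a' Q(s', a'), zeroed at terminal states
    targets[np.arange(len(actions)), actions] = rewards + gamma * best_next * (1.0 - dones)
    return targets
```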
Maze Harvest: Environment
Check the Agent Training Notebook to learn more about the environment.
DQN Using TensorFlow: DQN
Agent Training: Notebook
This folder contains pre-trained networks. Refer to the notebooks above to learn how to load and use them.
This project is licensed under the terms of the GNU General Public License v3.0 - see the LICENSE file for details.