
Project 1 (Banana Navigation) for Udacity Deep Reinforcement Learning Nanodegree

The Project 1 solution for the Udacity Deep Reinforcement Learning Nanodegree.

[Image: the banana environment]

Run the code

    1. Download this repository.
    2. Install the requirements in a separate Anaconda environment: pip install -r requirements.txt
    3. Run the solution notebook Navigation.ipynb.

Goal

For this project, train an agent to navigate (and collect bananas!) in a large, square world.

A gif showing the navigation and collection of bananas.

The task is episodic, and in order to solve the environment, the agent must get an average score of +13 over 100 consecutive episodes.

Approach

The project takes the following steps to build and train the agent using a double DQN.

[Image: a high-level diagram of DQN]

1. The state and action space of this environment

The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. Given this information, the agent has to learn how to best select actions. Four discrete actions are available, corresponding to:

  • 0 - move forward.
  • 1 - move backward.
  • 2 - turn left.
  • 3 - turn right.

A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. Thus, the goal of the agent is to collect as many yellow bananas as possible while avoiding blue bananas.
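A minimal sketch of reading these sizes from the environment, assuming the unityagents wrapper used by the course and an env object created in an earlier cell:

brain_name = env.brain_names[0]                    # the default brain controls the agent
brain = env.brains[brain_name]

env_info = env.reset(train_mode=True)[brain_name]  # reset and grab the first observation
state = env_info.vector_observations[0]            # the 37-dimensional state
state_size = len(state)
action_size = brain.vector_action_space_size       # the 4 discrete actions

print("State size: {}, action size: {}".format(state_size, action_size))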

[Image: the neural network architecture]
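As a sketch of what such a network might look like in PyTorch (the hidden-layer sizes here are assumptions, not necessarily what the pictured architecture uses):

import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a 37-dimensional state to Q-values for the 4 discrete actions."""

    def __init__(self, state_size=37, action_size=4, hidden=64):
        super().__init__()
        # Hidden-layer sizes are an assumption; the pictured network may differ.
        self.net = nn.Sequential(
            nn.Linear(state_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_size),  # one Q-value per action
        )

    def forward(self, state):
        return self.net(state)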

2. Explore the environment by taking random actions

The code cell below shows how to use the Python API to control the agent and receive feedback from the environment.

Once this cell is executed, you can watch the agent's performance as it selects an action uniformly at random at each time step. A window should pop up that lets you observe the agent as it moves through the environment.

import numpy as np  # assumes env, brain_name and action_size were defined in earlier cells

env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0]            # get the current state
score = 0                                          # initialize the score
while True:
    action = np.random.randint(action_size)        # select an action uniformly at random
    env_info = env.step(action)[brain_name]        # send the action to the environment
    next_state = env_info.vector_observations[0]   # get the next state
    reward = env_info.rewards[0]                   # get the reward
    done = env_info.local_done[0]                  # see if the episode has finished
    score += reward                                # update the score
    state = next_state                             # roll over the state to the next time step
    if done:                                       # exit the loop when the episode finishes
        break

print("Score: {}".format(score))

3. Implement the double DQN algorithm to train the agent

The DeepMind system used a deep convolutional neural network, with layers of tiled convolutional filters to mimic the effects of receptive fields. Reinforcement learning is unstable or divergent when a nonlinear function approximator such as a neural network is used to represent Q. This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy of the agent and the data distribution, and the correlations between Q and the target values.

alt the diagrame of double DQN

The technique uses experience replay, a biologically inspired mechanism that learns from a random sample of prior transitions instead of only the most recent one. This removes correlations in the observation sequence and smooths changes in the data distribution. Iterative updates adjust Q towards target values that are only periodically updated, further reducing correlations with the target.
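A minimal replay-buffer sketch in this spirit (the class and parameter names here are illustrative, not necessarily those used in the notebook):

import random
from collections import deque, namedtuple

import numpy as np

# A single transition stored in the buffer.
Experience = namedtuple("Experience", ["state", "action", "reward", "next_state", "done"])

class ReplayBuffer:
    """Fixed-size buffer that stores transitions and samples them uniformly at random."""

    def __init__(self, buffer_size=int(1e5), batch_size=64, seed=0):
        self.memory = deque(maxlen=buffer_size)  # oldest experiences are dropped automatically
        self.batch_size = batch_size
        random.seed(seed)

    def add(self, state, action, reward, next_state, done):
        self.memory.append(Experience(state, action, reward, next_state, done))

    def sample(self):
        # Uniform random sampling breaks the temporal correlations in the observation sequence.
        experiences = random.sample(self.memory, k=self.batch_size)
        states = np.vstack([e.state for e in experiences])
        actions = np.vstack([e.action for e in experiences])
        rewards = np.vstack([e.reward for e in experiences])
        next_states = np.vstack([e.next_state for e in experiences])
        dones = np.vstack([e.done for e in experiences]).astype(np.uint8)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.memory)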

Because the future maximum approximated action value in Q-learning is evaluated using the same Q function as the current action-selection policy, in noisy environments Q-learning can sometimes overestimate the action values, which slows learning. A variant called double Q-learning was proposed to correct this. Double Q-learning is an off-policy reinforcement learning algorithm in which a different policy is used for value evaluation than the one used to select the next action.

In practice, two separate value functions, Q_local and Q_target, are trained in a mutually symmetric fashion using separate experiences. The double Q-learning update step is then as follows:
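In symbols, the target used by this update (a transcription of the standard double DQN target, with Q_local selecting the greedy action and Q_target evaluating it):

Y_t = R_{t+1} + \gamma \, Q_{\text{target}}\!\left(S_{t+1}, \operatorname{arg\,max}_{a} Q_{\text{local}}(S_{t+1}, a)\right)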

[Image: the double DQN update]
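A hedged PyTorch sketch of one such update on a sampled mini-batch (the function name and the GAMMA and TAU values are assumptions; the notebook's implementation may differ):

import torch.nn.functional as F

GAMMA = 0.99  # discount factor (assumed value)
TAU = 1e-3    # soft-update rate for the target network (assumed value)

def double_dqn_learn(q_local, q_target, optimizer, batch):
    """One double DQN update; q_local / q_target are torch.nn.Module Q-networks,
    and batch holds float tensors (states, actions, rewards, next_states, dones)."""
    states, actions, rewards, next_states, dones = batch

    # Select the greedy next action with the local network ...
    best_actions = q_local(next_states).detach().argmax(dim=1, keepdim=True)
    # ... but evaluate it with the target network (the double Q-learning idea).
    q_next = q_target(next_states).detach().gather(1, best_actions)

    # TD target: reward plus discounted next-state value (zero at episode end).
    targets = rewards + GAMMA * q_next * (1 - dones)

    # Current estimates for the actions actually taken.
    expected = q_local(states).gather(1, actions.long())

    loss = F.mse_loss(expected, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Soft-update the target network toward the local network.
    for t_param, l_param in zip(q_target.parameters(), q_local.parameters()):
        t_param.data.copy_(TAU * l_param.data + (1.0 - TAU) * t_param.data)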

4. Run experiments to measure agent performance

The average reward over the course of training is shown below:

[Image: average rewards during training]
