
Deep Reinforcement Learning in Atari 2600 Games

Lehmkuhl, Jannik and Bick, Daniel (2019) Deep Reinforcement Learning in Atari 2600 Games. Bachelor's Thesis, Artificial Intelligence.

Full text: AI_BA_2019_JANNIKLEHMKUHL.pdf (2MB)
Permission form: Toestemming.pdf (141kB, restricted to registered users only)

Abstract

Recent research in the domain of Reinforcement Learning (RL) has often focused on the popular deep RL algorithm Deep Q-learning (DQN). A similar deep RL algorithm, Deep Quality-Value Learning (DQV), has received much less attention, despite its potential to outperform DQN: because its target network approximates a state-value mapping rather than a state-action-value mapping, DQV has a theoretical advantage in learning speed. This thesis compares these two deep RL algorithms on their performance in learning to play different Atari 2600 games provided by the OpenAI gym. The impact of different exploration strategies on the learning performance of DQN and DQV is also examined; more specifically, a diversity-driven approach (Div-DQN and Div-DQV) and a noisy-network approach (NoisyNet-DQN and NoisyNet-DQV) are compared to the traditional implementations of DQN and DQV. The results show that the standard DQV algorithm outperforms DQN and that DQV-based variants generally slightly outperform their DQN-based counterparts. The NoisyNet approach yields the overall best training outcome, followed by standard DQV, the diversity-driven approach, and DQN.
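To illustrate the algorithmic difference the abstract refers to, the sketch below contrasts the bootstrap targets of the two algorithms: DQN bootstraps from a target Q-network (a state-action-value mapping), while DQV bootstraps from a target V-network (a state-value mapping), using that same target to regress both its online V- and Q-networks. This is a minimal illustration, not code from the thesis; all names (q_target_net, v_target_net, etc.) are hypothetical, and PyTorch is assumed only for convenience.

```python
import torch

def dqn_targets(q_target_net, rewards, next_states, dones, gamma=0.99):
    """DQN target: y = r + gamma * max_a Q'(s', a), using the target Q-network."""
    with torch.no_grad():
        next_q = q_target_net(next_states).max(dim=1).values
    return rewards + gamma * (1.0 - dones) * next_q

def dqv_targets(v_target_net, rewards, next_states, dones, gamma=0.99):
    """DQV target: y = r + gamma * V'(s'), using a target state-value network.

    The same target is used to train both the online V-network and the
    online Q-network, which is the source of DQV's claimed speed-up.
    """
    with torch.no_grad():
        next_v = v_target_net(next_states).squeeze(-1)
    return rewards + gamma * (1.0 - dones) * next_v
```

The exploration variants mentioned in the abstract change how actions are selected (parameter noise in NoisyNet, a diversity bonus in Div-DQN/Div-DQV) but leave these target computations unchanged.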

Item Type: Thesis (Bachelor's Thesis)
Supervisor name: Wiering, M.A.
Degree programme: Artificial Intelligence
Thesis type: Bachelor's Thesis
Language: English
Date Deposited: 28 Aug 2019
Last Modified: 11 Sep 2019 06:27
URI: https://fse.studenttheses.ub.rug.nl/id/eprint/20814
