
Deep Reinforcement Learning in Atari 2600 Games

Bick, Daniel and Lehmkuhl, Jannik (2019) Deep Reinforcement Learning in Atari 2600 Games. Bachelor's Thesis, Artificial Intelligence.

Full text: AI_BA_2019_DANIELBICK.pdf (2MB)

Abstract

Recent research in the domain of Reinforcement Learning (RL) has often focused on the popular deep RL algorithm Deep Q-learning (DQN). A similar deep RL algorithm, Deep Quality-Value Learning (DQV), has received much less attention, despite its potential to outperform DQN: because DQV bootstraps from a state-value mapping rather than a state-action-value mapping as its target, it can theoretically learn faster. This thesis compares the two algorithms on their performance in learning to play different Atari 2600 games provided by the OpenAI gym. The impact of different exploration strategies on the learning performance of DQN and DQV is also examined: a diversity-driven approach (Div-DQN and Div-DQV) and a noisy-network approach (NoisyNet-DQN and NoisyNet-DQV) are compared to traditional implementations of DQN and DQV. The results show that the standard DQV algorithm outperforms DQN, and that DQV-based variants in general slightly outperform DQN-based variants. The NoisyNet approach yields the overall best training outcome, followed by DQV, the diversity-driven approach, and DQN.
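The abstract's key distinction between the two algorithms lies in the bootstrap target: DQN bootstraps from the maximum state-action value of the next state, while DQV bootstraps from a plain state-value estimate. A minimal sketch of the two targets is shown below; the function names and the example numbers are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def dqn_target(reward, q_next, gamma=0.99, done=False):
    """DQN-style target: bootstrap from the max state-action
    value Q(s', a') of the next state (zero at episode end)."""
    return reward + (0.0 if done else gamma * float(np.max(q_next)))

def dqv_target(reward, v_next, gamma=0.99, done=False):
    """DQV-style target: both the Q- and the V-network regress
    toward this value, bootstrapping from the state-value
    estimate V(s') instead of a max over actions."""
    return reward + (0.0 if done else gamma * v_next)

# Hypothetical next-state estimates, purely for illustration
q_next = np.array([0.2, 0.5, 0.1])   # Q(s', a) per action
v_next = 0.4                         # V(s')

print(dqn_target(1.0, q_next))  # 1.0 + 0.99 * max(q_next)
print(dqv_target(1.0, v_next))  # 1.0 + 0.99 * v_next
```

Because the DQV target needs only a single scalar state-value prediction rather than a maximization over all action values, its target network is cheaper to learn, which is the theoretical source of the increased learning speed discussed in the abstract.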

Item Type: Thesis (Bachelor's Thesis)
Supervisor: Wiering, M.A. (M.A.Wiering@rug.nl)
Degree programme: Artificial Intelligence
Thesis type: Bachelor's Thesis
Language: English
Date Deposited: 28 Aug 2019
Last Modified: 11 Sep 2019 06:49
URI: http://fse.studenttheses.ub.rug.nl/id/eprint/20812
