Learning to Play Pac-Xon Using Different Kinds of Q-Learning

Schilperoort, Jits and Mak, Ivar (2018) Learning to Play Pac-Xon Using Different Kinds of Q-Learning. Bachelor's Thesis, Artificial Intelligence.

Files:
AI_BA_2018_SCHILPEROORT.pdf (780 kB)
Toestemming.pdf (94 kB, restricted to registered users only)

Abstract

When reinforcement learning (RL) is applied to games, it is usually implemented with Q-learning. However, Q-learning has been shown to have its flaws. A simple addition to Q-learning exists in the form of double Q-learning, which has shown promising results. In this study, it is investigated whether the advantage double Q-learning has shown in other studies also holds when it is combined with a multilayer perceptron (MLP) that uses a feature representation of the game state (higher order inputs). Furthermore, we set up an alternative reward function, which is compared to a conventional reward function to see whether presenting higher rewards towards the end of a level increases the performance of the algorithms. For the experiments, the game Pac-Xon is used. Pac-Xon is an arcade video game in which the player tries to fill the level space by conquering blocks while being threatened by enemies. We found that both variants of the Q-learning algorithm can be used to successfully learn to play Pac-Xon. Furthermore, double Q-learning obtains a higher performance than Q-learning, and the progressive reward function does not yield significantly better results than the regular reward function.
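For readers unfamiliar with the distinction the abstract draws, a minimal sketch of the tabular double Q-learning update follows. This is an illustrative example rather than the thesis implementation (which replaces the tables with an MLP on higher order inputs); the state, action, and hyperparameter names are hypothetical.

import random
from collections import defaultdict

# Illustrative tabular double Q-learning (hypothetical states/actions and
# hyperparameters; not the thesis code).
alpha, gamma = 0.1, 0.95
QA = defaultdict(float)  # first value table
QB = defaultdict(float)  # second value table

def best_action(Q, state, actions):
    # Greedy action according to one of the two tables.
    return max(actions, key=lambda a: Q[(state, a)])

def double_q_update(state, action, reward, next_state, actions):
    # Randomly pick which table to update; that table selects the greedy
    # next action, while the other table evaluates it. Decoupling selection
    # from evaluation reduces the overestimation bias of Q-learning.
    if random.random() < 0.5:
        a_star = best_action(QA, next_state, actions)
        target = reward + gamma * QB[(next_state, a_star)]
        QA[(state, action)] += alpha * (target - QA[(state, action)])
    else:
        b_star = best_action(QB, next_state, actions)
        target = reward + gamma * QA[(next_state, b_star)]
        QB[(state, action)] += alpha * (target - QB[(state, action)])

Standard Q-learning corresponds to using a single table that both selects and evaluates the greedy next action, which is the source of the overestimation that double Q-learning mitigates.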

Item Type: Thesis (Bachelor's Thesis)
Supervisor name: Wiering, M.A.
Degree programme: Artificial Intelligence
Thesis type: Bachelor's Thesis
Language: English
Date Deposited: 13 Jun 2018
Last Modified: 20 Jun 2018 13:08
URI: https://fse.studenttheses.ub.rug.nl/id/eprint/17361
