
The effect of state representation in reinforcement learning applied to Tetris

Hendriks, Gijs (2020) The effect of state representation in reinforcement learning applied to Tetris. Bachelor's Thesis, Artificial Intelligence.

Files:

AI_BA_2020_Gijs_Hendriks.pdf — Download (406kB)
Toestemming.pdf — Restricted to registered users only — Download (93kB)

Abstract

Reinforcement learning (RL) is a paradigm within machine learning in which agents try to maximize their reward. They do so by making decisions based on a representation of the current state of the world. The state representation is an important factor in both the performance and the training time when applying reinforcement learning to a problem, since a representation can encapsulate different degrees of information about the world. In this paper, the effects of different state representations and combinations of state representations are compared. This is done for the classical game Tetris using the standard temporal difference learning method. The experiment shows that representations with built-in redundancy achieve the best results, around 23 lines cleared, while other representations carrying more information perform worse. Compared to similar RL systems in the literature this is a decent result; however, other, non-RL methods perform even better.
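The standard temporal difference learning method mentioned above can be illustrated with a minimal sketch. This is not the thesis's implementation; the feature names (holes, max_height, bumpiness) are hypothetical stand-ins for whatever board features a chosen Tetris state representation encodes, and a simple linear value function is assumed.

```python
# Minimal TD(0)-style value update with a linear state representation.
# Hyperparameters and features are illustrative assumptions, not the
# thesis's actual configuration.

ALPHA = 0.1   # learning rate
GAMMA = 0.9   # discount factor

def features(state):
    """Encode a Tetris-like state as a feature vector (hypothetical features)."""
    return [state["holes"], state["max_height"], state["bumpiness"]]

def value(weights, state):
    """Linear value estimate: V(s) = w . phi(s)."""
    return sum(w * f for w, f in zip(weights, features(state)))

def td_update(weights, state, reward, next_state):
    """One TD(0) step: w += alpha * (r + gamma * V(s') - V(s)) * phi(s)."""
    delta = reward + GAMMA * value(weights, next_state) - value(weights, state)
    return [w + ALPHA * delta * f for w, f in zip(weights, features(state))]

# One update from zero weights after clearing a line (reward 1.0):
w = [0.0, 0.0, 0.0]
s = {"holes": 2, "max_height": 5, "bumpiness": 3}
s_next = {"holes": 1, "max_height": 4, "bumpiness": 2}
w = td_update(w, s, reward=1.0, next_state=s_next)
```

The richer or more redundant the feature vector returned by `features`, the more (or more robustly) the linear value function can distinguish board states, which is the axis the thesis varies.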

Item Type: Thesis (Bachelor's Thesis)
Supervisor name: Wiering, M.A.
Degree programme: Artificial Intelligence
Thesis type: Bachelor's Thesis
Language: English
Date Deposited: 31 Jul 2020 14:45
Last Modified: 31 Jul 2020 14:45
URI: https://fse.studenttheses.ub.rug.nl/id/eprint/22952
