van de Wolfshaar, J. (2017) Deep Reinforcement Learning of Video Games. Master's Thesis / Essay, Artificial Intelligence.
Text: Artificial_Intelligence_Deep_R_1.pdf - Published Version (8MB)
Text: toestemming.pdf - Restricted to Backend only (78kB)
Abstract
The ability to learn is arguably the most crucial aspect of human intelligence. In reinforcement learning, we attempt to formalize a certain type of learning that is based on rewards and penalties. These supervisory signals should guide an agent to learn optimal behavior. In particular, this research focuses on deep reinforcement learning, where the agent should learn to play video games solely from pixel input. This thesis contributes to deep reinforcement learning research by assessing several variations of an existing state-of-the-art algorithm. First, we provide an extensive analysis of how the design decisions of the agent's deep neural network affect its performance. Second, we introduce a novel neural layer that allows for local specializations in the visual input of the agents, as opposed to the global weight sharing that occurs in convolutional layers. Third, we introduce a 'what' and 'where' neural network architecture, inspired by the information flow of the visual cortical areas in the human brain. Finally, we explore prototype-based deep reinforcement learning by introducing a novel output layer that is largely inspired by learning vector quantization. In a subset of our experiments, we show substantial improvements compared to existing alternatives.
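To make the prototype-based idea concrete, the sketch below shows one possible way an LVQ-inspired output layer could map encoder features to a policy: each action is associated with a prototype vector, and actions whose prototypes lie closer to the current feature vector receive higher probability. The function name, the shapes, the single-prototype-per-action setup, and the softmax-over-negative-distances formulation are illustrative assumptions, not the exact formulation used in the thesis.

```python
import numpy as np

def lvq_policy(features, prototypes, temperature=1.0):
    """Illustrative LVQ-style output layer (not the thesis' exact method).

    features:   (d,) feature vector produced by the convolutional encoder
    prototypes: (num_actions, d) one learnable prototype per action (assumption)
    Returns a probability distribution over actions.
    """
    # Squared Euclidean distance from the feature vector to each prototype.
    dists = np.sum((prototypes - features) ** 2, axis=1)
    # Closer prototypes yield higher preference for the corresponding action.
    logits = -dists / temperature
    # Numerically stable softmax over the negative distances.
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, num_actions = 256, 6  # e.g. an Atari-sized action set (assumed sizes)
    features = rng.normal(size=d)
    prototypes = rng.normal(size=(num_actions, d))
    pi = lvq_policy(features, prototypes)
    print(pi, pi.sum())
```

In a full agent, the prototypes would be trained jointly with the encoder, e.g. by backpropagating a policy-gradient or Q-learning loss through the distance computation.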
Item Type: Thesis (Master's Thesis / Essay)
Supervisor name: Wiering, M.A. and Schomaker, L.R.B.
Degree programme: Artificial Intelligence
Thesis type: Master's Thesis / Essay
Language: English
Date Deposited: 15 Feb 2018 08:32
Last Modified: 02 May 2019 09:30
URI: https://fse.studenttheses.ub.rug.nl/id/eprint/15851