van der Velde, Storm Tomas Martzen (2018) Learning to play Frogger using Q-Learning. Bachelor's Thesis, Artificial Intelligence.
Full text: Bachelor_project_scriptie.pdf (284kB). toestemming.pdf (permission form; restricted to registered users, 98kB).
Abstract
In this thesis, the use of a vision grid is explored in order to train agents to play the arcade game Frogger, using Q-Learning combined with a Multilayer Perceptron (MLP). The game Frogger can be split into two smaller tasks: crossing the road, and crossing the river to reach a goal. As these tasks are not connected, an agent that uses two neural networks, each completing one task, is explored and its performance compared to that of an agent with a single neural network. Furthermore, the use of single-action networks and learning from demonstration is also explored. The results show that, while the single-action-network and two-network approaches are both able to complete the road section of the game with near-perfect performance, none of the approaches was able to play the game at the level of a human. The single-action-network agents were found to generalize better than the other agents on the road section. Learning from demonstration did not significantly improve performance for the two-network or single-action-network agents on the road section.
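As a rough illustration only (not code from the thesis), the sketch below shows the kind of Q-Learning update the abstract describes: an MLP maps a vision-grid state to one Q-value per action and is trained towards the temporal-difference target. The grid size, action set, reward values, network size, and learning rates here are assumptions chosen for the example, not values taken from the thesis.

```python
# Minimal sketch of Q-Learning with an MLP over a vision-grid state.
# All sizes and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 5        # assumed Frogger actions: up, down, left, right, wait
GRID_CELLS = 7 * 7   # assumed vision grid centred on the frog, flattened
GAMMA = 0.95         # discount factor (assumed)
ALPHA = 0.01         # learning rate (assumed)

# One-hidden-layer MLP: vision grid -> Q-value per action.
W1 = rng.normal(scale=0.1, size=(GRID_CELLS, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Forward pass: flattened vision grid -> Q-values and hidden activations."""
    h = np.tanh(state @ W1 + b1)
    return h @ W2 + b2, h

def q_learning_step(state, action, reward, next_state, done):
    """One Q-Learning update pushed through the MLP by hand-coded backprop."""
    global W1, b1, W2, b2
    q, h = q_values(state)
    q_next, _ = q_values(next_state)
    target = reward if done else reward + GAMMA * np.max(q_next)
    td_error = target - q[action]
    # Gradient of 0.5 * td_error**2 w.r.t. the taken action's Q-value is -td_error;
    # only that output unit receives an error signal.
    grad_out = np.zeros(N_ACTIONS)
    grad_out[action] = -td_error
    grad_pre = (grad_out @ W2.T) * (1.0 - h ** 2)  # backprop through tanh (old W2)
    W2 -= ALPHA * np.outer(h, grad_out)
    b2 -= ALPHA * grad_out
    W1 -= ALPHA * np.outer(state, grad_pre)
    b1 -= ALPHA * grad_pre
    return td_error

# Example call with dummy binary grid observations:
s = rng.integers(0, 2, GRID_CELLS).astype(float)
s_next = rng.integers(0, 2, GRID_CELLS).astype(float)
q_learning_step(s, action=0, reward=-0.1, next_state=s_next, done=False)
```

In the two-network and single-action-network variants discussed in the abstract, the same kind of update would be applied to separate networks (one per task, or one per action) rather than to a single shared MLP.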
| Item Type: | Thesis (Bachelor's Thesis) |
|---|---|
| Supervisor name: | Wiering, M.A. |
| Degree programme: | Artificial Intelligence |
| Thesis type: | Bachelor's Thesis |
| Language: | English |
| Date Deposited: | 04 Jun 2018 |
| Last Modified: | 07 Jun 2018 13:20 |
| URI: | https://fse.studenttheses.ub.rug.nl/id/eprint/17321 |