
Sampled Policy Gradient Compared to DPG, CACLA, and Q-Learning in the Game Agar.io

Wiehe, Anton (2018) Sampled Policy Gradient Compared to DPG, CACLA, and Q-Learning in the Game Agar.io. Bachelor's Thesis, Artificial Intelligence.


Abstract

The online game Agar.io has become massively popular on the internet due to its intuitive game design and the ability to instantly compete against players around the world. From the point of view of artificial intelligence this game is also very intriguing: it has continuous input and action spaces and allows diverse agents with complex strategies to compete against each other. This paper analyzes how to apply reinforcement learning techniques to this game. A new offline actor-critic learning algorithm is introduced: Sampled Policy Gradient (SPG). SPG samples in the action space to calculate an approximated policy gradient, using the critic to evaluate the samples. This sampling allows SPG to search the action-Q-value space more globally than DPG, theoretically enabling it to avoid more local optima. Q-Learning is compared against the actor-critic algorithms CACLA, DPG, and the novel SPG in a pellet-collection and a self-play environment. Results show that Q-Learning and CACLA outperform a pre-programmed greedy bot in the pellet-collection task, but all algorithms fail to outperform this bot in a fighting scenario. SPG is shown to be highly extendable through offline exploration, and it matches DPG in performance even in its basic form without extensive sampling.
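The core idea of SPG as described in the abstract (sample candidate actions, score them with the critic, and move the actor toward better-scoring actions) can be illustrated with a minimal sketch. This is not the thesis's implementation: the critic here is a hand-written toy function rather than a learned network, the actor is a bare action vector rather than a neural policy, and the update rule (move toward the best sample if it improves on the current action) is one simple variant of the sampling idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy critic: Q(s, a) is maximal when the action equals -state.
# Purely illustrative -- the thesis trains a neural-network critic.
def critic(state, action):
    return -np.sum((action + state) ** 2)

def spg_update(state, actor_action, n_samples=16, sigma=0.3, lr=0.5):
    """One SPG-style update step: sample actions around the actor's
    current output, evaluate each with the critic, and move the
    actor's action toward the best-scoring sample."""
    samples = actor_action + sigma * rng.standard_normal(
        (n_samples, actor_action.size))
    scores = np.array([critic(state, a) for a in samples])
    best = samples[np.argmax(scores)]
    # Only move if the best sample actually improves on the current action.
    if critic(state, best) > critic(state, actor_action):
        actor_action = actor_action + lr * (best - actor_action)
    return actor_action

state = np.array([0.5, -0.2])
action = np.zeros(2)
for _ in range(50):
    action = spg_update(state, action)
# After training, the action approaches the critic's optimum at -state.
```

Because the update direction comes from sampled evaluations of the critic rather than from its analytic gradient, the search is more global than DPG's gradient step, which is the property the abstract highlights.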

Item Type: Thesis (Bachelor's Thesis)
Supervisor name: Wiering, M.A.
Degree programme: Artificial Intelligence
Thesis type: Bachelor's Thesis
Language: English
Date Deposited: 27 Jul 2018
Last Modified: 30 Jul 2018 14:15
URI: https://fse.studenttheses.ub.rug.nl/id/eprint/18093
