
Performance comparison of model-based and model-free deep reinforcement learning methods in video game environments

Hucko, Benjamin (2024) Performance comparison of model-based and model-free deep reinforcement learning methods in video game environments. Bachelor's Thesis, Artificial Intelligence.

Full text: bAI2024HuckoB.pdf (600kB)
Abstract

Deep reinforcement learning methods require millions of samples to converge, and gathering this many samples is expensive. Model-based algorithms claim to converge with fewer samples. In this thesis I test how online model-based algorithms perform on environments with pixel-based states. Furthermore, I test whether model-based algorithms can handle an environment where the entire playing field is not visible in a single frame. I compare the SimPLe algorithm (Kaiser, Babaeizadeh, Milos, Osinski, Campbell, Czechowski, Erhan, Finn, Kozakowski, Levine, Mohiuddin, Sepassi, Tucker, and Michalewski, 2020) and the Dreamer algorithm (Hafner, Lillicrap, Ba, and Norouzi, 2020a). The model-based algorithms converge to lower performance than an agent trained directly on the environment, i.e. with a model-free method. Both algorithms struggle to learn the environment where the entire playing field is not in frame. SimPLe and Dreamer can be more sample-efficient when the agent is trained partially offline and the frames used for the initial offline training are excluded from the sample count.
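The sample-efficiency argument for model-based methods can be illustrated with a minimal sketch: collect a small number of real transitions, fit a dynamics and reward model to them, then improve the policy entirely inside the learned model, consuming no further real samples. This is only a toy tabular illustration of the idea, not SimPLe or Dreamer themselves, which learn pixel-level video-prediction and latent world models; the `ChainEnv` environment and all function names here are hypothetical.

```python
# Toy chain environment: states 0..4, actions -1/+1, reward 1 on reaching
# (or staying at) the final state. Counts how many real samples are used.
class ChainEnv:
    def __init__(self, n=5):
        self.n = n
        self.state = 0
        self.real_steps = 0  # real-environment samples consumed

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.real_steps += 1
        self.state = max(0, min(self.n - 1, self.state + action))
        reward = 1.0 if self.state == self.n - 1 else 0.0
        return self.state, reward


def learn_world_model(env):
    # Fit a tabular dynamics/reward model from real transitions.
    # Toy shortcut: we sweep every (state, action) pair directly;
    # a real agent would have to explore to gather this data.
    model = {}  # (state, action) -> (next_state, reward)
    for s in range(env.n):
        for a in (-1, 1):
            env.state = s
            s2, r = env.step(a)
            model[(s, a)] = (s2, r)
    return model


def plan_in_model(model, n_states=5, gamma=0.9, iters=50):
    # Value iteration "in imagination": every rollout happens inside the
    # learned model, so no additional real-environment steps are taken.
    V = [0.0] * n_states
    for _ in range(iters):
        for s in range(n_states):
            qs = [r + gamma * V[s2]
                  for a in (-1, 1)
                  for (s2, r) in [model[(s, a)]]]
            V[s] = max(qs)
    # Extract the greedy policy with respect to the learned model.
    policy = {}
    for s in range(n_states):
        best_q, best_a = None, 1
        for a in (-1, 1):
            s2, r = model[(s, a)]
            q = r + gamma * V[s2]
            if best_q is None or q > best_q:
                best_q, best_a = q, a
        policy[s] = best_a
    return policy
```

Under these assumptions the planner reaches the goal after consuming only the ten real transitions used to fit the model; a model-free learner would keep spending real samples on every policy update. The gap between the learned model and the true environment is also why, as the thesis reports, model-based agents can converge to lower final performance than model-free training.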

Item Type: Thesis (Bachelor's Thesis)
Supervisor name: Cardenas Cartagena, J. D.
Degree programme: Artificial Intelligence
Thesis type: Bachelor's Thesis
Language: English
Date Deposited: 28 Aug 2024 07:00
Last Modified: 28 Aug 2024 07:00
URI: https://fse.studenttheses.ub.rug.nl/id/eprint/34071
