
Multi-Task Learning on classic control tasks with Deep Q Learning

Laan, Thijs van der (2022) Multi-Task Learning on classic control tasks with Deep Q Learning. Bachelor's Thesis, Artificial Intelligence.

Files:
Thijs_van_der_Laan_s3986721_Bachelors_Thesis.pdf (13MB)
toestemming.pdf (121kB, restricted to registered users only)

Abstract

In Reinforcement Learning, an agent is often trained in only one environment. Consequently, it becomes fitted to that environment and is ineffective in other environments. Human learning, in contrast, shows that learning multiple tasks at once is possible and can even be beneficial in terms of learning efficiency and time. Learning multiple tasks simultaneously is called Multi-Task Learning. This paper investigates whether a Deep Q-Learning agent using a multilayer perceptron as a function approximator can also benefit from Multi-Task Learning, by training it simultaneously on the classic control problems Acrobot, Cartpole, and Mountaincar. Ultimately, we find that an agent can be trained to solve Acrobot and Cartpole comparably to a traditionally trained agent. Success varies across hyperparameter configurations: epsilon values, the number of episodes between environment switches, and the use of regularizers. However, an agent trained on all three environments shows less evidence of successful training in every environment.
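The abstract describes a single Deep Q-Learning agent with a shared multilayer perceptron that is trained on Acrobot, Cartpole, and Mountaincar by switching environments every so many episodes. The sketch below is one plausible way to set up such a training loop; it is not the thesis's implementation. The choice of gymnasium and PyTorch, the zero-padding of observations to a common input size, the shared output head sized to the largest action space, the network sizes, the fixed epsilon, and the switch interval are all assumptions made for illustration.

# Hypothetical sketch of multi-task DQN with round-robin environment switching.
# Library and hyperparameter choices are assumptions, not the thesis setup.
import random
from collections import deque

import gymnasium as gym
import torch
import torch.nn as nn

ENV_IDS = ["Acrobot-v1", "CartPole-v1", "MountainCar-v0"]
envs = [gym.make(e) for e in ENV_IDS]
OBS_DIM = int(max(e.observation_space.shape[0] for e in envs))  # pad shorter observations
N_ACTIONS = int(max(e.action_space.n for e in envs))            # shared output head

def pad(obs):
    """Zero-pad an observation to the common input size."""
    out = torch.zeros(OBS_DIM)
    out[: len(obs)] = torch.as_tensor(obs, dtype=torch.float32)
    return out

q_net = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=50_000)
GAMMA, EPSILON, BATCH, SWITCH_EVERY = 0.99, 0.1, 64, 10  # episodes per environment

def act(state, n_valid):
    """Epsilon-greedy action restricted to the current task's action count."""
    if random.random() < EPSILON:
        return random.randrange(n_valid)
    with torch.no_grad():
        q = q_net(state)
    return int(q[:n_valid].argmax())

def train_step():
    """One gradient step on a minibatch sampled from the shared replay buffer."""
    if len(buffer) < BATCH:
        return
    s, a, r, s2, done = map(torch.stack, zip(*random.sample(buffer, BATCH)))
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * q_net(s2).max(1).values * (1 - done)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

for episode in range(300):
    env = envs[(episode // SWITCH_EVERY) % len(envs)]  # round-robin task switching
    n_valid = env.action_space.n
    state, _ = env.reset()
    state, done = pad(state), False
    while not done:
        action = act(state, n_valid)
        obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        next_state = pad(obs)
        buffer.append((state, torch.tensor(action), torch.tensor(float(reward)),
                       next_state, torch.tensor(float(done))))
        state = next_state
        train_step()

A single replay buffer and a single network are shared across tasks here; alternatives such as per-task output heads or separate buffers would also be consistent with the abstract's description.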

Item Type: Thesis (Bachelor's Thesis)
Supervisor name: Sabatelli, M.
Degree programme: Artificial Intelligence
Thesis type: Bachelor's Thesis
Language: English
Date Deposited: 05 Jul 2022 10:18
Last Modified: 05 Jul 2022 10:18
URI: https://fse.studenttheses.ub.rug.nl/id/eprint/27603
