
Interpretable Function Approximation with Gaussian Processes in Value-Based Model-Free Reinforcement Learning

Lende, Matthijs van der (2024) Interpretable Function Approximation with Gaussian Processes in Value-Based Model-Free Reinforcement Learning. Bachelor's Thesis, Artificial Intelligence.

Files:
bAI2024MatthijsvanderLende.pdf (2MB)
toestemming.pdf (126kB, restricted to registered users)
Abstract

Estimating a value function for reinforcement learning (RL) in continuous spaces is a challenging task. To address it, the field employs various function approximators, including linear models and deep neural networks. Linear models are interpretable but can only model simple functions, while deep neural networks can model complex functions but tend to be black boxes. Gaussian process (GP) models aim to offer the best of both worlds: they can model complex nonlinear functions while providing interpretable uncertainty estimates. These include extensions such as the sparse variational GP (SVGP) and the deep GP (DGP). This thesis presents a Bayesian nonparametric framework for off-policy and on-policy learning that uses GPs to model the action-value function. Results on the CartPole and Lunar Lander environments show that SVGPs and DGPs significantly outperform linear function approximation but do not yet match the convergence speed or performance of deep RL algorithms based on neural networks. These findings highlight the potential of GPs as function approximators in RL tasks where uncertainty estimates and interpretability are mandatory.
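
As a rough illustration of the approach the abstract describes, the sketch below shows how a sparse variational GP can stand in for the action-value function Q(s, a): state-action features go in, a Gaussian over Q-values (a mean plus an uncertainty estimate) comes out, and the model is regressed onto TD targets. It uses the GPyTorch library; the input dimensions, kernel choice, inducing-point count, and training details are illustrative assumptions, not specifics taken from the thesis.

import torch
import gpytorch

class SVGPQFunction(gpytorch.models.ApproximateGP):
    """Sparse variational GP mapping state-action features to Q-values."""
    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        # Returns a Gaussian over Q-values: a mean prediction and a covariance
        # that quantifies the model's uncertainty about each Q-value.
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

# Hypothetical dimensions: 4-dim CartPole state + 1 action index = 5 features,
# with 64 learnable inducing points (both are illustrative choices).
inducing_points = torch.randn(64, 5)
model = SVGPQFunction(inducing_points)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=10_000)
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(likelihood.parameters()), lr=0.01
)

# One fitted-Q-style regression step on a batch of (state-action, TD target)
# pairs; here the batch is random placeholder data.
model.train(); likelihood.train()
x_batch, y_batch = torch.randn(32, 5), torch.randn(32)
optimizer.zero_grad()
loss = -mll(model(x_batch), y_batch)
loss.backward()
optimizer.step()

The predictive variance from the forward pass is what distinguishes this setup from a plain neural Q-network: it can be inspected directly or used to drive uncertainty-aware exploration.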

Item Type: Thesis (Bachelor's Thesis)
Supervisor name: Cardenas Cartagena, J.D.
Degree programme: Artificial Intelligence
Thesis type: Bachelor's Thesis
Language: English
Date Deposited: 28 Aug 2024 14:20
Last Modified: 28 Aug 2024 14:20
URI: https://fse.studenttheses.ub.rug.nl/id/eprint/34094
