Boers, Julia (2022) Explanation methods for regression tasks. Bachelor's Thesis, Artificial Intelligence.
Abstract
Explainable artificial intelligence (XAI) is the field that aims to make machine learning models explainable, for example by developing explanation methods. The majority of research on this topic focuses on classification problems, while real-world applications are often regression problems, and explanation methods developed for classification cannot thoughtlessly be applied to regression. In this Bachelor's thesis, three attribution-based explanation methods are compared when applied to regression models. Two gradient-based explanation methods, Guided Backpropagation (GBP) and Integrated Gradients (IG), and one model-agnostic method, Local Interpretable Model-agnostic Explanations (LIME), were applied to two regression models: a wine quality prediction (WQP) model and an age prediction (AP) model. The explanations were evaluated using the Deletion Area Under the Curve (DAUC) and Insertion Area Under the Curve (IAUC) metrics, and a user study was performed. The evaluations did not point to a single best-performing explanation method. For the WQP model, IG performed best according to the DAUC score, though not significantly, and LIME performed best according to the IAUC score. For the AP model, GBP performed best according to the DAUC score, LIME performed best according to the IAUC score, and IG received the most votes in the user study, although the differences in votes were not significant.
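To give a rough sense of the deletion-style evaluation the abstract refers to, the following Python sketch computes a DAUC-like score: features are progressively replaced with a baseline value in order of attributed importance, and the area under the resulting prediction curve is measured (a steep drop, hence a low area, suggests the attributions identified the truly important features). This is a minimal illustration only; the `deletion_auc` function, the toy linear model, the zero baseline, and the number of steps are assumptions for the example, not the thesis's actual implementation.

```python
import numpy as np

def deletion_auc(model, x, attributions, baseline=0.0, steps=20):
    """Sketch of a Deletion-AUC-style metric for a regression model.

    Replaces the most-attributed features with `baseline` in chunks,
    records the model's prediction after each chunk, and returns the
    area under that prediction curve over a normalized [0, 1] x-axis.
    """
    x = x.astype(float).copy()
    order = np.argsort(-np.abs(attributions))  # most important features first
    preds = [model(x)]
    chunks = np.array_split(order, steps)
    for chunk in chunks:
        x[chunk] = baseline          # "delete" this chunk of features
        preds.append(model(x))
    return np.trapz(preds, dx=1.0 / len(chunks))

# Toy usage: a linear "model" whose true importances are its weights,
# with gradient-times-input attributions (exact for a linear model).
rng = np.random.default_rng(0)
w = rng.normal(size=10)
model = lambda x: float(w @ x)
x0 = rng.normal(size=10)
attr = w * x0
print(deletion_auc(model, x0, attr))
```

An insertion-style (IAUC) score can be sketched the same way by starting from the all-baseline input and adding features back in order of importance, where a steep rise (high area) is better.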
| Item Type: | Thesis (Bachelor's Thesis) |
| --- | --- |
| Supervisor name: | Valdenegro Toro, M.A. |
| Degree programme: | Artificial Intelligence |
| Thesis type: | Bachelor's Thesis |
| Language: | English |
| Date Deposited: | 05 Aug 2022 10:39 |
| Last Modified: | 05 Aug 2022 10:39 |
| URI: | https://fse.studenttheses.ub.rug.nl/id/eprint/28261 |