Rijsbosch, Bram (2021) Responsible AI: Behind the Rationale of Neural Networks. Bachelor's Thesis, Artificial Intelligence.
Text: bAI_2021_RijsboschB.pdf (950kB)
Text: toestemming.pdf (118kB, restricted to registered users)
Abstract
Problems with complex machine learning models have led to growing concerns and a surge of interest in responsible artificial intelligence. An important subfield of responsible AI, explainable AI (XAI), has already led to the development of techniques capable of explaining the decision-making of these black-box systems, yet this is not enough; after all, as demonstrated in previous research, machine learning techniques may appear to perform well, obtaining high accuracy scores on test data, while actually reasoning with an unsound rationale. Using complex self-learning systems that unknowingly reason with an unsound rationale can have devastating real-world effects. This study therefore further explores the issues concerning the rationale of these complex machine learning systems. Using a new artificial domain, based on real-world conditions, this study confirms the result that neural networks can achieve high classification accuracies while not learning the conditions that define the data sets. It is demonstrated that standard techniques, such as using more data, deeper networks or less noise, do not solve this problem. Additional experiments, focused on finding more responsible practices, do reveal that using synthetic training data built upon domain knowledge can help to improve the rationale while maintaining high levels of accuracy.
| Item Type: | Thesis (Bachelor's Thesis) |
|---|---|
| Supervisor name: | Steging, C.C. |
| Degree programme: | Artificial Intelligence |
| Thesis type: | Bachelor's Thesis |
| Language: | English |
| Date Deposited: | 01 Jul 2021 10:27 |
| Last Modified: | 01 Jul 2021 10:27 |
| URI: | https://fse.studenttheses.ub.rug.nl/id/eprint/24848 |