
An Analysis of Decompositional Rule Extraction for Explainable Neural Networks

Dupuis, Nicholas. An Analysis of Decompositional Rule Extraction for Explainable Neural Networks. Bachelor's Thesis, Artificial Intelligence.

Full text: Final Draft Nicholas Dupuis.pdf (419kB)

Abstract

As artificially intelligent systems take on a more important role in our society, it becomes essential to be able to explain their decisions. Neural networks have recently been among the most successful tools for producing intelligent systems, but their decisions are inherently difficult to explain, as the internal state is incomprehensible to humans. Rule extraction seeks to make that state accessible and thereby bring explainability to neural networks. This paper analyses a decompositional approach to rule extraction as applied to feed-forward networks trained with backpropagation, and finds that explainability may come at the cost of losing robustness in the presence of noise, scalability and reasonable time complexity, and flexibility to learn different types of relationships.
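To illustrate the decompositional idea discussed in the abstract, the sketch below implements a simplified, KT-style subset search for a single neuron with binary inputs: it looks for minimal sets of positively weighted inputs whose combined weight pushes the neuron past its activation threshold, and emits each such set as an IF-THEN rule. This is a hypothetical toy version for illustration (the thesis's exact algorithm and network may differ), and it deliberately ignores negative-weight inputs for brevity. Note how the search enumerates subsets of inputs, which hints at the exponential time complexity the abstract identifies as a cost of this approach.

```python
from itertools import combinations

def extract_rules(weights, bias, names):
    """Toy decompositional rule extraction for one neuron with binary
    (0/1) inputs and a step activation: the neuron fires when
    sum(w_i * x_i) + bias > 0.  Returns minimal subsets of
    positively weighted inputs that alone guarantee firing.
    Simplified KT-style sketch; negative weights are ignored."""
    pos = [(n, w) for n, w in zip(names, weights) if w > 0]
    rules = []
    for r in range(1, len(pos) + 1):
        for subset in combinations(pos, r):
            chosen = {n for n, _ in subset}
            # Skip supersets of an already-found (smaller) rule.
            if any(set(rule) <= chosen for rule in rules):
                continue
            if sum(w for _, w in subset) > -bias:  # neuron fires
                rules.append(tuple(n for n, _ in subset))
    return rules

# Example neuron: fires only when inputs a AND b are both active.
rules = extract_rules([2.0, 1.5, -1.0], bias=-3.0, names=["a", "b", "c"])
print(rules)  # [('a', 'b')]  -> rule: IF a AND b THEN fire
```

Applied to a full network, this per-neuron procedure would be repeated layer by layer and the resulting rules composed, which is what distinguishes decompositional from pedagogical (black-box) extraction.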

Item Type: Thesis (Bachelor's Thesis)
Supervisor: Verheij, Bart
Degree programme: Artificial Intelligence
Thesis type: Bachelor's Thesis
Language: English
Date Deposited: 20 Nov 2018 11:37
Last Modified: 29 Jan 2019 13:24
URI: http://fse.studenttheses.ub.rug.nl/id/eprint/18830
