
An Analysis of Decompositional Rule Extraction for Explainable Neural Networks

Dupuis, Nicholas (2018) An Analysis of Decompositional Rule Extraction for Explainable Neural Networks. Bachelor's Thesis, Artificial Intelligence.

Final Draft Nicholas Dupuis.pdf



As artificially intelligent systems take on a more important role in our society, it becomes important to be able to explain their decisions. Neural networks have recently been one of the most successful tools for producing intelligent systems, but the decisions of neural networks are inherently difficult to explain, as the internal state is incomprehensible. Rule extraction seeks to make that state accessible to humans, and bring explainability to neural networks. This paper analyses a decompositional approach to rule extraction as applied to feedforward networks trained with backpropagation, and finds that explainability may come at the cost of losing robustness in the presence of noise, scalability and reasonable time complexity, and flexibility to learn different types of relationships.
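The decompositional approach mentioned in the abstract works unit by unit: each neuron's weights and bias are searched for input combinations that force the unit to fire, and each such combination becomes an IF-THEN rule. The sketch below is an illustrative, simplified example of this idea (in the style of classic subset-search rule extraction), not the thesis's exact algorithm; it assumes binary 0/1 inputs, non-negative weights, and a hard-threshold activation, and the function name is hypothetical.

```python
from itertools import combinations

def extract_rules(weights, bias):
    """Illustrative decompositional rule extraction for one unit.

    Assumes binary 0/1 inputs, non-negative weights, and a hard
    threshold at zero (a sketch, not the thesis's exact method).
    Finds minimal sets of active inputs whose summed weights push
    the unit's pre-activation above zero; each set is one rule:
    IF all inputs in the set are on THEN the unit fires.
    """
    n = len(weights)
    rules = []
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            # Skip supersets of rules already found, keeping rules minimal.
            if any(set(rule) <= set(subset) for rule in rules):
                continue
            if sum(weights[i] for i in subset) + bias > 0:
                rules.append(subset)
    return rules

# A unit that fires only when inputs 0 and 1 are both active:
print(extract_rules([3.0, 2.5, 0.5], bias=-4.0))  # → [(0, 1)]
```

The exhaustive subset search also illustrates the scalability cost the abstract refers to: the number of candidate antecedents grows exponentially with the unit's fan-in.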

Item Type: Thesis (Bachelor's Thesis)
Supervisor name: Verheij, Bart
Degree programme: Artificial Intelligence
Thesis type: Bachelor's Thesis
Language: English
Date Deposited: 20 Nov 2018 11:37
Last Modified: 25 Mar 2022 15:23
