
Exploiting Network Redundancy for Low-Cost Neural Network Realizations

Keegstra, H. (1996) Exploiting Network Redundancy for Low-Cost Neural Network Realizations. Master's Thesis / Essay, Computing Science.

Infor_Ma_1996_HKeegstra.CV.pdf - Published Version



Neural networks are constructed and trained on powerful workstations. For real-world applications, however, neural networks need to be implemented on devices (e.g. embedded controllers) with limited precision, storage and computational power. The mapping of a trained ideal neural network to such a limited environment is by no means straightforward. One has to consider proper word-size selection and activation-function simplification without disturbing the network behavior too much (i.e. both networks have to meet a user-specified specification). This transformation process reduces the available redundancy in a trained network without affecting its behavior.

In this thesis, we present two redundancy-reduction approaches which automatically determine and exploit the available redundancy in a trained neural network to transform the network into a simplified version (with approximately the same behavior) which can be implemented at lower cost. The simplification procedures address the activation function type as well as the selection of the required weight precision and range. This also includes pruning of unnecessary connections and neurons. The usefulness of the presented approaches is illustrated by an image-processing and an optical character recognition (OCR) application.

The first approach is a greedy algorithm which selects a neuron (or connection) and tries to replace it with a less expensive one. After each change, the algorithm checks whether the specifications are still met. This method is slow and usually does not yield an efficient result; however, it always guarantees a behavioral invariant transformation.

A faster transformation is achieved by the local approach. This method tries to pinpoint the redundancy of a neuron using data local to that neuron, i.e. its input and output swing. Based on the local data, redundancy indices are calculated which are used to determine the activation function replacement.
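The greedy replace-and-check loop described above can be sketched as follows. This is only an illustrative reading of the idea, not the thesis's implementation: the toy one-hidden-layer network, the cost-ordered activation list, and the tolerance-based specification check are all assumptions introduced for the example.

```python
import math

# Cost-ordered activation functions, cheapest first; the list and its
# ordering are illustrative assumptions, not taken from the thesis.
ACTIVATIONS = [
    ("constant",  lambda x: 0.5),                                 # ignores its input
    ("threshold", lambda x: 1.0 if x >= 0 else 0.0),
    ("piecewise", lambda x: min(1.0, max(0.0, 0.5 + 0.25 * x))),
    ("sigmoid",   lambda x: 1.0 / (1.0 + math.exp(-x))),          # most expensive
]

def forward(weights, acts, x):
    """Toy one-hidden-layer net: neuron i applies activation acts[i] to w_i * x."""
    hidden = [ACTIVATIONS[a][1](w * x) for w, a in zip(weights, acts)]
    return sum(hidden) / len(hidden)

def greedy_simplify(weights, acts, samples, tol):
    """For each neuron, try the cheapest activation first and keep the
    replacement only if the simplified net still matches the reference
    outputs within tol -- a stand-in for the user-specified spec."""
    reference = [forward(weights, acts, x) for x in samples]
    acts = list(acts)
    for i in range(len(acts)):
        for cheaper in range(acts[i]):           # indices below acts[i] cost less
            trial = acts[:i] + [cheaper] + acts[i + 1:]
            deviation = max(abs(forward(weights, trial, x) - r)
                            for x, r in zip(samples, reference))
            if deviation <= tol:                 # spec still met: accept and stop
                acts = trial
                break
    return acts
```

With weights `[0.01, 2.0]`, for instance, the near-dead first neuron collapses to a constant while the active second neuron keeps a nonlinearity. Because every accepted change is re-checked against the original reference outputs, the result stays within the tolerance by construction, which mirrors the behavioral-invariance guarantee at the price of many full evaluations.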
Weight precision selection is performed by a bit-pruning procedure; it uses the connection swing or a Karnin connection sensitivity analysis to determine the required number of bits for a connection. The local approach is fast and delivers efficient results; however, the transformations are not independent of each other. This means a behavioral invariant transformation cannot always be guaranteed. A combination of the global and the local approach is therefore suggested: the redundancy indices can be used to guide the optimization procedure of the global approach. This ensures a behavioral invariant transformation, delivering a sub-optimal neural network.
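A bit-pruning pass in the spirit of the above might look like the following sketch. The uniform signed quantizer, the per-connection downward search, and the `evaluate` deviation measure are assumptions made for illustration; the Karnin sensitivity alternative mentioned in the abstract is not shown.

```python
def quantize(w, bits, w_max):
    """Uniform signed quantization of w onto a bits-wide grid over [-w_max, w_max]."""
    if bits <= 0:
        return 0.0          # zero bits: the connection is pruned entirely
    step = w_max / (2 ** (bits - 1))
    return round(w / step) * step

def bit_prune(weights, evaluate, tol, max_bits=16):
    """Lower each connection's word size one bit at a time, keeping a
    reduction only while `evaluate` (which should return the worst-case
    output deviation of the quantized net) stays within tol."""
    w_max = max(abs(w) for w in weights)
    bits = [max_bits] * len(weights)
    for i in range(len(weights)):
        while bits[i] > 0:
            trial = bits[:i] + [bits[i] - 1] + bits[i + 1:]
            quantized = [quantize(w, b, w_max) for w, b in zip(weights, trial)]
            if evaluate(quantized) > tol:
                break                     # spec violated: keep the current width
            bits[i] -= 1
    return bits
```

On a toy linear net, a connection whose contribution barely moves the output prunes all the way to zero bits (i.e. the connection is removed), while a significant connection retains the precision it needs; this is the connection-pruning side effect the abstract mentions.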

Item Type: Thesis (Master's Thesis / Essay)
Degree programme: Computing Science
Thesis type: Master's Thesis / Essay
Language: English
Date Deposited: 15 Feb 2018 07:29
Last Modified: 15 Feb 2018 07:29
