
Parallelism versus Accuracy in Error Back-Propagation Learning

Meijering, G.F. (1996) Parallelism versus Accuracy in Error Back-Propagation Learning. Master's Thesis / Essay, Computing Science.

Full text: Infor_Ma_1996_GFMeijering.CV.pdf - Published Version (2MB)

Abstract

Training neural networks with the error back-propagation algorithm is a time-consuming task. The learning time can be reduced by parallelizing the algorithm. Owing to their layer-wise composition, neural networks are well suited to parallel implementation, at least as long as we stick to that layer-wise composition. This raises an interesting question: is it possible to introduce parallelism in error back-propagation learning that is not restricted to the layer-wise composition, yet does not damage the convergence properties of the algorithm? It turned out that using parallelism over the layers leads to convergence of the network only in some cases. In particular, we found that this approach can be successful with correlated input data, such as in function approximation. If parallelism is introduced, we need a means of realizing it. An obvious solution is to use multiple processing units. However, if we are restricted to a hardware realization with a single processing unit, we have to make the best of that single unit. One way to achieve parallelism on a single processing unit is to perform computations with less accuracy: on a 64-bit processor, for example, we could perform two 32-bit computations in parallel. Before this can be realized, another question must be answered: can computations be performed with less accuracy without damaging the convergence properties? Introducing less accuracy appears very well possible. We noticed that the random initialization of the network becomes more important when less accuracy is used. We also looked at the possibility of adapting the accuracy in a flexible manner. It turned out to be very hard to determine at which point the accuracy has to be raised, although some promising results were obtained.
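The thesis itself explores these questions experimentally; as a rough illustration of the reduced-accuracy question, the sketch below trains a tiny one-hidden-layer network with plain back-propagation at several floating-point precisions. The task (XOR), layer sizes, learning rate, and epoch count are illustrative choices, not taken from the thesis.

```python
import numpy as np

def train(dtype=np.float32, hidden=4, epochs=5000, lr=0.5, seed=0):
    """Plain back-propagation on a 2-hidden-1 network at a given precision."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=dtype)
    y = np.array([[0], [1], [1], [0]], dtype=dtype)
    # Random initialization; the abstract notes this matters more at low accuracy.
    W1 = rng.uniform(-1, 1, (2, hidden)).astype(dtype)
    W2 = rng.uniform(-1, 1, (hidden, 1)).astype(dtype)
    sigmoid = lambda z: (1.0 / (1.0 + np.exp(-z))).astype(dtype)
    for _ in range(epochs):
        h = sigmoid(X @ W1)                     # forward pass, hidden layer
        out = sigmoid(h @ W2)                   # forward pass, output layer
        err = y - out
        d_out = err * out * (1 - out)           # delta at the output layer
        d_h = (d_out @ W2.T) * h * (1 - h)      # delta propagated back one layer
        W2 += (lr * h.T @ d_out).astype(dtype)  # updates stay at the chosen precision
        W1 += (lr * X.T @ d_h).astype(dtype)
    return float(np.mean(err ** 2))

for dt in (np.float64, np.float32, np.float16):
    print(dt.__name__, train(dtype=dt))
```

Rerunning with different `seed` values at `np.float16` is one way to probe the initialization sensitivity the abstract mentions.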
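The two-computations-per-word idea corresponds to what is now called SWAR (SIMD within a register). The following is a minimal integer sketch of the packing idea, again not taken from the thesis (which concerns a hardware realization, and floating-point lanes are considerably more involved): two 32-bit additions are carried out by a single 64-bit addition, with the top bit of each lane patched up so that no carry leaks between the lanes.

```python
H = 0x8000_0000_8000_0000   # top bit of each 32-bit lane
LO = 0xFFFF_FFFF

def pack(hi, lo):
    """Pack two 32-bit values into one 64-bit word."""
    return ((hi & LO) << 32) | (lo & LO)

def swar_add(x, y):
    # Clear each lane's top bit so the single add cannot carry across lanes,
    # then restore the top bits with an XOR (their sum without carry-out).
    return ((x & ~H) + (y & ~H)) ^ ((x ^ y) & H)

s = swar_add(pack(7, 100), pack(5, 23))
print(s >> 32, s & LO)  # 12 123
```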

Item Type: Thesis (Master's Thesis / Essay)
Degree programme: Computing Science
Thesis type: Master's Thesis / Essay
Language: English
Date Deposited: 15 Feb 2018 07:29
Last Modified: 15 Feb 2018 07:29
URI: https://fse.studenttheses.ub.rug.nl/id/eprint/8789
