Kam, Floris de (2024) Memory Consolidation by Deep-Q Forward-Forward Learning in Games. Bachelor's Thesis, Artificial Intelligence.
Abstract
Neural networks have been pivotal in transforming various fields through machine learning techniques. Training these networks relies heavily on the backpropagation algorithm, which, despite its success, has several limitations, including large memory requirements and limited biological plausibility. This thesis implements the novel Forward-Forward (FF) algorithm, which optimizes a neural network locally by performing two forward passes. FF is tested on blackjack to explore its performance in simple games. This research extends the FF algorithm with Deep-Q Forward-Forward Learning (DQFFL), which combines FF with reinforcement learning to enable an FF neural network to learn on the fly. The results show that FF performs comparably to traditional backpropagation while reducing memory requirements and improving biological plausibility. DQFFL's performance, evaluated on two simple game environments, indicates a promising direction for future research. This study contributes to the field of neuromorphic computing by presenting an alternative local learning rule for neural networks in reinforcement learning.
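To make the abstract's description concrete: in the Forward-Forward algorithm each layer is trained by a purely local rule, with one forward pass on "positive" (real) data and one on "negative" data, adjusting the layer so its "goodness" (sum of squared activations) exceeds a threshold for positive inputs and falls below it for negative ones. The sketch below is a minimal, illustrative single-layer version of that idea in NumPy; it is not the thesis's implementation, and all names and hyperparameters (learning rate, threshold, toy data) are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One Forward-Forward layer trained with a local goodness objective."""

    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.lr = lr
        self.theta = threshold  # goodness threshold

    def _normalize(self, x):
        # Normalize the input so only its direction carries information
        # forward, as in Hinton's FF formulation.
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    def forward(self, x):
        return np.maximum(0.0, self._normalize(x) @ self.W)

    def goodness(self, x):
        # "Goodness" = sum of squared activations per sample.
        return np.sum(self.forward(x) ** 2, axis=1)

    def train_step(self, x_pos, x_neg):
        # Local update: push goodness above theta on positive data
        # (sign = +1) and below theta on negative data (sign = -1),
        # minimizing -log sigmoid(sign * (goodness - theta)).
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            xn = self._normalize(x)
            h = np.maximum(0.0, xn @ self.W)
            g = np.sum(h ** 2, axis=1)
            p = 1.0 / (1.0 + np.exp(-sign * (g - self.theta)))
            # Gradient of the loss w.r.t. W, flowing only through this
            # layer (no backpropagation across layers).
            coeff = (sign * (1.0 - p))[:, None]
            self.W += self.lr * (xn.T @ (coeff * 2.0 * h)) / len(x)

# Toy check: positive and negative data lie in opposite directions.
layer = FFLayer(8, 16)
x_pos = rng.normal(1.0, 0.2, (64, 8))
x_neg = rng.normal(-1.0, 0.2, (64, 8))
for _ in range(200):
    layer.train_step(x_pos, x_neg)
print(layer.goodness(x_pos).mean() > layer.goodness(x_neg).mean())
```

Because each layer optimizes its own objective, no backward pass or stored activations are needed across layers, which is the source of the memory savings the abstract mentions.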
| Item Type: | Thesis (Bachelor's Thesis) |
| --- | --- |
| Supervisor name: | Timmermans, J.J.M.A. |
| Degree programme: | Artificial Intelligence |
| Thesis type: | Bachelor's Thesis |
| Language: | English |
| Date Deposited: | 01 Aug 2024 14:13 |
| Last Modified: | 01 Aug 2024 14:13 |
| URI: | https://fse.studenttheses.ub.rug.nl/id/eprint/33800 |