
How Far Ahead Should One Look? Offline Reinforcement Learning For Instructional Sequencing

Ritas, Panagiotis (2023) How Far Ahead Should One Look? Offline Reinforcement Learning For Instructional Sequencing. Master's Thesis / Essay, Artificial Intelligence.



Deep reinforcement learning holds great promise for a variety of optimization problems. Education, and specifically instructional sequencing, is one such problem: instructions, such as feedback or exercises, are adaptively sequenced for a pupil to help them learn better. Deep reinforcement learning has been used extensively for instructional sequencing. Myopic models, which aim only to select the best next instruction, are also common. We aim to determine the optimal look-ahead horizon for a model, and whether myopic models outperform horizon-based models at maximizing a pupil's learning. We trained two offline reinforcement learning (actor-critic) models with horizons of 10 and 20 on a dataset extracted from historical pupil interactions on an online educational platform. The offline reinforcement learning models were evaluated in a purpose-built simulator, where they were run and compared against a myopic recurrent neural network that had historically served exercises on the platform. Results show that models with shorter horizons tend to induce better pedagogical policies, with the myopic recurrent neural network yielding the best policies for the pupil's learning.
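The horizon comparison at the heart of the abstract can be sketched as follows. The function, the candidate sequences, and the discount factor below are illustrative assumptions for exposition only, not details taken from the thesis:

```python
# Illustrative sketch (not the thesis's method): scoring candidate
# instruction sequences with different look-ahead horizons.

def truncated_return(rewards, horizon, gamma=0.95):
    """Discounted sum of the first `horizon` predicted rewards."""
    return sum(gamma ** t * r for t, r in enumerate(rewards[:horizon]))

# Hypothetical predicted per-step learning gains for two exercise sequences.
seq_a = [0.9, 0.1, 0.1, 0.1, 0.1]  # large immediate gain, little afterwards
seq_b = [0.2, 0.6, 0.6, 0.6, 0.6]  # smaller now, larger gains later

# A myopic policy (horizon 1) prefers sequence A...
assert truncated_return(seq_a, 1) > truncated_return(seq_b, 1)
# ...while a horizon-5 policy prefers sequence B.
assert truncated_return(seq_a, 5) < truncated_return(seq_b, 5)
```

The sketch only shows why the choice of horizon changes which instruction a policy picks; the thesis's finding is that, on its simulator, the shorter-horizon and myopic policies performed best.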

Item Type: Thesis (Master's Thesis / Essay)
Supervisor name: Borst, J.P. and Valdenegro Toro, M.A.
Degree programme: Artificial Intelligence
Thesis type: Master's Thesis / Essay
Language: English
Date Deposited: 13 Feb 2023 12:42
Last Modified: 13 Feb 2023 12:42
