Autoencoder-based Deep Reinforcement Learning for ground-level walking of Human Musculoskeletal models

Falzari, Massimiliano (2022) Autoencoder-based Deep Reinforcement Learning for ground-level walking of Human Musculoskeletal models. Bachelor's Thesis, Artificial Intelligence.

Full text: thesis.pdf (5MB)
Permission form: toestemming.pdf (125kB, restricted to registered users only)
Abstract

This paper proposes using autoencoder-based deep reinforcement learning (AE-DRL) architectures for ground-level walking of human musculoskeletal models. It compares an undercomplete autoencoder (AE) and a variational autoencoder (VAE) in the context of physics-based simulations. The deep reinforcement learning (DRL) algorithm used is Proximal Policy Optimization with Imitation Learning (PPO+IL). The architectures are trained with a two-phase approach. First, the autoencoder-based latent space is learned from gathered simulation data. Then, the DRL agent with the pre-trained encoder is trained to learn a walking policy. The results show that the AE-DRL methods learn more efficiently than standard DRL with the same observation space. Compared to the baseline (i.e. PPO+IL), AE-PPO+IL achieved a 131% longer mean episode duration and a 23% higher mean cumulative reward. VAE-PPO+IL, on the other hand, achieved a 102% longer mean episode duration and a 9% higher mean cumulative reward. Generally, the AE showed better results than the VAE with respect to reconstruction error (measured by the mean square error (MSE)) and DRL mean cumulative reward. The VAE, in contrast, performed better in terms of root mean square error (RMSE) with respect to the imitation data.
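To make the two-phase approach concrete, below is a minimal sketch in PyTorch (an assumed framework; the abstract does not name one). All dimensions, layer sizes, and the choice to freeze the encoder during DRL training are hypothetical illustrations rather than the thesis's actual implementation, and the PPO+IL agent is reduced to a bare policy head for brevity.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the thesis's actual observation/latent dimensions may differ.
OBS_DIM, LATENT_DIM = 100, 16

class UndercompleteAE(nn.Module):
    """Undercomplete autoencoder: a bottleneck forces a compressed latent code."""
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, obs_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Phase 1: learn the latent space from gathered simulation data by
# minimizing the reconstruction MSE.
ae = UndercompleteAE(OBS_DIM, LATENT_DIM)
optim = torch.optim.Adam(ae.parameters(), lr=1e-3)
sim_obs = torch.randn(1024, OBS_DIM)  # stand-in for logged simulator states
for _ in range(100):
    recon = ae(sim_obs)
    loss = nn.functional.mse_loss(recon, sim_obs)
    optim.zero_grad()
    loss.backward()
    optim.step()

# Phase 2: the pre-trained encoder feeds compressed observations to the DRL
# policy (PPO+IL in the thesis); only the wiring is sketched here. Whether
# the encoder stays frozen or is fine-tuned during DRL is an assumption.
for p in ae.encoder.parameters():
    p.requires_grad = False
policy = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.Tanh(), nn.Linear(64, 8))
obs = torch.randn(1, OBS_DIM)
action_logits = policy(ae.encoder(obs))
```

The VAE variant would differ in Phase 1 only: the encoder outputs a mean and log-variance, the latent code is sampled via the reparameterization trick, and a KL-divergence term is added to the reconstruction loss.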

Item Type: Thesis (Bachelor's Thesis)
Supervisor name: Carloni, R.
Degree programme: Artificial Intelligence
Thesis type: Bachelor's Thesis
Language: English
Date Deposited: 16 Aug 2022 08:54
Last Modified: 16 Aug 2022 08:57
URI: https://fse.studenttheses.ub.rug.nl/id/eprint/28389
