
Automatic Robot Learning Using Reinforcement Learning

Shantia, A. (2011) Automatic Robot Learning Using Reinforcement Learning. Master's Thesis / Essay, Computing Science.



It is extremely difficult to teach robots skills that humans take for granted. Understanding its surroundings, localizing itself, and navigating safely through an environment are all hard tasks for a robot. Current research on navigation focuses mainly on mapping a fixed, empty environment using depth sensor data and localizing the robot based on its odometry, sensory input, and the map. The most widely used approach maps the environment with a 2D laser range finder and localizes the robot with iterative closest point algorithms. There are also studies that localize and map the environment using 3D laser data and the scale-invariant feature transform to correct the robot's odometry. However, these methods rely heavily on the precision of the depth sensors, perform poorly in outdoor environments, and require the environment to remain fixed during training. In the presented method, the robot organizes a set of visual keywords that describe its perception of the environment, similar to human topological navigation. The results of its experiences are processed by a model that finds cause-and-effect relationships between executed actions and changes in the environment, which allows the robot to learn from the consequences of its actions in the real world. The robot is robust to minor changes in the environment during both the training and testing phases. More specifically, during the training phase the robot takes several pictures of the environment with an RGB camera. The raw images are processed with the histogram of oriented gradients (HoG) method to extract salient edges in the major directions, and clustering the HoG descriptors groups scenes with similar visual appearance. Furthermore, a world model is built from the observations and actions taken during training.
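The perception pipeline described above (orientation histograms of image gradients, then clustering of similar scenes) can be sketched in pure Python. This is a minimal illustration, not the thesis' implementation: it uses a single global histogram rather than HoG's cell/block normalisation, and a plain k-means with first-k initialisation; all function names and parameters here are assumptions for the sketch.

```python
import math

def hog_descriptor(image, n_bins=9):
    """Global histogram of oriented gradients for a 2D grayscale image
    (a simplified stand-in for the HoG features described above)."""
    h, w = len(image), len(image[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]   # horizontal gradient
            gy = image[y + 1][x] - image[y - 1][x]   # vertical gradient
            ang = math.atan2(gy, gx) % math.pi       # unsigned edge orientation
            b = min(int(ang / math.pi * n_bins), n_bins - 1)
            hist[b] += math.hypot(gx, gy)            # magnitude-weighted vote
    total = sum(hist) or 1.0
    return [v / total for v in hist]                 # L1-normalised descriptor

def kmeans(descriptors, k, iters=20):
    """Plain k-means over scene descriptors: groups visually similar scenes.
    Initialises centers from the first k points for determinism."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centers = [list(d) for d in descriptors[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for d in descriptors:
            groups[min(range(k), key=lambda i: dist2(d, centers[i]))].append(d)
        for i, g in enumerate(groups):
            if g:                                    # keep old center if empty
                centers[i] = [sum(col) / len(g) for col in zip(*g)]
    labels = [min(range(k), key=lambda i: dist2(d, centers[i]))
              for d in descriptors]
    return centers, labels
```

A vertical edge dominates the 0-radian bin and a horizontal edge the bin near pi/2, so images of the same scene type land in the same cluster.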
Finally, during testing, the robot selects the actions that maximize the probability of reaching its goal using model-based reinforcement learning. We tested the method on a Pioneer 2 robot in the AI department's robotics lab, navigating to a user-selected goal from the robot's initial position.
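The planning step can be sketched as a tabular world model learned from (state, action, next-state) experience, followed by value iteration to pick goal-directed actions. This is a minimal model-based RL sketch, not the thesis' exact algorithm: the integer state ids (standing in for the visual clusters), the reward scheme (1 on reaching the goal, 0 elsewhere), and the discount factor are all assumptions.

```python
from collections import defaultdict

class WorldModel:
    """Tabular cause-and-effect model: counts how often each (state, action)
    pair led to each next state, giving empirical transition probabilities."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, s, a, s2):
        self.counts[(s, a)][s2] += 1

    def prob(self, s, a):
        c = self.counts[(s, a)]
        total = sum(c.values())
        return {s2: n / total for s2, n in c.items()}

def plan(model, states, actions, goal, gamma=0.95, iters=200):
    """Value iteration on the learned model: reward 1 on reaching the goal,
    0 elsewhere; returns state values and the greedy policy."""
    def q_value(V, s, a):
        return sum(p * ((1.0 if s2 == goal else 0.0) + gamma * V[s2])
                   for s2, p in model.prob(s, a).items())

    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            if s != goal:
                qs = [q_value(V, s, a) for a in actions if model.prob(s, a)]
                V[s] = max(qs) if qs else 0.0
    policy = {s: max((a for a in actions if model.prob(s, a)),
                     key=lambda a: q_value(V, s, a))
              for s in states
              if s != goal and any(model.prob(s, a) for a in actions)}
    return V, policy
```

On a toy corridor of four states with deterministic "fwd"/"back" transitions observed during training, the greedy policy drives every state toward the goal.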

Item Type: Thesis (Master's Thesis / Essay)
Degree programme: Computing Science
Thesis type: Master's Thesis / Essay
Language: English
Date Deposited: 15 Feb 2018 07:46
Last Modified: 15 Feb 2018 07:46
