Navarro, Roberto (2020) Learning to Grasp 3D Objects Using Deep Convolutional Neural Networks. Bachelor's Thesis, Artificial Intelligence.
Full text: AI_BA_2020_RobertoNavarro.pdf (627 kB)
Abstract
In this paper the performance of the auto-encoder Generative Grasp Convolutional Neural Network (GGCNN) architecture proposed by Morrison et al. (2018) is evaluated on object classification and 3D object grasping tasks. The GGCNN is trained on the Cornell dataset. The output of the encoder part of the network is used as the object representation for the classification task. The full architecture is used for the grasping task to identify the most suitable grasp, given an orthographic image of the object constructed from its point cloud with the Global Orthographic Object Descriptor (GOOD). The ModelNet10 and Restaurant-Object datasets were used to study the impact of the K value (of the k-NN algorithm) and the bin parameter of the network on classification accuracy; no significant difference in performance was found across 19 unique configurations for the ModelNet10 dataset and 18 configurations for the Restaurant-Object dataset. For the grasping task, a simulation was developed in PyBullet in which a gripper executes the best grasp candidate and the success or failure of the grasp is recorded. A success rate of 83.3% was achieved over the objects presented, while the success rate over all attempted grasps was 47%.
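The classification setup described in the abstract can be illustrated compactly. The following is a minimal sketch, not the thesis code: it assumes a Keras-style `encoder` model (the encoder half of the GGCNN) whose flattened output feature maps serve as the object representation, and it uses scikit-learn's `KNeighborsClassifier` for the k-NN step; the K value studied in the thesis corresponds to the `n_neighbors` parameter here, and the helper names are hypothetical.

```python
# Hypothetical illustration of k-NN classification on encoder features;
# the encoder model and data loading are stand-ins, not the thesis code.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def extract_features(encoder, images):
    """Run images through the encoder and flatten its feature maps
    into one feature vector per image."""
    feats = encoder.predict(images)  # assumed Keras-style encoder model
    return feats.reshape(len(images), -1)


def evaluate_knn(encoder, train_imgs, train_labels, test_imgs, test_labels, k=3):
    """Fit a k-NN classifier on encoder features and return test accuracy."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(extract_features(encoder, train_imgs), train_labels)
    preds = knn.predict(extract_features(encoder, test_imgs))
    return float(np.mean(preds == test_labels))
```

Varying `k` (and, upstream, the bin parameter used when constructing the GOOD orthographic images) and re-running `evaluate_knn` would reproduce the kind of configuration sweep reported for ModelNet10 and Restaurant-Object.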
| Item Type: | Thesis (Bachelor's Thesis) |
|---|---|
| Supervisor name: | Mohades Kasaei, S.H. |
| Degree programme: | Artificial Intelligence |
| Thesis type: | Bachelor's Thesis |
| Language: | English |
| Date Deposited: | 06 Aug 2020 06:52 |
| Last Modified: | 06 Aug 2020 06:52 |
| URI: | https://fse.studenttheses.ub.rug.nl/id/eprint/23009 |