TY - GEN
T1 - Depth hole filling based on deep learning for robust grasp detection
AU - Seo, Sungwon
AU - Luong, Anh Tuan
AU - Auh, Eugene
AU - Moon, Hyungpil
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/12
Y1 - 2021/7/12
N2 - In recent decades, vision-based grasp detection for a diverse range of novel objects has been developed. To achieve full performance, it requires high-quality depth images. However, commodity depth cameras often produce invalid depth pixels due to dark or shiny surfaces and edges between the foreground and background of the scene. To address this problem, we propose a deep-learning-based depth hole filling method. The depth hole filling network learns to predict a ground-truth depth map from a given sparse depth map. To train the network, we generate artificial sparse depth images from Dex-Net 2.0 by simulating common causes of depth-hole generation in commodity depth cameras. The proposed model fills depth holes with an RMSE of 7.1 ± 4.1 mm. Grasp detection performance using our model on sparse depth images is comparable to the performance obtained with ground-truth images.
AB - In recent decades, vision-based grasp detection for a diverse range of novel objects has been developed. To achieve full performance, it requires high-quality depth images. However, commodity depth cameras often produce invalid depth pixels due to dark or shiny surfaces and edges between the foreground and background of the scene. To address this problem, we propose a deep-learning-based depth hole filling method. The depth hole filling network learns to predict a ground-truth depth map from a given sparse depth map. To train the network, we generate artificial sparse depth images from Dex-Net 2.0 by simulating common causes of depth-hole generation in commodity depth cameras. The proposed model fills depth holes with an RMSE of 7.1 ± 4.1 mm. Grasp detection performance using our model on sparse depth images is comparable to the performance obtained with ground-truth images.
UR - https://www.scopus.com/pages/publications/85112482042
U2 - 10.1109/UR52253.2021.9494670
DO - 10.1109/UR52253.2021.9494670
M3 - Conference contribution
AN - SCOPUS:85112482042
T3 - 2021 18th International Conference on Ubiquitous Robots, UR 2021
SP - 194
EP - 197
BT - 2021 18th International Conference on Ubiquitous Robots, UR 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th International Conference on Ubiquitous Robots, UR 2021
Y2 - 12 July 2021 through 14 July 2021
ER -