Applying Bilateral Guided Multi-Viewed Fusion on Asymmetrical 3D Convolution Networks for 3D LiDAR semantic segmentation

Tai Huu Phuong Tran, Jae Wook Jeon

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We present a novel approach to 3D semantic segmentation using a LiDAR sensor. In this work, we focus on improving the accuracy of the cylindrical partition and asymmetrical 3D convolution networks by modifying their dimension-decomposition-based context modeling module and asymmetrical residual block. The initial version simply applies multiplication and addition operators to combine two feature branches. In our modification, we apply the bilateral guided multi-viewed fusion module to provide better features for the classification stages. We trained and tested our model on the SemanticKITTI dataset; our implementation achieves better accuracy than the default asymmetrical 3D convolution networks, which use cylindrical voxels for the point cloud representation.
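The abstract contrasts the baseline's combination of two feature branches (plain multiplication and addition) with a guided fusion that lets each branch modulate the other. The sketch below is a loose, hypothetical illustration of that contrast on toy feature vectors — it is not the paper's actual module (which operates on cylindrical voxel features inside the network), and both the operator arrangement in `baseline_fusion` and the sigmoid gating in `bilateral_gated_fusion` are assumptions made for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def baseline_fusion(a, b):
    """One plausible arrangement of the 'multiplication and addition
    operators' the abstract mentions: elementwise product plus the sum
    of the two branches (hypothetical, for illustration only)."""
    return [ai * bi + ai + bi for ai, bi in zip(a, b)]

def bilateral_gated_fusion(a, b):
    """Hypothetical sketch of a bilateral guided fusion: each branch
    produces a sigmoid gate that scales the other branch before
    summation, so each branch 'guides' how much of the other is kept."""
    return [ai * sigmoid(bi) + bi * sigmoid(ai) for ai, bi in zip(a, b)]

# Usage on two toy 4-dimensional feature vectors
feat_a = [0.5, -1.0, 2.0, 0.0]
feat_b = [1.0, 0.5, -0.5, 3.0]
print(baseline_fusion(feat_a, feat_b))
print(bilateral_gated_fusion(feat_a, feat_b))
```

The gated variant is data-dependent: when one branch's activation is strongly negative, its gate suppresses the other branch's contribution instead of mixing the two branches with fixed arithmetic.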

Original language: English
Title of host publication: 2022 IEEE International Conference on Consumer Electronics-Asia, ICCE-Asia 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665464345
DOIs
State: Published - 2022
Externally published: Yes
Event: 2022 IEEE International Conference on Consumer Electronics-Asia, ICCE-Asia 2022 - Yeosu, Korea, Republic of
Duration: 26 Oct 2022 – 28 Oct 2022

Publication series

Name: 2022 IEEE International Conference on Consumer Electronics-Asia, ICCE-Asia 2022

Conference

Conference: 2022 IEEE International Conference on Consumer Electronics-Asia, ICCE-Asia 2022
Country/Territory: Korea, Republic of
City: Yeosu
Period: 26/10/22 – 28/10/22

Keywords

  • 3D Semantic Segmentation
  • Asymmetrical 3D Convolution Network
  • Bilateral Guided Multi-Viewed Fusion
