HybridNet: Indoor segmentation with range and voxel fusion

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citations

Abstract

In this paper, we propose HybridNet, which improves indoor semantic segmentation by fusing 2D and 3D features. A voxel-based method and a projection-based method are adopted to produce results from a single scan. Our approach consists of two parallel networks that extract features in each dimension and merge them in a fusion network. In the fusion network, the voxel blocks and 2D feature maps extracted by each branch are fused onto the voxel grid and then processed by convolution. For effective training of the 2D network, we apply data augmentation based on coordinate-system rotation. In addition, a multi-loss with weights applied to each dimension improves performance and yields better results than a single loss. Because each branch can be replaced by other 2D and 3D structures, the proposed method generalizes, and its performance improves as the individual 2D and 3D networks improve.
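A minimal sketch of the weighted multi-loss idea described in the abstract, not the authors' implementation: each branch (2D projection and 3D voxel) is supervised separately and the two losses are combined with per-dimension weights. The class name, the weight values, and the use of cross-entropy are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WeightedMultiLoss(nn.Module):
    """Weighted sum of the 2D-branch and 3D-branch segmentation losses (illustrative)."""

    def __init__(self, w2d: float = 0.5, w3d: float = 1.0, ignore_index: int = -100):
        super().__init__()
        self.w2d = w2d  # weight on the 2D (range/projection) branch loss -- assumed value
        self.w3d = w3d  # weight on the 3D (voxel) branch loss -- assumed value
        self.ce = nn.CrossEntropyLoss(ignore_index=ignore_index)

    def forward(self, logits_2d, labels_2d, logits_3d, labels_3d):
        # Each branch is supervised in its own representation; the total loss is
        # a weighted sum rather than a single loss on the fused output.
        loss_2d = self.ce(logits_2d, labels_2d)
        loss_3d = self.ce(logits_3d, labels_3d)
        return self.w2d * loss_2d + self.w3d * loss_3d
```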

Original language: English
Title of host publication: 2021 21st International Conference on Control, Automation and Systems, ICCAS 2021
Publisher: IEEE Computer Society
Pages: 1509-1515
Number of pages: 7
ISBN (Electronic): 9788993215212
DOIs
State: Published - 2021
Event: 21st International Conference on Control, Automation and Systems, ICCAS 2021 - Jeju, Korea, Republic of
Duration: 12 Oct 2021 - 15 Oct 2021

Publication series

Name: International Conference on Control, Automation and Systems
Volume: 2021-October
ISSN (Print): 1598-7833

Conference

Conference: 21st International Conference on Control, Automation and Systems, ICCAS 2021
Country/Territory: Korea, Republic of
City: Jeju
Period: 12/10/21 - 15/10/21

Keywords

  • 3D Vision
  • PointCloud
  • Semantic Segmentation

