Camera-wise Training for Enhanced Omni-directional 2D Object Detection

  • Hyung Joon Jeon
  • Duong Nguyen Ngoc Tran
  • Long Hoang Pham
  • Huy Hung Nguyen
  • Tai Huu Phuong Tran
  • Jae Wook Jeon

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper, we propose a method to perform training and inference with multiple instances of the same deep neural network architecture on images taken from cameras facing different directions. Across multiple cameras, depending on each camera's directional characteristics, the objects it views can form slightly different distributions of visual features. Given this, we emphasize the importance of camera-wise training of multiple instances of a given deep neural network for object detection. On the Waymo Open Perception Dataset, we used multiple instances of the YOLOv5x6 architecture and trained each of them on a single camera. This per-camera training scheme shows better training progression on the Training Set, and at inference time achieves an AP/L1 as high as 0.6679 on the Testing Set.
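The scheme described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' code: `CameraWiseDetector`, `model_factory`, and the `fit`/`predict` interface are assumed names, and the camera list follows the five camera directions of the Waymo Open Dataset.

```python
# Hypothetical sketch of camera-wise training and inference: one detector
# instance per camera direction, as described in the abstract.

# The five camera directions in the Waymo Open Dataset.
CAMERAS = ["FRONT", "FRONT_LEFT", "FRONT_RIGHT", "SIDE_LEFT", "SIDE_RIGHT"]


class CameraWiseDetector:
    """Holds one model instance per camera and routes images accordingly."""

    def __init__(self, model_factory):
        # model_factory is assumed to construct a fresh detector
        # (e.g. a YOLOv5x6 instance) for each camera.
        self.models = {cam: model_factory(cam) for cam in CAMERAS}

    def train(self, samples):
        # Partition training samples by source camera, then train each
        # instance only on images from its own camera.
        by_camera = {cam: [] for cam in CAMERAS}
        for image, labels, camera in samples:
            by_camera[camera].append((image, labels))
        for cam, subset in by_camera.items():
            self.models[cam].fit(subset)

    def detect(self, image, camera):
        # At inference time, dispatch each frame to the instance that was
        # trained on the matching camera direction.
        return self.models[camera].predict(image)
```

The key design point is that the per-camera partition lets each instance specialize on the feature distribution of one viewing direction instead of averaging over all of them.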

Original language: English
Title of host publication: IECON 2022 - 48th Annual Conference of the IEEE Industrial Electronics Society
Publisher: IEEE Computer Society
ISBN (Electronic): 9781665480253
DOIs
State: Published - 2022
Externally published: Yes
Event: 48th Annual Conference of the IEEE Industrial Electronics Society, IECON 2022 - Brussels, Belgium
Duration: 17 Oct 2022 – 20 Oct 2022

Publication series

Name: IECON Proceedings (Industrial Electronics Conference)
Volume: 2022-October
ISSN (Print): 2162-4704
ISSN (Electronic): 2577-1647

Conference

Conference: 48th Annual Conference of the IEEE Industrial Electronics Society, IECON 2022
Country/Territory: Belgium
City: Brussels
Period: 17/10/22 – 20/10/22

Keywords

  • autonomous driving
  • camera-wise training
  • object detection
  • YOLOv5x6
