A hybrid matching algorithm based on contour and motion information for depth estimation

Tae Woo Kim, Chunsoo Ahn, Taemin Cho, Jitae Shin, Hokyom Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Depth estimation is an important part of free viewpoint television (FTV) because the accuracy of depth information directly affects the synthesized video quality at virtual viewpoints. However, generating an accurate depth map is a computationally complex process, which makes real-time implementation challenging. To obtain accurate depth information with low complexity, a hybrid matching technique is proposed. It consists of three matching modes: pixel matching, 3x3 block matching, and 5x5 block matching. By using contour and motion information for matching mode selection, the proposed technique complies better with the human visual system, which is more sensitive to moving regions than to static ones. Experimental results show that the proposed algorithm not only enhances the synthesized visual quality but also reduces the complexity of the matching process.
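
The abstract does not include an implementation; the following is a minimal, hypothetical sketch (in Python) of how a hybrid matching scheme of this kind could look. The specific mode-selection rule, the SAD matching cost, the gradient-based contour cue, the frame-difference motion cue, the rectified stereo assumption, and all thresholds below are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch of hybrid matching for disparity/depth estimation.
    # Assumptions: rectified stereo pair, SAD cost, gradient threshold as the
    # "contour" cue, frame differencing as the "motion" cue.
    import numpy as np

    def select_window(contour: bool, moving: bool) -> int:
        """Pick a matching window size (mode) from contour/motion cues.

        Hypothetical rule: moving contour pixels get precise pixel matching,
        static smooth regions get the cheap 5x5 block matching.
        """
        if moving and contour:
            return 1          # pixel matching
        if moving or contour:
            return 3          # 3x3 block matching
        return 5              # 5x5 block matching

    def hybrid_disparity(left, right, prev_left, max_disp=32,
                         edge_thr=20.0, motion_thr=10.0):
        """Estimate a disparity map with per-pixel window selection (SAD)."""
        h, w = left.shape
        # Contour cue: gradient magnitude of the left view.
        gy, gx = np.gradient(left.astype(np.float32))
        contour = np.hypot(gx, gy) > edge_thr
        # Motion cue: absolute difference against the previous left frame.
        moving = np.abs(left.astype(np.float32)
                        - prev_left.astype(np.float32)) > motion_thr

        disp = np.zeros((h, w), dtype=np.int32)
        pad = 2  # enough margin for the largest (5x5) window
        L = np.pad(left.astype(np.float32), pad, mode='edge')
        R = np.pad(right.astype(np.float32), pad, mode='edge')

        for y in range(h):
            for x in range(w):
                win = select_window(contour[y, x], moving[y, x])
                r = win // 2
                ref = L[y + pad - r:y + pad + r + 1,
                        x + pad - r:x + pad + r + 1]
                best_cost, best_d = np.inf, 0
                # Search along the horizontal epipolar line.
                for d in range(min(max_disp, x) + 1):
                    cand = R[y + pad - r:y + pad + r + 1,
                             x - d + pad - r:x - d + pad + r + 1]
                    cost = np.abs(ref - cand).sum()
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y, x] = best_d
        return disp

The disparity map produced this way can then be converted to depth using the camera baseline and focal length; the per-pixel window selection is where the complexity saving described in the abstract would come from, since static, textureless regions are matched with fewer, larger windows.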

Original language: English
Title of host publication: Proceedings of the 7th International Conference on Ubiquitous Information Management and Communication, ICUIMC 2013
DOIs
State: Published - 2013
Event: 7th International Conference on Ubiquitous Information Management and Communication, ICUIMC 2013 - Kota Kinabalu, Malaysia
Duration: 17 Jan 2013 - 19 Jan 2013

Publication series

Name: Proceedings of the 7th International Conference on Ubiquitous Information Management and Communication, ICUIMC 2013

Conference

Conference: 7th International Conference on Ubiquitous Information Management and Communication, ICUIMC 2013
Country/Territory: Malaysia
City: Kota Kinabalu
Period: 17/01/13 - 19/01/13

Keywords

  • Contour and motion information
  • Depth estimation
  • Free-viewpoint television (FTV)
  • Human visual system (HVS)
  • Hybrid matching (HM)
