Structured light based depth edge detection for object shape recovery

Cheolhwon Kim, Jiyoung Park, Juneho Yi, Matthew Turk

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

13 Scopus citations

Abstract

This research features a novel approach that efficiently detects depth edges in real-world scenes. Depth edges play a very important role in many computer vision problems because they represent object contours. We strategically project structured light and exploit distortion of the light pattern in the structured light image along depth discontinuities to reliably detect depth edges. Depending on the distance from the camera or projector, distortion along depth discontinuities may not occur, or may not be large enough to detect. For practical application of the proposed approach, we present methods that guarantee the occurrence of the distortion along depth discontinuities for a continuous range of object locations. Experimental results show that the proposed method accurately detects depth edges of human hand and body shapes as well as general objects.
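The abstract's core idea is that a depth discontinuity distorts a projected stripe pattern, so a depth edge can be localized where the observed pattern breaks. The NumPy sketch below is our own toy illustration of that principle, not the authors' implementation: it synthesizes a striped structured-light image in which a hypothetical near object shifts the stripes, then finds the object contour as a jump in per-column stripe phase. The stripe model, image sizes, disparity, and threshold are all assumptions.

```python
import numpy as np

# Toy structured-light image: horizontal sinusoidal stripes on a far
# background; a near object (right half of the frame) shifts the
# stripes vertically by a small disparity. All sizes are illustrative.
H, W, period = 120, 160, 8      # image height/width, stripe period (px)
disparity = 3                   # stripe shift caused by the near object
y = np.arange(H)[:, None]

img = np.tile(np.sin(2 * np.pi * y / period), (1, W))
img[:, W // 2:] = np.tile(np.sin(2 * np.pi * (y + disparity) / period),
                          (1, W - W // 2))

# Estimate one stripe-phase value per column by demodulating against a
# complex carrier at the projected stripe frequency.
carrier = np.exp(-2j * np.pi * y[:, 0] / period)
phase = np.angle(img.T @ carrier)

# A depth edge shows up as a jump in stripe phase between neighboring
# columns; wrap the difference into (-pi, pi] before thresholding.
dphi = np.abs(np.angle(np.exp(1j * np.diff(phase))))
edges = np.where(dphi > 0.5)[0]  # column indices of detected depth edges
print(edges)                     # the single edge at the object contour
```

In this synthetic case the detector reports exactly one edge column, at the boundary between the background and the shifted region; a real system would additionally have to handle the paper's concern that the shift can vanish or become too small at some object distances.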

Original language: English
Title of host publication: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005 - Workshops
Publisher: IEEE Computer Society
ISBN (Electronic): 0769526608
DOIs
State: Published - 2005
Event: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005 - Workshops - San Diego, United States
Duration: 21 Sep 2005 → 23 Sep 2005

Publication series

Name: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Volume: 2005-September
ISSN (Print): 2160-7508
ISSN (Electronic): 2160-7516

Conference

Conference: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005 - Workshops
Country/Territory: United States
City: San Diego
Period: 21/09/05 → 23/09/05
