
Analyzing Visible Articulatory Movements in Speech Production for Speech-Driven 3D Facial Animation

  • Hyung Kyu Kim
  • Sangmin Lee
  • Hak Gu Kim
  • Chung-Ang University
  • University of Illinois at Urbana-Champaign

Research output: Chapter in Book/Report/Conference proceeding > Conference contribution > peer-review

Abstract

Speech-driven 3D facial animation aims to generate realistic facial meshes from input speech signals. However, due to a limited understanding of visible articulatory movements, current state-of-the-art methods produce inaccurate lip and jaw movements. Traditional evaluation metrics, such as lip vertex error (LVE), often fail to reflect the quality of the visual results. Based on our observations, we reveal problems with existing evaluation metrics and argue that movements along each 3D axis should be evaluated separately. Comprehensive analysis shows that most recent methods struggle to precisely predict lip and jaw movements in 3D space.
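The abstract refers to lip vertex error (LVE) and to evaluating each 3D axis separately. Below is a minimal sketch of both quantities, assuming the definition of LVE commonly used in this literature (the maximal per-frame L2 distance over lip vertices, averaged over frames) and a hypothetical per-axis mean absolute error; the function names, array shapes, and lip_idx index set are illustrative and are not taken from the paper.

```python
import numpy as np

def lip_vertex_error(pred, gt, lip_idx):
    """LVE: maximal L2 distance over lip vertices per frame, averaged over frames.

    pred, gt : (T, V, 3) arrays of predicted / ground-truth vertex positions
    lip_idx  : indices of lip-region vertices
    """
    diff = pred[:, lip_idx, :] - gt[:, lip_idx, :]      # (T, L, 3)
    per_vertex = np.linalg.norm(diff, axis=-1)          # (T, L) L2 distances
    return per_vertex.max(axis=-1).mean()               # scalar

def per_axis_error(pred, gt, lip_idx):
    """Mean absolute lip-vertex error reported separately for the x, y, and z axes,
    so that error along one axis cannot be masked by accuracy along another."""
    diff = np.abs(pred[:, lip_idx, :] - gt[:, lip_idx, :])  # (T, L, 3)
    return diff.mean(axis=(0, 1))                           # (err_x, err_y, err_z)
```

A single aggregated score such as LVE can look small even when, say, the vertical (jaw-opening) component is systematically wrong; reporting the axes separately, as sketched above, makes such failure modes visible.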

Original language: English
Title of host publication: 2024 IEEE International Conference on Image Processing, ICIP 2024 - Proceedings
Publisher: IEEE Computer Society
Pages: 3575-3579
Number of pages: 5
ISBN (Electronic): 9798350349399
DOIs
State: Published - 2024
Externally published: Yes
Event: 31st IEEE International Conference on Image Processing, ICIP 2024 - Abu Dhabi, United Arab Emirates
Duration: 27 Oct 2024 - 30 Oct 2024

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880

Conference

Conference: 31st IEEE International Conference on Image Processing, ICIP 2024
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 27/10/24 - 30/10/24

Keywords

  • Lip synchronization
  • Speech-driven 3D facial animation
  • Visible articulatory
