AudioGenX: Explainability on Text-to-Audio Generative Models

Hyunju Kang, Geonhee Han, Yoonjae Jeong, Hogun Park

Research output: Contribution to journal › Conference article › peer-review

Abstract

Text-to-audio generation (TAG) models have achieved significant advances in generating audio conditioned on text descriptions. However, a critical challenge lies in the lack of transparency regarding how each textual input impacts the generated audio. To address this issue, we introduce AudioGenX, an Explainable AI (XAI) method that provides explanations for text-to-audio generation models by highlighting the importance of input tokens. AudioGenX optimizes an Explainer by leveraging factual and counterfactual objective functions to provide faithful explanations at the audio token level. This method offers a detailed and comprehensive understanding of the relationship between text inputs and audio outputs, enhancing both the explainability and trustworthiness of TAG models. Extensive experiments demonstrate the effectiveness of AudioGenX in producing faithful explanations, benchmarked against existing methods using novel evaluation metrics specifically designed for audio generation tasks.
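To make the factual/counterfactual idea from the abstract concrete, here is a minimal NumPy sketch. A toy linear "generator" stands in for the TAG model, and a soft mask over input tokens is optimized so that keeping the masked-in tokens reproduces the full output (factual objective) while the masked-out complement fails to (counterfactual objective), with an L1 penalty encouraging sparse explanations. The function name `explain_tokens`, the toy weights, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def explain_tokens(w, steps=500, lr=0.5, lam=0.5):
    """Learn a soft importance mask over tokens for a toy linear
    'generator' g(m) = sum_i m_i * w_i.

    Loss = factual residual^2 - counterfactual residual^2 + lam * |m|_1
    (a hypothetical stand-in for AudioGenX's Explainer objectives,
    not the paper's actual code).
    """
    theta = np.zeros(len(w))  # mask logits, m = sigmoid(theta)
    y = w.sum()               # generator output with nothing masked
    for _ in range(steps):
        m = sigmoid(theta)
        # factual: masked-in tokens alone should reproduce y
        f_res = (m * w).sum() - y
        # counterfactual: the masked-out complement should NOT reproduce y
        c_res = ((1 - m) * w).sum() - y
        # gradient of f_res**2 - c_res**2 + lam * m.sum() w.r.t. m
        grad_m = 2 * f_res * w + 2 * c_res * w + lam
        theta -= lr * grad_m * m * (1 - m)  # chain rule through sigmoid
    return sigmoid(theta)

# Tokens 0 and 2 dominate the toy generator's output, so they should
# receive high importance; tokens 1 and 3 should be masked out.
mask = explain_tokens(np.array([2.0, 0.01, 1.5, 0.02]))
print(np.round(mask, 3))
```

In this linear setting the two residual terms reward keeping influential tokens in the mask, while the sparsity penalty drives uninfluential tokens toward zero, mirroring how a faithful explanation separates important from unimportant inputs.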

Original language: English
Pages (from-to): 17733-17741
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 17
DOIs
State: Published - 11 Apr 2025
Event: 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 - Philadelphia, United States
Duration: 25 Feb 2025 – 4 Mar 2025
