Hallucination-Aware Optimization for Large Language Model-Empowered Communications

  • Yinqiu Liu
  • Guangyuan Liu
  • Ruichen Zhang
  • Dusit Niyato
  • Zehui Xiong
  • Dong In Kim
  • Kaibin Huang
  • Hongyang Du

Research output: Contribution to journal › Review article › peer-review

Abstract

Large Language Models (LLMs) have significantly advanced communications fields, such as Telecom Q&A, mathematical modeling, and optimization solving. However, LLMs encounter an inherent issue known as hallucination, i.e., generating fact-conflicting or irrelevant content. This problem critically undermines the applicability of LLMs in communication systems yet has not been systematically explored. Hence, this article provides a comprehensive review of LLM applications in communications, with a particular emphasis on hallucination mitigation. Specifically, we analyze hallucination causes and summarize hallucination mitigation strategies from both model- and system-based perspectives. Afterward, we review representative LLM-empowered communication schemes, detailing hallucination issues and comparing their mitigation strategies. Finally, we present a case study of a Telecom-oriented LLM that utilizes a novel hybrid approach to reduce hallucination and improve the service experience. On the model side, we publish a Telecom hallucination dataset and apply direct preference optimization to fine-tune LLMs, resulting in a 20.6% correct rate improvement. Moreover, we construct a mobile-edge mixture-of-experts architecture for optimal LLM expert activation. Our research aims to propel the field of LLM-empowered communications forward by detecting and minimizing hallucination impacts.
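The direct preference optimization (DPO) step mentioned in the abstract trains the LLM on preference pairs — here, a factual response preferred over a hallucinated one. The sketch below is a minimal, illustrative implementation of the standard DPO loss for a single preference pair; the function name, inputs, and numeric values are hypothetical and not taken from the paper, which we have not seen beyond this abstract.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair (illustrative sketch).

    Inputs are summed log-probabilities of the chosen (factual) and
    rejected (hallucinated) responses under the policy being fine-tuned
    and under a frozen reference model. beta scales the implicit reward.
    Lower loss means the policy prefers the chosen response more strongly
    than the reference model does.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log sigmoid(margin), written in a numerically direct form
    return math.log(1.0 + math.exp(-margin))

# Policy already favoring the factual answer yields a smaller loss
# than a policy favoring the hallucinated one (log-probs are made up).
loss_good = dpo_loss(-5.0, -9.0, -6.0, -6.0)
loss_bad = dpo_loss(-9.0, -5.0, -6.0, -6.0)
```

In practice this loss would be averaged over a batch of preference pairs from the hallucination dataset and minimized with gradient descent on the policy model's parameters, while the reference model stays frozen.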

Original language: English
Journal: IEEE Communications Magazine
DOIs
State: Accepted/In press - 2026
