
Artificial Intelligence-Driven Drafting of Chest X-Ray Reports: 2025 Position Statement From the Korean Society of Thoracic Radiology Based on an Expert Survey

Won Gi Jeong, Eui Jin Hwang, Gong Yong Jin, Ju Hyung Lee, Se Ri Kang, Hongseok Ko, Bomi Gil, Jin Hwan Kim, Tae Jung Kim, Chan Ho Park, Kyongmin Sarah Beck, Min Ji Son, Jeong Joo Woo, Seung Jin Yoo, Jin Young Yoo, Soon Ho Yoon, Ji Won Lee, Kyung Nyeo Jeon, Yeon Joo Jeong, Soo Youn Ham, Su Jin Hong, Wonju Hong, Jin Mo Goo
  • National Cancer Center Korea
  • Chonnam National University
  • Seoul National University
  • Jeonbuk National University
  • Wonkwang University
  • Kangwon National University
  • The Catholic University of Korea
  • Chungnam National University
  • Soonchunhyang University
  • CHA University
  • Eulji University
  • Hanyang University
  • Pusan National University
  • Gyeongsang National University
  • Kangbuk Samsung Hospital
  • Hallym University

Research output: Contribution to journal › Article › peer-review

Abstract

Objective: Generative artificial intelligence (AI) systems can be used to draft automated chest X-ray (CXR) reports. Although promising for improving efficiency and addressing workforce shortages, their accuracy, reliability, and clinical utility remain uncertain. This article presents the Korean Society of Thoracic Radiology (KSTR) position statement on AI-assisted CXR report drafting, derived from a Delphi survey of experts who used the software on a modest case set. Materials and Methods: Twenty thoracic radiologists completed a Delphi survey after reviewing CXR cases with an AI-based tool for automated report drafting (KARA-CXR, version 1.0.0.3; KakaoBrain, Seoul, Republic of Korea). In preparation for the survey, each participant individually selected 60 CXR cases from their own practice and reviewed them at their respective workplaces. The 60 cases were distributed evenly across six clinical settings (health screening, inpatient, emergency department, intensive care unit, respiratory outpatient, and non-respiratory outpatient), with 10 cases per setting. The entire selection and review process was completed within 1 month. Two Delphi rounds were then conducted, in which participants rated 12 key questions (72 items) on the clinical applicability of the AI-based tool using a 9-point Likert scale. Consensus required ≥70% agreement. Results: Consensus was reached for 41 of 72 items (56.9%). Respondents took a neutral stance on most questions concerning accuracy and clinical integration, being neither impressed nor disappointed with the tool. A favorable view emerged only for health-screening examinations, whereas stand-alone use of the AI-based tool in routine practice was opposed. Participants stressed the need for further performance optimization before deployment and advocated society-endorsed education and guidelines before adoption.
Conclusion: The KSTR supports the use of an AI-based automated CXR report-drafting tool only in health-screening settings with radiologist validation and opposes its stand-alone use in routine practice, recommending performance optimization and society-endorsed education and guidelines before adoption.

Original language: English
Pages (from-to): 1100-1108
Number of pages: 9
Journal: Korean Journal of Radiology
Volume: 26
Issue number: 10
DOIs
State: Published - Nov 2025
Externally published: Yes

Keywords

  • Artificial intelligence
  • Consensus
  • Diagnostic imaging/methods
  • Natural language processing
  • Radiography, thoracic

