Effects of Health-related Deepfakes on Misperceptions: Moderating Effects of Issue Relevance and Accuracy Motivation

Jiyoung Lee, Michael Hameleers

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

Among its many applications, artificial intelligence (AI) can be used to manipulate audiovisual content for malicious intent. Such manipulations, commonly known as deepfakes, present a significant obstacle to maintaining truth in sectors that serve the public interest, and proactive fact-checking is necessary to respond to the threat they pose. The current study conducted an online experiment focusing on the topic of cancer prevention. In this experiment, we further examined the influence of individual differences, such as issue relevance and motivations, in health information consumption. Compared to textual disinformation, health-related audiovisual deepfakes were found to have a significant effect on increasing misperceptions but no such effect on fact-checking intentions. We also found that exposure to such deepfakes discouraged individuals with high issue relevance from engaging in fact-checking. Deepfakes were also shown to have a particularly potent effect on increasing misperceptions among individuals with high (illusory) accuracy motivations. These findings underscore the need for increased awareness of the detrimental effects of health deepfakes in particular, as well as the urgent importance of further elucidating individual variations in order to develop more comprehensive approaches to combating deepfakes.

Original language: English
Journal: Media Psychology
State: Accepted/In press - 2024

Keywords

  • Deepfake
  • fact-checking
  • health
  • issue relevance
  • misperception
  • motivated reasoning

