Abstract
Given that the nature of training data is the primary cause of algorithmic bias, do laypersons realize that systematic misrepresentation and under-representation of certain races in the training data can affect AI performance in a way that privileges some races over others? To answer this question, we conducted three between-subjects online experiments (N = 769 in total) with a prototype of an AI system that recognizes emotions from facial expressions. Our results show that, by and large, training data representativeness is not an effective cue for communicating algorithmic bias. Instead, users rely on the AI’s performance bias to perceive racial bias in AI algorithms. In addition, the race of the users matters: Black participants perceive the system to be more biased when all facial images used to represent unhappy emotions in the training data are those of Black individuals. This finding highlights a significant human cognitive limitation that should be accounted for when communicating algorithmic bias arising from biases in the training data.
| Original language | English |
|---|---|
| Journal | Media Psychology |
| DOIs | |
| State | Accepted/In press - 2025 |
Fingerprint
Dive into the research topics of 'Racial Bias in AI Training Data: Do Laypersons Notice?'. Together they form a unique fingerprint.