Racism, responsibility and autonomy in HCI: Testing perceptions of an AI agent

Abstract
This study employs an experiment to test subjects' perceptions of an artificial intelligence (AI) crime-predicting agent that produces clearly racist predictions. It used a 2 (human crime predictor/AI crime predictor) x 2 (high/low seriousness of crime) design to test the relationship between the level of autonomy and responsibility for the unjust results. The seriousness of the crime was manipulated to examine the relationship between perceived threat and trust in the authority's decisions. Participants (N = 334) responded to an online questionnaire after reading one of four scenarios based on the same story, depicting a crime predictor unjustly reporting a higher likelihood of subsequent crimes for a Black defendant than for a white defendant who committed a similar crime. The results indicate that people attribute significantly less autonomy to an AI crime predictor than to a human crime predictor. However, neither the identity of the crime predictor nor the seriousness of the crime had a significant effect on the level of responsibility assigned to the predictor. Moreover, a clear positive relationship between autonomy and responsibility was found in both the human and AI crime predictor scenarios. The implications of the findings for applications and theory are discussed.
| Original language | English |
|---|---|
| Pages (from-to) | 79-84 |
| Number of pages | 6 |
| Journal | Computers in Human Behavior |
| Volume | 100 |
| DOIs | |
| State | Published - Nov 2019 |
| Externally published | Yes |
Keywords
- Artificial intelligence
- Attribution theory
- CASA
- Human-AI Communication
- Predictive policing
- Racism