Abstract
Trust is essential to individuals’ perception, behavior, and evaluation of intelligent agents. Because trust is the primary motive for people to accept new technology, it is crucial to repair trust when it is damaged. This study investigated how intelligent agents should apologize to recover trust and how the effectiveness of the apology differs when the agent is human-like versus machine-like, drawing on two seemingly competing frameworks: the Computers-Are-Social-Actors (CASA) paradigm and automation bias. A 2 (agent: human-like vs. machine-like) × 2 (apology attribution: internal vs. external) between-subjects experiment was conducted (N = 193) in the context of the stock market. Participants were presented with a scenario in which they made investment choices based on an artificial intelligence agent's advice. To trace the trajectory of initial trust-building, trust violation, and trust repair, we designed an investment game consisting of five rounds of eight investment choices (40 choices in total). The results show that trust was repaired more efficiently when a human-like agent apologized with internal rather than external attribution. The opposite pattern emerged among participants with machine-like agents: the external attribution condition showed better trust repair than the internal one. Both theoretical and practical implications are discussed.
| Original language | English |
|---|---|
| Article number | 101595 |
| Journal | Telematics and Informatics |
| Volume | 61 |
| DOIs | |
| State | Published - Aug 2021 |
Keywords
- Anthropomorphism
- Apology attribution
- Artificial intelligence
- Automation bias
- CASA paradigm
- Trust repair
Title: How should intelligent agents apologize to restore trust? Interaction effects between anthropomorphism and apology attribution on trust repair