“I Believe AI Can Learn from the Error. Or Can It Not?”: The Effects of Implicit Theories on Trust Repair of the Intelligent Agent

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

After an intelligent agent makes an error, regaining lost trust is an important issue. During trust repair, individuals’ underlying beliefs about an intelligent agent’s potential for improvement, as well as the way the agent apologizes, may influence the repair process. In this study, we investigated the influence of implicit theories of artificial intelligence and the agent’s apology style on trust repair after a trust violation. A 2 (implicit theory: incremental vs. entity) × 2 (apology attribution: internal vs. external) between-subjects experiment was conducted (N = 150). Participants made 40 decisions in a stock market investment game created for this study; each time they made an investment decision, an intelligent agent gave them a recommendation. The results show that, after the trust violation, trust was damaged less severely in the incremental condition than in the entity condition, and in the external-attribution apology condition than in the internal-attribution condition. Trust was restored most strongly in the entity–external condition. We discuss both theoretical and practical implications.

Original language: English
Pages (from-to): 115-128
Number of pages: 14
Journal: International Journal of Social Robotics
Volume: 15
Issue number: 1
DOIs
State: Published - Jan 2023

Keywords

  • Anthropomorphism
  • Apology attribution
  • Artificial intelligence
  • Implicit theories
  • Trust repair

