When the machine learns from users, is it helping or snooping?

Research output: Contribution to journal › Article › peer-review

15 Scopus citations

Abstract

Media systems that personalize their offerings track users’ tastes by constantly learning from their activities. Some systems use this characteristic of machine learning to encourage users with statements such as “the more you use the system, the better it can serve you in the future.” However, it is unclear whether users indeed feel encouraged and consider the system helpful and beneficial, or begin to worry about jeopardizing their privacy in the process. We conducted a between-subjects experiment (N = 269) to find out. Guided by the HAII-TIME model (Sundar, 2020), we examined the effects of both explicit and implicit interface cues that conveyed that the machine is learning. Data indicate that users consider the system a helper and tend to trust it more when it is transparent about its learning, regardless of the quality of its performance and the degree of explicitness in conveying that it learns from their activities. The study found no evidence of privacy concerns arising from the machine disclosing that it learns from its users. We discuss the theoretical and practical implications of deploying machine-learning cues to enhance the user experience of AI-embedded systems.

Original language: English
Article number: 107427
Journal: Computers in Human Behavior
Volume: 138
DOIs
State: Published - Jan 2023
Externally published: Yes

Keywords

  • Algorithm
  • HAII-TIME
  • Helper heuristic
  • Machine learning cue
  • Perceived frustration
  • Privacy concern
  • System performance
  • Trust
