Thursday 26 September 2019

Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions

An article by Eun Go (Western Illinois University, Macomb, USA) and S. Shyam Sundar (The Pennsylvania State University, USA), published in Computers in Human Behavior, Volume 97 (August 2019)

Highlights

  • A compensation effect of high anthropomorphic visual cues on low message interactivity.
  • Another compensation effect of high message interactivity on low anthropomorphic visual cues.
  • An expectancy violation effect when the identity cue is combined with message interactivity.
  • Revealing the machine identity of the chatbot can capitalize on lowered user expectations.

Abstract

Chatbots are replacing human agents in a number of domains, from online tutoring to customer service to even cognitive therapy. But they are often machine-like in their interactions.

What can we do to humanize chatbots?

Should they necessarily be driven by human operators to be considered human? Or will an anthropomorphic visual cue on the interface and/or a high level of contingent message exchanges provide humanness to automated chatbots?

We explored these questions with a 2 (anthropomorphic visual cues: high vs. low) × 2 (message interactivity: high vs. low) × 2 (identity cue: chatbot vs. human) between-subjects experiment (N = 141) in which participants interacted with a chat agent on an e-commerce site about choosing a digital camera to purchase.

Our findings show that a high level of message interactivity compensates for the impersonal nature of a chatbot that is low on anthropomorphic visual cues. Moreover, identifying the agent as human raises user expectations for interactivity. Theoretical as well as practical implications of these findings are discussed.
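To make the message-interactivity manipulation concrete, here is a minimal sketch, not taken from the paper and with all function names hypothetical, of how a low-interactivity reply differs from a high-interactivity one: the former is canned, while the latter is contingent on the user's earlier turns.

    # Hypothetical illustration of the message-interactivity manipulation.
    # A low-interactivity reply ignores the conversation; a high-interactivity
    # reply is contingent on (explicitly threads back to) earlier turns.

    def reply_low_interactivity(user_message: str) -> str:
        # Same canned answer no matter what the user asked or said before.
        return "We carry many digital cameras. Please browse our product page."

    def reply_high_interactivity(history: list[str], user_message: str) -> str:
        # Acknowledge the previous turn so the exchange feels contingent.
        if history:
            return (f'You said "{history[-1]}" earlier; given that, and your question '
                    f'"{user_message}", a compact camera with good low-light '
                    f'performance may suit you.')
        return f'About "{user_message}": how do you plan to use the camera?'

    history = ["I mostly shoot indoors at family events"]
    print(reply_low_interactivity("Which camera should I buy?"))
    print(reply_high_interactivity(history, "Which camera should I buy?"))

In the high-interactivity condition of the study, replies were threaded to earlier exchanges in this contingent fashion; the sketch above is only meant to show the contrast, not the authors' actual implementation.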
