Trust Violations Due To Error or Choice: The Differential Effects on Trust Repair in Human-Human and Human-Robot Interaction
article
Many decisions in life involve tradeoffs: to gain something, one often has to lose something in return. As robots become more autonomous, their decisions will extend beyond mere assessments (e.g., detecting a threat) to making choices (e.g., taking the faster or the safer route). The aim of the current research was to study perceived trustworthiness in scenarios where adverse consequences resulted from either (1) an assessment error or (2) a deliberate choice. Perceived trustworthiness (ability, benevolence, integrity) was measured repeatedly during a computer task simulating a military mission. Participants teamed with either a virtual human or a robotic partner who led the way and warned of potential danger. After encountering a hazard, the partner explained that it had either (1) failed to detect the threat (error) or (2) prioritized the mission and chosen the fastest route despite the risk (choice). Results showed that (a) the error explanation repaired all trustworthiness dimensions, (b) the choice explanation repaired only perceptions of ability, not benevolence or integrity, and (c) no differences were found between the human and robotic partners. Our findings suggest that trust violations resulting from deliberate choices are harder to repair than those resulting from errors. Implications and future research directions are discussed.
TNO Identifier
1015408
Source
ACM Transactions on Human-Robot Interaction, 14(4)
Article nr.
75