Should autonomous agents apologize for their mistakes? Trust development and social norms in military and civilian human-agent teams
Autonomous agents are increasingly deployed as teammates rather than as tools. These systems can err, and errors can breach the human's trust, compromising collaboration. This forces designers of autonomous systems to consider how to handle error. We explored the influence of uncertainty communication and apology on the development of trust in a Human-Agent Team (HAT) in the face of a trust violation. Data from (1) a civilian and (2) a military group of participants were obtained through two online studies. The task resembled a military house-search mission that the participant performed together with an autonomous drone as their teammate. Our results show that civilian and military participants respond differently to a mistake by the agent and to its attempt to repair trust. The difference in findings between the participant groups emphasizes that agent behaviour should be compatible with the common practices of the target population.
TNO Identifier
956282
Publisher
TNO
Source title
Kurt Lewin Institute (KLI) conference 2021, April 20 and 21, 2021, online
Collation
1 p.
Files
To receive the publication files, please send an e-mail request to TNO Repository.