Maintaining human-AI trust: Understanding breakdowns and repair
Doctoral thesis
People are increasingly working with Artificial Intelligence (AI) agents, whether as software-based systems such as AI chatbots and voice assistants, or embedded in hardware devices such as autonomous vehicles, advanced robots, and drones. The idea of Human-AI (H-AI) collaboration is promising, since humans and AI possess complementary skills that, when combined, can enhance team performance beyond the capabilities of its individual members. Here, the real challenge lies not just in determining which tasks are better suited to humans or machines working independently, but in finding ways to enhance their respective strengths through effective interaction. Working together towards a common goal requires good cooperation, coordination, and communication, and it is within these areas that the true challenges lie. A key component in these activities is trust, as it allows individuals to depend on each other’s contributions to complete tasks and achieve shared goals. More specifically, maintaining balanced trust (i.e., neither too much nor too little) is crucial for safe and effective H-AI collaboration. Finding this balance, a process known as trust calibration, should enable people to determine when to rely on AI agents and when to override them. To facilitate this, we need to understand how H-AI trust is built, breaks down, and recovers (i.e., the ‘trust lifecycle’). This dissertation focuses on how to maintain H-AI trust by examining how trust breaks down (i.e., trust violations) and the mechanisms through which trust can be repaired.
© 2025 Esther Kox
TNO Identifier
1014273
ISBN
978-90-365-6551-6
978-90-365-6552-3
Publisher
University of Twente
Collation
184 p.
Place of publication
Enschede