Trusting the X in XAI: Effects of different types of explanations by a self-driving car on trust, explanation satisfaction and mental models

conference paper
There is an increasing demand for opaque intelligent systems to explain themselves to humans, in order to increase user trust and support the formation of adequate mental models. Previous research has shown effects of different types of explanations on user preferences and performance. However, this research has not addressed the differential effects of intentional and causal explanations on both users’ trust and mental models, nor has it employed multiple trust measurement scales at multiple points in time. In the current research, the effects of three types of explanations (causal, intentional, mixed) on trust development, mental models, and user satisfaction were investigated in the context of a self-driving car. Results showed that participants were least satisfied with causal explanations, that intentional explanations were most effective in establishing high levels of trust, and that mixed explanations led to the best functional understanding of the system and resulted in the smallest changes in trust over time.
TNO Identifier
956010
Publisher
Human Factors and Ergonomics Society Inc.
Source title
Proceedings of the 2020 HFES 64th International Annual Meeting
Place of publication
Santa Monica, CA
Pages
339-343
Files
To receive the publication files, please send an e-mail request to TNO Repository.