Do you get it? User-evaluated explainable BDI agents
Conference paper
Abstract
In this paper we focus on explaining the behavior of autonomous agents to humans, i.e., explainable agents. Explainable agents are useful for many purposes, including scenario-based training (e.g., disaster training), tutoring and pedagogical systems, agent development and debugging, gaming, and interactive storytelling. As the aim is to generate plausible and insightful explanations for humans, user evaluation of different explanations is essential. In this paper we test the hypothesis that different explanation types are needed to explain different types of actions. We present three generically applicable algorithms that automatically generate different types of explanations for the actions of BDI-based agents. Quantitative analysis of a user experiment (n=30), in which users rated the usefulness and naturalness of each explanation type for different agent actions, supports our hypothesis. In addition, we present feedback from the users about how they would explain the actions themselves. Finally, we hypothesize guidelines relevant to the development of explainable BDI agents. © 2010 Springer-Verlag Berlin Heidelberg.
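The abstract does not spell out the three explanation algorithms. As a purely illustrative sketch (an assumption, not the paper's confirmed method), one common approach for explainable BDI agents is to derive explanations from a goal hierarchy annotated with beliefs, citing a different part of the hierarchy per explanation type. In the Python sketch below, the Node class, the explain_by_* functions, and the toy disaster-training scenario are all hypothetical names invented for illustration.

# Illustrative sketch only; none of the names below come from the paper.
# A BDI agent's behavior is modeled as a goal hierarchy whose leaves are
# actions; each explanation type cites a different part of that hierarchy.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A goal (inner node) or action (leaf) in the agent's goal hierarchy."""
    name: str
    enabling_beliefs: List[str] = field(default_factory=list)
    children: List["Node"] = field(default_factory=list)
    parent: Optional["Node"] = None

    def add(self, child: "Node") -> "Node":
        child.parent = self
        self.children.append(child)
        return child

def explain_by_goal(action: Node) -> str:
    """Goal-based explanation: cite the goal the action contributes to."""
    return f"I did '{action.name}' because I wanted to {action.parent.name}."

def explain_by_belief(action: Node) -> str:
    """Belief-based explanation: cite the beliefs that enabled the action."""
    beliefs = " and ".join(action.enabling_beliefs) or "it was possible"
    return f"I did '{action.name}' because I believed {beliefs}."

def explain_by_next_step(action: Node) -> str:
    """Procedural explanation: cite the step the action enables next."""
    siblings = action.parent.children
    i = siblings.index(action)
    if i + 1 < len(siblings):
        return f"I did '{action.name}' so that I could then {siblings[i + 1].name}."
    return explain_by_goal(action)  # last step: fall back to the goal

# Hypothetical scenario loosely inspired by the disaster-training use case.
root = Node("rescue the victim")
search = root.add(Node("search the building"))
open_door = search.add(Node("open the door", enabling_beliefs=["the door was closed"]))
search.add(Node("enter the room"))

print(explain_by_goal(open_door))       # ... because I wanted to search the building.
print(explain_by_belief(open_door))     # ... because I believed the door was closed.
print(explain_by_next_step(open_door))  # ... so that I could then enter the room.

Which of these explanation styles users actually find useful and natural for a given action type is exactly the empirical question the paper's user study (n=30) addresses.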
TNO Identifier
425136
ISSN
0302-9743
ISBN
3-642-16177-4; 978-3-642-16177-3
Source title
8th German Conference on Multiagent System Technologies (MATES 2010), 27-29 September 2010, Leipzig, Germany. Conference code: 82112
Pages
28-39
Files
To receive the publication files, please send an e-mail request to TNO Repository.