The design and validation of an intuitive confidence measure
conference paper
Explainable AI becomes increasingly important as the use of intelligent systems becomes more widespread in high-risk domains. In these domains it is important that the user knows to what degree the system's decisions can be trusted. To facilitate this, we present the Intuitive Confidence Measure (ICM): a lazy-learning meta-model that predicts how likely it is that a given decision is correct. ICM is intended to be easy to understand, which we validated in an experiment. We compared ICM with two other methods of computing confidence measures: the numerical output of the model and an actively learned meta-model. The validation was performed using a smart assistant for maritime professionals. Results show that ICM is easier to understand, but that each user is unique in their desire for explanations. This user study with domain experts shows what users need in their explanations and that personalization is crucial.
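The abstract does not spell out ICM's exact formulation, so the following is only a minimal illustrative sketch of a lazy-learning confidence meta-model in the same spirit: it stores past cases and, for a new case, reports how often the base model was correct on the most similar stored cases. The dataset, the choice of k, and all names (confidence, correct_on_train, index) are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

# Train a base decision model on an example dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Lazy-learning meta-model: remember on which stored cases the base model
# was correct, and index those cases for nearest-neighbour lookup.
# (Using a held-out set instead of the training set would reduce optimism.)
correct_on_train = (model.predict(X_train) == y_train).astype(float)
index = NearestNeighbors(n_neighbors=10).fit(X_train)

def confidence(x: np.ndarray) -> float:
    """Fraction of the k most similar stored cases the base model got right."""
    _, idx = index.kneighbors(x.reshape(1, -1))
    return correct_on_train[idx[0]].mean()

for x in X_test[:3]:
    pred = model.predict(x.reshape(1, -1))[0]
    print(f"predicted class {pred}, confidence {confidence(x):.2f}")
```

Because the meta-model is lazy, the confidence for each decision can be explained by pointing at the concrete similar cases it was derived from, which is the kind of intuitive justification the abstract aims for.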
TNO Identifier
788264
ISSN
1613-0073
Publisher
CEUR-WS
Source title
2018 Joint ACM IUI Workshops, ACMIUI-WS 2018, 11 March 2018
Editor(s)
Said, A.
Komatsu, T.
Files
To receive the publication files, please send an e-mail request to TNO Repository.