Aiding human reliance decision making using computational models of trust
Conference paper
This paper considers a human-agent system in which an operator performs a pattern recognition task with the support of an automated decision aid. The objective is to make this human-agent system operate as effectively as possible. Effectiveness increases when reliance on the operator and on the aid is more often appropriate. We studied whether this objective can be furthered by letting the aid, in addition to the operator, calibrate trust in order to make reliance decisions. Furthermore, the aid's calibration of trust in the reliance decision making capabilities of both the operator and itself is also expected to contribute, through reliance decision making at a metalevel, which we call metareliance decision making. We present a formalization of these two approaches: a reliance decision making model (RDMM) and a metareliance decision making model (MetaRDMM), respectively.
A combination of laboratory and simulation experiments shows significant improvements compared to reliance decision making done solely by operators.
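The paper's formalization of RDMM and MetaRDMM is not reproduced in this record. The sketch below is only a minimal illustration of the general idea, assuming trust is estimated as a beta-style running accuracy and reliance is granted to whichever party is currently trusted more; all names (TrustEstimate, rdmm_decision, meta_rdmm_decision) and the numeric values are hypothetical, not taken from the paper.

```python
# Illustrative sketch only; not the RDMM/MetaRDMM formalization from the paper.
from dataclasses import dataclass


@dataclass
class TrustEstimate:
    """Beta-style trust estimate updated from observed successes and failures."""
    successes: int = 1  # prior pseudo-counts
    failures: int = 1

    def update(self, correct: bool) -> None:
        if correct:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def value(self) -> float:
        return self.successes / (self.successes + self.failures)


def rdmm_decision(trust_in_operator: TrustEstimate,
                  trust_in_aid: TrustEstimate) -> str:
    """Reliance decision: rely on whichever party is currently trusted more."""
    return "operator" if trust_in_operator.value >= trust_in_aid.value else "aid"


def meta_rdmm_decision(trust_in_operator_rdm: TrustEstimate,
                       trust_in_aid_rdm: TrustEstimate,
                       operator_choice: str,
                       aid_choice: str) -> str:
    """Metareliance decision: follow the reliance decision of the party whose
    reliance decision making capability is trusted more."""
    if trust_in_operator_rdm.value >= trust_in_aid_rdm.value:
        return operator_choice
    return aid_choice


# Example: operator and aid each hold their own trust estimates in (operator, aid)
# task performance and so may reach different reliance decisions; the metalevel
# arbitrates between those decisions.
op_view = (TrustEstimate(7, 5), TrustEstimate(6, 6))
aid_view = (TrustEstimate(5, 7), TrustEstimate(9, 3))

operator_choice = rdmm_decision(*op_view)
aid_choice = rdmm_decision(*aid_view)

# Trust in the reliance decision making capability of the operator vs. the aid.
print(meta_rdmm_decision(TrustEstimate(4, 6), TrustEstimate(8, 2),
                         operator_choice, aid_choice))
```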
Topics
Human-computer interaction; Decision support systems; Feature extraction; Pattern recognition; Pattern recognition systems; Problem solving; Technology; Agent systems; Automated decision aid; Computational modelling; Decision-making models; Intelligent agent technology; International conferences; Simulation experiments; Web intelligence; Decision making
TNO Identifier
19127
Article nr.
4427610
Source title
Proceedings of the Workshop on Communication between Human and Artificial Agents (CHAA'07), co-located with the 2007 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT'07), IEEE Computer Society Press, Fremont, California, USA.
Pages
372-376
Files
To receive the publication files, please send an e-mail request to TNO Repository.