Title
Moral Decision Making in Human-Agent Teams: Human Control and the Role of Explanations
Author
van der Waa, J.
Verdult, S.
van den Bosch, K.
van Diggelen, J.
Haije, T.
van der Stigchel, B.
Cocu, I.
Publication year
2021
Abstract
With the progress of Artificial Intelligence, intelligent agents are increasingly being deployed in tasks for which ethical guidelines and moral values apply. As artificial agents have no legal standing, humans must be held accountable if actions do not comply, implying that humans need to exercise control. This is often labeled as Meaningful Human Control (MHC). In this paper, achieving MHC is addressed as a design problem that defines the collaboration between humans and agents. We propose three possible team designs (Team Design Patterns), varying in the level of autonomy on the agent's part. The team designs include explanations given by the agent to clarify its reasoning and decision-making. The designs were implemented in a simulation of a medical triage task, to be executed by a domain expert and an artificial agent. The triage task simulates decision-making under time pressure, with too few resources available to comply with all medical guidelines all the time, hence involving moral choices. Domain experts (i.e., health care professionals) participated in the present study, which had three goals: first, to assess the ecological relevance of the simulation; second, to explore the control that the human has over the agent to warrant morally compliant behavior in each proposed team design; and third, to evaluate the role of agent explanations in the human's understanding of the agent's reasoning. Results showed that the experts overall found the task a believable simulation of what might occur in reality. Domain experts experienced control over the team's moral compliance when consequences were quickly noticeable. When consequences instead emerged much later, the experts experienced less control and felt less responsible. Possibly due to the time pressure implemented in the task or to overtrust in the agent, the experts made little use of explanations during the task; when asked afterwards, however, they considered these useful. It is concluded that a team design should emphasize and support the human in developing a sense of responsibility for the agent's behavior and for the team's decisions. The design should include explanations that fit the assigned team roles as well as the human's cognitive state. © 2021 van der Waa, Verdult, van den Bosch, van Diggelen, Haije, van der Stigchel and Cocu.
Subject
Artificial intelligence
Ethical AI
Explainable AI
Human study
Human-agent teaming
Meaningful human control
Moral AI
Team design patterns
To reference this document use:
http://resolver.tudelft.nl/uuid:6725df55-aedf-4c1e-9fe9-21361345717c
DOI
https://doi.org/10.3389/frobt.2021.640647
TNO identifier
957064
ISSN
2296-9144
Source
Frontiers in Robotics and AI, 8 (8)
Document type
article