Allocation of moral decision-making in human-agent teams: A pattern approach
van der Waa, J.
van Diggelen, J.
Cavalcante Siebert, L.
Harris, D. (editor)
Li, W.C. (editor)
As the field of AI progresses, artificially intelligent agents will increasingly deal with morally sensitive situations. Research efforts are being made to regulate, design, and build Artificial Moral Agents (AMAs) capable of making moral decisions. This research is highly multidisciplinary, with each discipline contributing its own jargon and vision, and so far it is unclear whether a fully autonomous AMA can be achieved. To specify currently available solutions and structure an accessible discussion around them, we propose to apply Team Design Patterns (TDPs). The TDP language describes (visually, textually, and formally) a dynamic allocation of tasks for moral decision-making in a human-agent team context. A task decomposition of moral decision-making and AMA capabilities is proposed to help define such TDPs. Four TDPs are given as examples to illustrate the versatility of the approach, and two problem scenarios (surgical robots and drone surveillance) are used to illustrate these patterns. Finally, we discuss in detail the advantages and disadvantages of a TDP approach to moral decision-making. © Springer Nature Switzerland AG 2020.
Keywords: Dynamic task allocation; Meaningful human control; Team Design Patterns
17th International Conference on Engineering Psychology and Cognitive Ergonomics, EPCE 2020, held as part of the 22nd International Conference on Human-Computer Interaction, HCII 2020; Copenhagen, Denmark; 19–24 July 2020; vol. 12187, pp. 203–220
Lecture Notes in Computer Science