Defining a method for ethical decision-making for automated vehicles
Conference paper
Although AI and automated driving have made significant progress over the past decade, there is currently no satisfactory ethical framework for decision-making by automated vehicles (AVs). Most approaches rely on a single theory of normative ethics, which fails to account for the complexity of human decision-making and can yield conflicting results across theories [1]. An exception is Augmented Utilitarianism, proposed by Aliman & Kester [2][3] as a more integrated, non-normative approach. The framework is unique in combining virtue ethics, deontology, and consequentialism, and it is grounded in moral psychology and neuroscience. In this paper, we build on Augmented Utilitarianism and design a method for defining a set of societally aligned attributes relevant to AV decision-making. To this end, we presented 100 participants with specific traffic scenarios and asked them to rate the relevance of an initial set of 11 attributes, including physical harm, psychological harm, and moral responsibility, and to indicate relevant attributes missing from the set. As an independent variable, we distinguished critical from non-critical traffic scenarios (with respect to the risk of harm). The experimental results motivate extending the initial set with two new attributes: environmental harm and environmental damage. In addition, four attributes scored significantly differently in critical and non-critical scenarios: physical damage, psychological damage, the legality of the AV, and self-preservation. This suggests that the weights of the attributes in the mathematical model may vary with the criticality of the situation.
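The closing idea, that attribute weights may depend on scenario criticality, can be sketched as a simple weighted-sum aggregation. This is a minimal illustrative sketch only: the attribute names, the weight values, and the weighted-sum form are all assumptions for illustration, not the model defined in the paper.

```python
# Hypothetical attribute weights; in a criticality-aware model, critical
# and non-critical scenarios would use different weight profiles.
# (All names and values are illustrative assumptions, not from the paper.)
CRITICAL_WEIGHTS = {
    "physical_harm": 0.5,
    "psychological_harm": 0.2,
    "legality": 0.1,
    "self_preservation": 0.2,
}
NON_CRITICAL_WEIGHTS = {
    "physical_harm": 0.25,
    "psychological_harm": 0.25,
    "legality": 0.3,
    "self_preservation": 0.2,
}


def desirability(attribute_scores: dict[str, float], critical: bool) -> float:
    """Aggregate per-attribute scores in [0, 1] (higher = more desirable)
    into a single value, using weights selected by scenario criticality."""
    weights = CRITICAL_WEIGHTS if critical else NON_CRITICAL_WEIGHTS
    return sum(w * attribute_scores.get(attr, 0.0) for attr, w in weights.items())


# The same candidate action can rank differently under the two profiles:
scores = {
    "physical_harm": 0.9,
    "psychological_harm": 0.6,
    "legality": 0.2,
    "self_preservation": 0.5,
}
print(desirability(scores, critical=True))   # physical harm dominates
print(desirability(scores, critical=False))  # legality weighs more
```

The point of the sketch is only that changing the weight profile with criticality changes the ranking of actions, which is what the experimental result about the four criticality-sensitive attributes suggests.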
TNO Identifier
1008858
Source
CEUR 2nd Workshop on Bias, Ethics and Fairness in Artificial Intelligence