Role of emotions in responsible military AI
van Diggelen, J.
van den Bosch, K.
Following the rapid rise of military Artificial Intelligence (AI), people have warned that the withdrawal of humans from immediate war-related events will result in the “dehumanization” of war (Joerden, 2018). The premise that machines decide what is destroyed and what is spared, and when to take a human life, is considered deeply morally objectionable. After all, a machine is not a conscious being, does not have emotions like empathy, compassion, or remorse, and is not subject to military law (Sparrow, 2007). This argument has sparked the call for meaningful human control, requiring that moral decisions be made by humans, not machines (Amoroso & Tamburrini, 2020). The United States has proposed a similar principle, named “appropriate levels of human judgment”. Likewise, the NATO principles of responsible use of AI in Defence (NATO, 2021) state that “AI applications will be developed and used with appropriate levels of judgment and care”.
To reference this document, use: Ethics and Information Technology, 25 (25).