Role of emotions in responsible military AI

Following the rapid rise of military Artificial Intelligence (AI), scholars have warned against humankind withdrawing from immediate war-related events, resulting in the “dehumanization” of war (Joerden, 2018). The prospect of machines deciding what is destroyed and what is spared, and when to take a human life, would be deeply morally objectionable. After all, a machine is not a conscious being, does not have emotions such as empathy, compassion, or remorse, and is not subject to military law (Sparrow, 2007). This argument has sparked the call for meaningful human control, which requires that moral decisions be made by humans, not machines (Amoroso & Tamburrini, 2020). The United States has proposed a similar principle, named “appropriate levels of human judgment”. Likewise, the NATO principles of responsible use of AI in Defence (NATO, 2021) state that “AI applications will be developed and used with appropriate levels of judgment and care”.
TNO Identifier
982704
ISSN
1388-1957
Source
Ethics and Information Technology, 25(1)
Article nr.
17