Performance or Explainability? A Law of Armed Conflict Perspective
Type
Book chapter
Machine learning techniques lie at the centre of many recent advances in artificial intelligence (AI), including in weapon systems. While powerful, these techniques rely on opaque models whose internal workings are difficult to explain, which has necessitated the development of explainable AI (XAI). In the military domain, both performance and explainability are important, and both are legally required by international humanitarian law (IHL). In practice, however, these two desiderata conflict: improving explainability may come at an opportunity cost in performance, and vice versa. It is unclear how IHL requires States to address this dilemma. In this article, we attempt to operationalise normative IHL requirements in terms of P (performance) and X (explainability) to derive qualitative guidelines for decision-makers. We first explain the explainability-performance trade-off, its causes, and its consequences. We then explore relevant IHL principles that impose requirements on P and X, and develop four tenets derived from these principles. We show that IHL prescribes minimum values for both P and X, but that once these minima are met, P should be prioritised over X. We conclude by formulating a general guideline and providing an example of how it would affect model choice. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.
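As a rough, non-authoritative sketch (not drawn from the chapter itself), the guideline summarised in the abstract can be read as a lexicographic selection rule: require minimum values for both P and X, then prioritise P among the admissible models. All model names, scores, and thresholds in the Python below are hypothetical.

```python
# Hypothetical illustration of the abstract's qualitative guideline:
# enforce minimum performance (P) and explainability (X), then, among
# models that clear both minima, prefer the one with the higher P.
# Every name, score, and threshold here is an invented example.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    p: float  # performance score in [0, 1]
    x: float  # explainability score in [0, 1]

def select_model(candidates, p_min, x_min):
    """Lexicographic rule: filter on both minima, then maximise P."""
    admissible = [m for m in candidates if m.p >= p_min and m.x >= x_min]
    if not admissible:
        return None  # no compliant option: deployment would not be permissible
    return max(admissible, key=lambda m: m.p)

candidates = [
    Model("deep_net", p=0.95, x=0.30),    # strongest performer, but opaque
    Model("rule_list", p=0.80, x=0.90),   # transparent, weaker
    Model("hybrid_xai", p=0.88, x=0.65),  # middle ground
]

# With hypothetical minima p_min=0.75 and x_min=0.60, the opaque deep_net
# is excluded despite its top P; hybrid_xai wins on P among the remainder.
print(select_model(candidates, p_min=0.75, x_min=0.60))
```

Note the design choice this sketch encodes: X acts only as a gating constraint, never as a tie-breaker, which mirrors the abstract's claim that once the minima are achieved, P takes priority over X.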
TNO Identifier
991939
ISSN
2352-1902
Publisher
Springer Nature
Source title
Law, Governance and Technology Series
Pages
255-279