Pre-release: Human Systems Integration for Meaningful Human Control over AI-based Systems
report
The use of artificial intelligence (AI) in the military domain promises significant advancements in warfare: ultra-fast decision making, force multiplication, and improved accuracy and coverage from rapidly analysing massive data sets. As these "smart" AI-enabled systems become increasingly autonomous, concerns about controllability are growing, and the question arises how to maintain appropriate, effective and "meaningful" human control (MHC) and oversight. The concept of MHC explicitly emphasizes the ethical and legal aspects of human control ("meaningful"), and it implicitly assigns responsibility and accountability. Although it is agreed within NATO that the future battlespace will continue to require human judgement to ensure meaningful human control, perspectives differ on what exactly it entails. Accordingly, it remains unclear what specific design guidelines should be derived from MHC. The Research Task Group (NATO-HFM-330-RTG) was commissioned to explore the meaning, characteristics and potential solutions surrounding MHC over future AI-based systems.

One important insight is that MHC should not be seen as a simple attribute, but as part of a complex socio-technical system: a continuous, iterative cycle conducted within a wider political, legal, regulatory, ethical, organizational, cultural, technological and system-of-systems context. As such, a combination of approaches applied at different phases across the whole lifecycle of a system is required to ensure that meaningful human control is realized. Safety research, which takes a similarly wide systems perspective, offers established standardized practices for design, evaluation and testing that could also prove useful here. A closer look is therefore required at the layers of control, including for example deployment, design/development and testing, but also the governance of military AI systems.

A range of potential practical solutions exists across the different layers of control within the wider socio-technical system to help assure that the proper underlying conditions for MHC exist in future NATO operational environments. Throughout this report, 17 candidate methods are identified, reflecting concepts discussed over the three-year period of the RTG. These potential solutions range from design guidelines to situational awareness metrics and organizational training. Although there is no one-size-fits-all solution, one overarching conclusion is that a properly administered human-centered design process, in which different stakeholders collaborate to identify and mitigate potential MHC issues from the earliest phases of the AI lifecycle, is essential. Not all candidate solutions have been tested in real military operations; the hope is that the methods identified in the report stimulate further research into promising and practical MHC solutions across the entire system and mission lifecycle.

As another means to understand MHC within the complex socio-technical system, we introduce the 'holistic bowtie model' of MHC. The model combines aspects at the level of individual human-machine systems, systems of systems, organizations and society, up to large-scale systems such as the earth and its surrounding space, and can help in understanding how the different methods that promote MHC relate to each other. We conclude the report with suggestions for further research, proposing a focus on MHC in the information domain (e.g. cognitive warfare) and within envisioned multi-domain operations, the role of participatory design, and how large-scale systems of systems may impact MHC.
TNO Identifier
1017837
ISBN
978-92-837-2586-2
Source title
Human Factors and Medicine Panel, March 5, 2025
Collation
116 p.