We need better images of AI and better conversations about AI

article
In this article, we critique how the people involved in the development and application of AI systems often visualize and talk about those systems. They frequently depict such systems as shiny humanoid robots or as free-floating electronic brains. Such images convey misleading messages, suggesting that AI works independently of people and can reason in ways superior to people. Instead, we propose visualizing AI systems as parts of larger, sociotechnical systems; here, we can learn, for example, from cybernetics. Similarly, we propose that the people involved in the design and deployment of an algorithm extend their conversations beyond the four boxes of the Error Matrix, for example, to critically discuss false positives and false negatives. We present two thought experiments, each with one practical example. We propose to understand, visualize, and talk about AI systems in relation to a larger, complex reality; this is the requirement of requisite variety. We also propose to enable people from diverse disciplines to collaborate around boundary objects, for example: a drawing of an AI system in its sociotechnical context, or an ‘extended’ Error Matrix. Such interventions can promote meaningful human control, transparency, and fairness in the design and deployment of AI systems.
TNO Identifier
1002518
Source
AI and Ethics, 40, pp. 3615-3626.
Pages
3615-3626