Data-driven AI development: An integrated and iterative bias mitigation approach
van Engers, T.
This paper presents an explanatory case study exploring bias that leads to discriminatory decisions generated by artificial intelligence decision-making systems (AI-DMS). Machine learning-based AI-DMS in particular depend on data that conceal bias originating in society. This bias can be transferred to AI-DMS models and consequently lead to undesirable biased predictions. Preventing bias is an active topic in both academia and industry. The academic literature generally focuses on individual bias mitigation methods, while the integration of these methods into the development process of AI-DMS models remains underexposed. In this study, the concepts of bias identification and bias mitigation are explored to conceive an integrated approach to bias identification and mitigation in the AI-DMS model development process. Evaluating this approach in a case study showed that its application contributes to the development of fair and accurate AI-DMS models. The proposed iterative approach enables combining multiple bias mitigation methods. Additionally, its step-by-step design makes designers aware of bias pitfalls in AI, opening the door to “unbiased by design” model development. From a governance perspective, the proposed approach may serve as an instrument for the internal auditing of AI-DMS models.
Keywords: Artificial intelligence decision-making systems; Explainable artificial intelligence

To reference this document use:
CEUR Workshop Proceedings, 3rd EXplainable AI in Law Workshop, XAILA 2020, 9 December 2020