Data-driven AI development: An integrated and iterative bias mitigation approach
conference paper
This paper presents an explanatory case study exploring bias that leads to discriminatory decisions generated by artificial intelligence decision-making systems (AI-DMS). Machine learning-based AI-DMS in particular depend on data that conceals bias originating in society. This bias can be transferred to AI-DMS models and consequently lead to undesirable biased predictions. Preventing bias is an active concern in both academia and industry. The academic literature tends to focus on individual bias mitigation methods, while the integration of these methods into the development process of AI-DMS models remains underexposed. In this study, the concepts of bias identification and bias mitigation are explored in order to conceive an integrated approach to identifying and mitigating bias in the AI-DMS model development process. Reviewing this approach with a case study showed that its application contributes to the development of fair and accurate AI-DMS models. The proposed iterative approach enables the combination of multiple bias mitigation methods. Additionally, its step-by-step design makes designers aware of bias pitfalls in AI, opening the door to “unbiased by design” model development. From a governance perspective, the proposed approach might also serve as an instrument for internal auditing of AI-DMS models.
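The loop the abstract describes — train a model, measure bias, apply a mitigation method, retrain, and re-check fairness against accuracy — can be illustrated with a minimal sketch. The synthetic dataset, the demographic-parity metric, and the choice of reweighing as the mitigation method are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data (assumption): one feature that leaks a binary
# sensitive attribute, so a naive model inherits societal bias.
n = 2000
group = rng.integers(0, 2, n)                 # sensitive attribute (0/1)
x = rng.normal(group * 0.8, 1.0, n)           # feature correlated with group
y = (x + rng.normal(0.0, 1.0, n) > 0.5).astype(int)
X = x.reshape(-1, 1)

def demographic_parity_diff(y_pred, group):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)|: one common group-fairness metric."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def reweighing(group, y):
    """Pre-processing mitigation: weight each (group, label) cell by
    P(g) * P(y) / P(g, y), so group and label become independent
    in the weighted training distribution."""
    w = np.empty(len(y), dtype=float)
    for g in (0, 1):
        for label in (0, 1):
            mask = (group == g) & (y == label)
            if mask.any():
                w[mask] = (group == g).mean() * (y == label).mean() / mask.mean()
    return w

# Step 1: baseline model and bias measurement.
baseline = LogisticRegression().fit(X, y)
gap_before = demographic_parity_diff(baseline.predict(X), group)

# Step 2: apply a mitigation method and retrain.
weights = reweighing(group, y)
mitigated = LogisticRegression().fit(X, y, sample_weight=weights)
gap_after = demographic_parity_diff(mitigated.predict(X), group)

# Step 3: re-check fairness AND accuracy; in the iterative approach this
# loop would repeat, possibly combining further mitigation methods.
acc_after = (mitigated.predict(X) == y).mean()
print(f"parity gap before: {gap_before:.3f}, after: {gap_after:.3f}, "
      f"accuracy after: {acc_after:.3f}")
```

In a fuller version of the loop, steps 2 and 3 would iterate until both the fairness metric and the accuracy meet their thresholds, swapping in or stacking other mitigation methods (in-processing or post-processing) when one alone is insufficient.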
Topics
Artificial intelligence decision-making systems; Bias; Bias mitigation; Explainable artificial intelligence; IT-audit; Legal compliance; Decision making; Iterative methods; Academic literature; Development process; Integrated approach; Intelligence decision; Internal auditing; Iterative approach; Mitigation methods; Model development; Artificial intelligence
TNO Identifier
968191
ISSN
1613-0073
Publisher
CEUR-WS
Source title
CEUR Workshop Proceedings, 3rd EXplainable AI in Law Workshop, XAILA 2020, 9 December 2020
Files
To receive the publication files, please send an e-mail request to TNO Repository.