Privacy-Preserving Contrastive Explanations with Local Foil Trees
Conference paper
We present the first algorithm that combines privacy-preserving technologies with state-of-the-art explainable AI to enable privacy-friendly explanations of black-box AI models. Specifically, we provide an algorithm for contrastive explanations of black-box machine learning models that securely trains and uses local foil trees. Our work shows that the quality of these explanations can be upheld whilst ensuring the privacy of both the training data and the model itself.
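For orientation, the non-private local foil tree technique that the paper builds on can be sketched as follows: sample a neighbourhood around the instance to be explained, label it with the black-box model, fit a shallow surrogate decision tree, and read off the thresholds that separate the predicted class (the fact) from the requested foil class. The sketch below assumes scikit-learn; the function name explain_contrastive, the Gaussian neighbourhood sampling, and the greedy foil-leaf descent are illustrative choices, and the secure protocol described in the publication is not reproduced here. In the paper's setting, the labelling and tree-training steps are the parts carried out with privacy-preserving technologies, so that neither the training data nor the model needs to be revealed.

```python
# Minimal, non-private sketch of a local foil tree explanation, assuming a
# scikit-learn black-box classifier. The secure protocol from the paper is
# NOT reproduced; explain_contrastive, the Gaussian sampling, and the greedy
# foil-leaf descent are illustrative choices.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier


def explain_contrastive(black_box, x, foil_class, n_samples=2000, scale=0.5, seed=0):
    """Return human-readable rules suggesting how x could move towards foil_class."""
    rng = np.random.default_rng(seed)
    # 1. Sample a local neighbourhood around the instance to be explained.
    neighbourhood = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # 2. Label the samples with the black-box model: 1 = foil class, 0 = anything else.
    labels = (black_box.predict(neighbourhood) == foil_class).astype(int)
    # 3. Fit a small, interpretable surrogate tree on the locally labelled data.
    tree = DecisionTreeClassifier(max_depth=3, random_state=seed).fit(neighbourhood, labels)
    # 4. Descend greedily towards foil-rich leaves and record every split where
    #    x takes the "wrong" branch; those thresholds form the contrastive rules.
    t, node, rules = tree.tree_, 0, []
    while t.children_left[node] != -1:  # -1 marks a leaf in scikit-learn trees
        feat, thr = t.feature[node], t.threshold[node]
        left, right = t.children_left[node], t.children_right[node]
        foil_frac_left = t.value[left][0][1] / t.value[left][0].sum()
        foil_frac_right = t.value[right][0][1] / t.value[right][0].sum()
        want_left = foil_frac_left >= foil_frac_right
        if (x[feat] <= thr) != want_left:
            op = "<=" if want_left else ">"
            rules.append(f"feature {feat} should be {op} {thr:.2f} (it is {x[feat]:.2f})")
        node = left if want_left else right
    return rules


if __name__ == "__main__":
    X, y = load_iris(return_X_y=True)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)
    x = X[60]  # a versicolor instance
    fact = black_box.predict(x.reshape(1, -1))[0]
    foil = (fact + 1) % 3  # ask why the model did not predict the next class instead
    print(f"fact class: {fact}, foil class: {foil}")
    for rule in explain_contrastive(black_box, x, foil):
        print(rule)
```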
TNO Identifier
973180
Source title
Cyber Security, Cryptology, and Machine Learning: 6th International Symposium, CSCML 2022, Be'er Sheva, Israel, June 30 – July 1, 2022
Pages
88-98
Files
To receive the publication files, please send an e-mail request to TNO Repository.