Challenges in Algorithmic Fairness when using Multi-Party Computation Models

Conference paper
While Secure Multi-Party Computation (MPC) and Algorithmic Fairness (in short, fairness) are both essential topics in Responsible AI, they are typically researched separately. However, when multiple parties train a model in a privacy-preserving manner, fairness of the model is not guaranteed in any way. In fact, the parties involved can run into several challenges when they want to measure and mitigate unfairness. We reflect on the existing technical solutions in this field and identify three practical challenges. First, when computing with multiple parties, the focus lies on the computed result of the mathematical model: the output. Fairness assessments, however, also cover the outcome of a model, i.e., what the output entails in deployment. Without proper agreements, the individual parties in an MPC setting could act differently upon the output and hold conflicting definitions of fairness. Second, in a multi-party setting the data is distributed, so a difference can arise between global fairness (evaluated across all data) and local fairness (evaluated on each party's local data). Finally, fairness is not a static measure: all sorts of feedback loops can occur, some directly affecting the model when it is retrained on new data. Multiple parties can make this even more problematic, because each party may have feedback loops in its own system that influence the overall system and the fairness experienced by others. With this position paper, we hope to pave the way for integrating fairness challenges into future MPC studies, an important new field of research.
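The gap between global and local fairness described in the abstract can be made concrete with a toy sketch (not from the paper; the party names, synthetic data, and the choice of demographic parity as the metric are all assumptions for illustration). Two parties each hold data on which the model is locally unfair, but in opposite directions, so the pooled (global) evaluation looks perfectly fair:

```python
# Illustrative sketch only: demographic parity difference computed per party
# (local fairness) vs. over the pooled data (global fairness).
# All data are synthetic; party names are hypothetical.

def demographic_parity_diff(groups, preds):
    """Absolute difference in positive-prediction rate between group 0 and group 1."""
    def rate(g):
        members = [p for gr, p in zip(groups, preds) if gr == g]
        return sum(members) / max(1, len(members))
    return abs(rate(0) - rate(1))

# Each party's local dataset: (sensitive group labels, model predictions)
party_a = ([0, 0, 1, 1], [1, 1, 0, 0])  # locally biased toward group 0
party_b = ([0, 0, 1, 1], [0, 0, 1, 1])  # locally biased toward group 1

for name, (g, p) in [("A", party_a), ("B", party_b)]:
    print(f"party {name}: local disparity = {demographic_parity_diff(g, p)}")

# Pooled (global) view: the opposite local biases cancel out
g_all = party_a[0] + party_b[0]
p_all = party_a[1] + party_b[1]
print(f"global disparity = {demographic_parity_diff(g_all, p_all)}")
```

Here each party sees a maximal local disparity of 1.0, yet the global disparity is 0.0, which shows why a single fairness evaluation over the joint (secret-shared) data can mask unfairness experienced locally by individual parties.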
TNO Identifier
1003268
Source title
Joint International Scientific Conferences on AI and Machine Learning BNAIC/BeNeLearn 2024, November 18–20, 2024, Utrecht
Pages
1-16