Abstract
As with traditional machine learning, models trained with federated learning may exhibit disparate performance across demographic groups. Model holders must identify these disparities to mitigate undue harm to those groups. However, measuring a model's performance within a group requires access to information about group membership, which, for privacy reasons, often has limited availability. We propose novel locally differentially private mechanisms to measure differences in performance across groups while protecting the privacy of group membership. To analyze the effectiveness of these mechanisms, we bound their error in estimating a disparity when optimized for a given privacy budget. Our results show that the error decreases rapidly for realistic numbers of participating clients, demonstrating that, contrary to what prior work suggested, protecting privacy is not necessarily in conflict with identifying performance disparities in federated models.
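The paper's optimized mechanisms are not reproduced in this record; as a rough illustration of the general idea, the sketch below uses standard k-ary randomized response as a stand-in local-DP mechanism. Each client locally randomizes its (group membership, prediction correctness) pair before reporting, and the server debiases the aggregated reports to estimate the accuracy gap between groups. All function names, parameters, and the simulation setup are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: k-ary randomized response (k = 4) as a generic
# local-DP mechanism, NOT the paper's optimized mechanisms.
import numpy as np

def randomize(true_outcome: int, k: int, eps: float, rng) -> int:
    """Report the true outcome w.p. e^eps / (e^eps + k - 1); otherwise
    report one of the other k - 1 outcomes uniformly at random."""
    p_true = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_true:
        return true_outcome
    return rng.choice([o for o in range(k) if o != true_outcome])

def debias_counts(reports: np.ndarray, k: int, eps: float) -> np.ndarray:
    """Unbiased estimate of the true outcome counts from randomized reports."""
    n = len(reports)
    p = np.exp(eps) / (np.exp(eps) + k - 1)  # prob. of reporting the truth
    q = (1.0 - p) / (k - 1)                  # prob. of each other outcome
    observed = np.bincount(reports, minlength=k).astype(float)
    return (observed - n * q) / (p - q)

def estimate_disparity(groups, correct, eps, rng):
    """Each client encodes (group, correct) as one of four outcomes and
    randomizes it locally; the server estimates the per-group accuracy gap."""
    outcomes = 2 * np.asarray(groups) + np.asarray(correct)  # values in {0..3}
    reports = np.array([randomize(o, 4, eps, rng) for o in outcomes])
    c = debias_counts(reports, 4, eps)
    acc_g0 = c[1] / max(c[0] + c[1], 1e-9)  # estimated P(correct | group 0)
    acc_g1 = c[3] / max(c[2] + c[3], 1e-9)  # estimated P(correct | group 1)
    return acc_g1 - acc_g0

# Toy simulation: group 1 is 30% of clients and the model is 10 points worse
# on it. With 50,000 clients and eps = 1, the estimate is typically near -0.10.
rng = np.random.default_rng(0)
n = 50_000
groups = rng.random(n) < 0.3
correct = (rng.random(n) < np.where(groups, 0.75, 0.85)).astype(int)
print(estimate_disparity(groups, correct, eps=1.0, rng=rng))
```

Consistent with the abstract's claim, the estimation error of this kind of debiased local-DP estimator shrinks as the number of participating clients grows, since the randomization noise averages out across reports.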
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of Algorithmic Fairness through the Lens of Causality and Privacy: A hybrid NeurIPS 2022 Workshop |
| Number of pages | 19 |
| Publication status | Accepted/In press - 20 Oct 2022 |
| Event | Algorithmic Fairness through the Lens of Causality and Privacy: A hybrid NeurIPS 2022 Workshop, New Orleans, United States. Duration: 3 Dec 2022 → 3 Dec 2022. https://www.afciworkshop.org/afcp2022 |
Publication series
| Name | Proceedings of Machine Learning Research |
| --- | --- |
| ISSN (Print) | 2640-3498 |
Workshop
| Workshop | Algorithmic Fairness through the Lens of Causality and Privacy |
| --- | --- |
| Abbreviated title | AFCP |
| Country/Territory | United States |
| City | New Orleans |
| Period | 3/12/22 → 3/12/22 |
| Internet address | https://www.afciworkshop.org/afcp2022 |
Keywords
- differential privacy
- algorithmic fairness
- federated learning