Authors :
Vishal Yadav; Shreeja Kale
Volume/Issue :
Volume 9 - 2024, Issue 8 - August
Google Scholar :
https://tinyurl.com/28kjr4ak
Scribd :
https://tinyurl.com/yvx2epyk
DOI :
https://doi.org/10.38124/ijisrt/IJISRT24AUG1319
Abstract :
Federated Learning (FL) enables collaborative
model training across decentralized devices while
preserving user data privacy. However, disparities in data
distributions among clients can lead to biased models that
perform unfairly across different demographic groups.
This paper proposes a fairness-aware Federated Learning
framework equipped with real-time bias detection and
correction mechanisms. Our approach adjusts model
updates to address biases detected at the local client
level before they are aggregated at the central server. We
demonstrate the effectiveness of our method through
empirical evaluations on multiple datasets, showcasing
significant improvements in fairness and model accuracy.
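The real-time bias detection described above could, for instance, be realized as a per-group performance check run on each client after local training. The following sketch is illustrative only: the function names and the 0.1 disparity threshold are assumptions, not the paper's actual method.

```python
from typing import Dict, List


def group_accuracies(preds: List[int], labels: List[int],
                     groups: List[str]) -> Dict[str, float]:
    """Per-group accuracy of a client's local model predictions."""
    correct: Dict[str, int] = {}
    total: Dict[str, int] = {}
    for p, y, g in zip(preds, labels, groups):
        total[g] = total.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + int(p == y)
    return {g: correct[g] / total[g] for g in total}


def bias_detected(preds: List[int], labels: List[int],
                  groups: List[str], threshold: float = 0.1) -> bool:
    """Flag a client as biased when the accuracy gap between the
    best- and worst-served groups exceeds `threshold` (illustrative)."""
    acc = group_accuracies(preds, labels, groups)
    return max(acc.values()) - min(acc.values()) > threshold
```

A client flagged this way would then apply a correction step before transmitting its update, e.g. reweighting the under-served group's samples in the next local epoch.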
Our proposed framework takes a multi-tiered
approach to ensuring fairness during model training.
First, it employs local bias detection techniques at the
client level to identify disparities in model performance
across demographic groups. Clients then apply bias
correction mechanisms to adjust their model updates,
mitigating any detected biases before the updates are sent
to the central server. The server aggregates these
bias-corrected updates, so that the global model
benefits from equitable learning while maintaining
overall performance.
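The server-side aggregation step can be pictured as a FedAvg-style weighted average over the clients' already bias-corrected updates. This is a minimal sketch under that assumption; the abstract does not specify the aggregation rule, so the sample-count weighting here is illustrative.

```python
from typing import List, Sequence


def federated_average(updates: List[Sequence[float]],
                      sample_counts: List[int]) -> List[float]:
    """FedAvg-style aggregation: weight each client's (already
    bias-corrected) update vector by its local sample count."""
    total = sum(sample_counts)
    dim = len(updates[0])
    global_update = [0.0] * dim
    for upd, n in zip(updates, sample_counts):
        w = n / total  # client's share of the total data
        for i in range(dim):
            global_update[i] += w * upd[i]
    return global_update
```

Because correction happens before transmission, the server needs no access to raw data or group labels, preserving the privacy guarantees of standard federated learning.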
Keywords :
Bias Detection, Real-Time Systems, Fairness-Aware Learning.
References :
- Bonawitz, K., Eichner, H., Grieskamp, W., et al. (2019). "Towards Federated Learning at Scale: System Design." Proceedings of the 2nd SysML Conference.
- Hard, A., Rao, K., Mathews, R., Ramaswamy, S. (2018). "Federated Learning for Mobile Keyboard Prediction." arXiv preprint arXiv:1811.03604.
- Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R. (2012). "Fairness through Awareness." Proceedings of the 3rd Innovations in Theoretical Computer Science Conference.
- Zafar, M. B., Valera, I., Rodriguez, M. G., & Gummadi, K. P. (2017). "Fairness Constraints: Mechanisms for Fair Classification." Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS).
- Agarwal, A., Dudik, M., & Wu, Z. S. (2019). "Fair Regression: Quantitative Definitions and Reduction-Based Algorithms." Proceedings of the 36th International Conference on Machine Learning (ICML).
- McMahan, H. B., Moore, E., Ramage, D., Hampson, S., & Arcas, B. A. y. (2017). "Communication-Efficient Learning of Deep Networks from Decentralized Data." Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS).
- Zemel, R., Wu, Y., Swersky, K., Pitassi, T., & Dwork, C. (2013). "Learning Fair Representations." Proceedings of the 30th International Conference on Machine Learning (ICML).