Lightweight Client Weighting for Robust Federated Learning Under Label-Flipping Attacks
DOI: https://doi.org/10.54097/vb073p29

Keywords: Machine Learning, Label-Flipping Attack, Federated Learning, Client Weighting

Abstract
Federated learning (FL) enables privacy-preserving machine learning across decentralized clients without sharing raw data. However, it faces significant challenges, particularly from label-flipping (LF) attacks, where malicious clients mislabel data, compromising model performance. This paper proposes a robust aggregation method that mitigates the impact of LF attacks by combining validation-based weighting with update consistency. This method adaptively adjusts client weights based on their validation accuracy and the consistency of their model updates across rounds. Unlike traditional approaches, which rely on costly computation or access to trusted validation sets, this lightweight method requires no access to local data or extensive computation, making it suitable for real-world applications. Experimental results using the Fashion-MNIST dataset show that the proposed method effectively maintains competitive accuracy under LF attacks, outperforming standard aggregation methods such as FedAvg and Trimmed Mean. This strategy offers a practical, scalable solution for FL in adversarial settings, providing robust defense against label-flipping attacks without compromising the system's efficiency.
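The aggregation idea described above — scaling each client's contribution by its validation accuracy and by how consistent its updates are across rounds — can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual implementation: the product-of-scores weighting, the cosine-similarity consistency measure, and the function names (`aggregate_updates`, `cosine`) are all assumptions made for the sketch.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened update vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def aggregate_updates(updates, val_acc, prev_updates=None):
    """Server-side robust aggregation (illustrative sketch).

    updates:      list of flattened client update vectors for this round
    val_acc:      per-client accuracy on the server's validation check, in [0, 1]
    prev_updates: the same clients' updates from the previous round, or None

    Each client's weight is assumed here to be the product of its validation
    accuracy and its round-to-round update consistency (cosine similarity,
    clipped at zero so direction reversals get no weight).
    """
    scores = []
    for i, u in enumerate(updates):
        if prev_updates is None:
            consistency = 1.0  # first round: no history to compare against
        else:
            consistency = max(cosine(u, prev_updates[i]), 0.0)
        scores.append(val_acc[i] * consistency)
    scores = np.asarray(scores)
    if scores.sum() == 0.0:
        weights = np.full(len(updates), 1.0 / len(updates))  # fall back to FedAvg
    else:
        weights = scores / scores.sum()
    aggregated = np.average(np.stack(updates), axis=0, weights=weights)
    return aggregated, weights
```

Under this weighting, a label-flipping client whose update points away from the honest direction and whose validation accuracy is poor receives a small (or zero) weight, so the aggregate stays close to the honest clients' consensus — while the server never touches any client's raw data.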
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.