PPBRFL: Privacy-Preserving Byzantine-Robust Federated Learning

Authors

  • Qun Zhou

DOI:

https://doi.org/10.54097/6jamgy43

Keywords:

Federated Learning, Byzantine Robustness, Privacy-Preserving, Data Privacy

Abstract

Federated learning is a distributed machine learning approach that allows neural networks to be trained without exposing private user data. Despite its advantages, federated learning schemes still face two critical security challenges: user privacy disclosure and Byzantine robustness. An adversary may try to infer private data from the uploaded local gradients or to compromise the global model update. To tackle these challenges, we propose PPBRFL, a privacy-preserving Byzantine-robust federated learning scheme. To resist Byzantine attacks, we design a novel Byzantine-robust aggregation method based on cosine similarity, which safeguards the global model update and improves the model's classification accuracy. Furthermore, we introduce a reward and penalty mechanism that accounts for users' behavior to mitigate the impact of Byzantine users on the global model. To protect user privacy, we use symmetric homomorphic encryption to encrypt the users' trained local models, which incurs a low computation cost while maintaining model accuracy. We conduct an experimental assessment of the performance of PPBRFL. The results show that, compared with traditional federated learning schemes, PPBRFL maintains model classification accuracy while ensuring privacy preservation and Byzantine robustness.
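The abstract does not spell out the exact scoring and weighting rules, so the following is only a minimal NumPy sketch of the general idea: weighting client updates by their cosine similarity to a trusted reference update and adjusting a per-client reputation with a reward-and-penalty rule. The reference update, the reward/penalty constants, the threshold, and the names cosine_sim and robust_aggregate are illustrative assumptions rather than the paper's actual algorithm, and the homomorphic-encryption layer is omitted for clarity.

```python
import numpy as np

def cosine_sim(a, b, eps=1e-12):
    """Cosine similarity between two flattened update vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def robust_aggregate(updates, reference, reputation,
                     reward=0.1, penalty=0.2, threshold=0.0):
    """Weight client updates by cosine similarity to a reference update and by
    a per-client reputation score that is rewarded or penalized each round."""
    sims = np.array([cosine_sim(u, reference) for u in updates])

    # Reward and penalty mechanism: raise the reputation of clients whose
    # updates align with the reference direction, lower it otherwise.
    aligned = sims > threshold
    reputation = np.clip(reputation + np.where(aligned, reward, -penalty), 0.0, 1.0)

    # Clip negative similarities so opposing (potentially Byzantine) updates
    # contribute nothing, then weight the remainder by reputation.
    weights = np.maximum(sims, 0.0) * reputation
    if weights.sum() == 0.0:
        return reference.copy(), reputation  # fall back to the reference update
    weights = weights / weights.sum()

    aggregated = np.sum([w * u for w, u in zip(weights, updates)], axis=0)
    return aggregated, reputation

# Toy round: three roughly honest clients and one sign-flipping Byzantine client.
rng = np.random.default_rng(0)
reference = rng.normal(size=10)
updates = [reference + 0.1 * rng.normal(size=10) for _ in range(3)] + [-reference]
global_update, scores = robust_aggregate(updates, reference, reputation=np.full(4, 0.5))
```

In this sketch, a client whose update points against the reference direction receives zero aggregation weight for the round and a reduced reputation, which loosely mirrors the abstract's goal of limiting the influence of Byzantine users over successive rounds.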

Published

03-02-2024

Issue

Vol. 7 No. 1 (2024)

Section

Articles

How to Cite

Zhou, Q. (2024). PPBRFL: Privacy-Preserving Byzantine-Robust Federated Learning. Frontiers in Computing and Intelligent Systems, 7(1), 18-24. https://doi.org/10.54097/6jamgy43