Authors :
Andrews Ocran; Japhet Effah
Volume/Issue :
Volume 10 - 2025, Issue 5 - May
Google Scholar :
https://tinyurl.com/3d2d6r7p
DOI :
https://doi.org/10.38124/ijisrt/25may617
Abstract :
Federated Learning (FL) is a promising decentralised machine-learning paradigm that enables multiple devices to
collaboratively train a shared model without exposing their private data. While this approach enhances data privacy and
regulatory compliance, it remains vulnerable to a range of security threats and adversarial attacks. This research
investigates various attack vectors in FL, including poisoning attacks, Byzantine attacks, Sybil attacks, and gradient
inversion, and evaluates their impact on model performance and data confidentiality. Through a comprehensive analysis
and empirical review of the existing literature, the study explores mitigation strategies, attack models, and a threat
taxonomy for classifying adversarial behaviours. Key findings from the review suggest that while existing defence
mechanisms show promise, they often involve trade-offs among model accuracy, system scalability, and computational
overhead. The study concludes by identifying gaps in the current literature, such as the need for adaptive mitigation
strategies and more realistic threat models, and offers recommendations for future work. By addressing these challenges,
the research aims to strengthen the robustness and trustworthiness of federated learning systems in real-world applications.
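To make the aggregation step and its exposure to poisoning concrete, the following is a minimal, self-contained sketch (not the paper's implementation): each simulated client trains a small linear model locally, the server averages the client models as in Federated Averaging (FedAvg), and a single malicious client submits a boosted, sign-flipped update, a simple model-poisoning attack that visibly shifts the global model. All data, client counts, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, X, y, lr=0.1, epochs=20):
    """One client's local training: plain gradient descent on MSE loss."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(updates):
    """Server-side aggregation: unweighted mean of client models."""
    return np.mean(updates, axis=0)

# Synthetic federation: 5 clients drawing from the same linear ground truth.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
honest = [local_update(global_w, X, y) for X, y in clients]
print("honest FedAvg:", fedavg(honest))      # close to true_w = [2, -1]

# Poisoned round: the last client sends a scaled, sign-flipped update.
poisoned = honest[:-1] + [-10 * honest[-1]]
print("poisoned FedAvg:", fedavg(poisoned))  # dragged far from true_w
```

Because plain FedAvg trusts every submitted update equally, one adversarial client with a large-magnitude update dominates the mean; this is the vulnerability that robust aggregation schemes such as trust-bootstrapped or clipped aggregation attempt to close.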
Keywords :
Federated Learning (FL); Attacks; Privacy Preservation; Aggregation; Trusted Execution Environments (TEEs); Federated Averaging (FedAvg); Inference Attacks; Resilience Strategies; Machine Learning; Byzantine Attacks.
References :
- Aggarwal, M., Khullar, V., Rani, S., Thomas André Prola, Shyama Barna Bhattacharjee, Sarowar Morshed Shawon and Goyal, N. (2024). Federated Learning on Internet of Things: Extensive and Systematic Review. Computers, Materials & Continua, 0(0), pp.1–10. doi:https://doi.org/10.32604/cmc.2024.049846.
- Aljanabi, M., Omran, A.H., Mijwil, M.M., Mostafa, A., El-kenawy, E.-S.M., Yousif Mohammed, S. and Ibrahim, A. (2023). Data poisoning: issues, challenges, and needs. doi:https://doi.org/10.1049/icp.2024.0951.
- Almutairi, S. and Barnawi, A. (2023). Federated learning vulnerabilities, threats and defenses: A systematic review and future directions. Internet of Things, 24, pp.100947–100947. doi:https://doi.org/10.1016/j.iot.2023.100947.
- Araki, T., Furukawa, Y., Lindell, Y., Nof, A. and Ohara, K. (2016). High-Throughput Semi-Honest Secure Three-Party Computation with an Honest Majority. doi:https://doi.org/10.1145/2976749.2978331.
- Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D. and Shmatikov, V. (2019). How To Backdoor Federated Learning. arXiv:1807.00459 [cs]. [online] Available at: https://arxiv.org/abs/1807.00459.
- Benmalek, M., Benrekia, M.A. and Challal, Y. (2022). Security of Federated Learning: Attacks, Defensive Mechanisms, and Challenges. Revue d’Intelligence Artificielle, 36(1), pp.49–59. doi:https://doi.org/10.18280/ria.360106.
- Betul Yurdem, Murat Kuzlu, Mehmet Kemal Gullu, Ferhat Ozgur Catak and Tabassum, M. (2024). Federated Learning: Overview, Strategies, Applications, Tools and Future Directions. Heliyon, [online] 10(19), pp.e38137–e38137. doi:https://doi.org/10.1016/j.heliyon.2024.e38137.
- Bhatti, D.M.S., Ali, M., Yoon, J. and Choi, B.J. (2025). Efficient Collaborative Learning in the Industrial IoT Using Federated Learning and Adaptive Weighting Based on Shapley Values. Sensors, 25(3), p.969. doi:https://doi.org/10.3390/s25030969.
- Bonawitz, K., Eichner, H., Grieskamp, W., Huba, D., Ingerman, A., Ivanov, V., Kiddon, C., Konečný, J., Mazzocchi, S., McMahan, H.B., Van Overveldt, T., Petrou, D., Ramage, D. and Roselander, J. (2019). Towards Federated Learning at Scale: System Design. [online] arXiv.org. doi:https://doi.org/10.48550/arXiv.1902.01046.
- Brisimi, T.S., Chen, R., Mela, T., Olshevsky, A., Paschalidis, I.Ch. and Shi, W. (2018). Federated learning of predictive models from federated Electronic Health Records. International Journal of Medical Informatics, [online] 112, pp.59–67. doi:https://doi.org/10.1016/j.ijmedinf.2018.01.007.
- Cao, X., Fang, M., Liu, J. and Gong, N.Z. (2022). FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. arXiv:2012.13995 [cs]. [online] Available at: https://arxiv.org/abs/2012.13995 [Accessed 8 Aug. 2022].
- Chen, C., Liu, J., Tan, H., Li, X., Wang, K.I-Kai., Li, P., Sakurai, K. and Dou, D. (2024a). Trustworthy Federated Learning: Privacy, Security, and Beyond. arXiv (Cornell University). doi:https://doi.org/10.48550/arxiv.2411.01583.
- Chen, D., Yu, N., Zhang, Y. and Fritz, M. (2020). GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models. Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, [online] pp.343–362. doi:https://doi.org/10.1145/3372297.3417238.
- Chen, J., Li, M., Liu, T., Zheng, H., Du, H. and Cheng, Y. (2024b). Rethinking the defense against free-rider attack from the perspective of model weight evolving frequency. Information Sciences, 668, pp.120527–120527. doi:https://doi.org/10.1016/j.ins.2024.120527.
- Criado, M.F., Casado, F.E., Iglesias, R., Regueiro, C.V. and Barro, S. (2022). Non-IID data and Continual Learning processes in Federated Learning: A long road ahead. Information Fusion, 88, pp.263–280. doi:https://doi.org/10.1016/j.inffus.2022.07.024.
- Ding, X., Liu, Z., You, X., Li, X. and Vasilakos, A.V. (2024). Improved gradient leakage attack against compressed gradients in federated learning. Neurocomputing, [online] 608, p.128349. doi:https://doi.org/10.1016/j.neucom.2024.128349.
- Douceur, J.R. (2002). The Sybil Attack. Peer-to-Peer Systems, pp.251–260. doi:https://doi.org/10.1007/3-540-45748-8_24.
- Enthoven, D. and Al-Ars, Z. (2021). An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies. pp.173–196. doi:https://doi.org/10.1007/978-3-030-70604-3_8.
- Feng, Y., Guo, Y., Hou, Y., Wu, Y., Lao, M., Yu, T. and Liu, G. (2025). A survey of security threats in federated learning. Complex & Intelligent Systems, 11(2). doi:https://doi.org/10.1007/s40747-024-01664-0.
- Ge, L., Li, H., Wang, X. and Wang, Z. (2023). A review of secure federated learning: privacy leakage threats, protection technologies, challenges and future directions. Neurocomputing, [online] p.126897. doi:https://doi.org/10.1016/j.neucom.2023.126897.
- Gong, Y., Wang, S., Yu, T., Jiang, X. and Sun, F. (2024). Improving adversarial robustness using knowledge distillation guided by attention information bottleneck. Information Sciences, [online] 665, p.120401. doi:https://doi.org/10.1016/j.ins.2024.120401.
- Guo, P., Zeng, S., Chen, W., Zhang, X., Ren, W., Zhou, Y. and Qu, L. (2024). A New Federated Learning Framework Against Gradient Inversion Attacks. [online] arXiv.org. Available at: https://arxiv.org/abs/2412.07187 [Accessed 20 Apr. 2025].
- Hu, K., Gong, S., Zhang, Q., Seng, C., Xia, M. and Jiang, S. (2024). An overview of implementing security and privacy in federated learning. Artificial intelligence review, 57(8). doi:https://doi.org/10.1007/s10462-024-10846-8.
- Huang, L., Joseph, A.D., Nelson, B., Rubinstein, B.I.P. and Tygar, J.D. (2011). Adversarial machine learning. Proceedings of the 4th ACM workshop on Security and artificial intelligence - AISec ’11. doi:https://doi.org/10.1145/2046684.2046692.
- Huang, Y., Chu, L., Zhou, Z., Wang, L., Liu, J., Pei, J. and Zhang, Y. (2021). Personalized Cross-Silo Federated Learning on Non-IID Data. arXiv:2007.03797 [cs, stat]. [online] Available at: https://arxiv.org/abs/2007.03797.
- Huang, Y., Gupta, S., Song, Z., Arora, S. and Li, K. (2024). Evaluating gradient inversion attacks and defenses. Federated Learning, pp.105–122. doi:https://doi.org/10.1016/b978-0-44-319037-7.00014-4.
- Huang, Y., Huo, Z. and Fan, Y. (2024). DRA: A data reconstruction attack on vertical federated k-means clustering. Expert Systems with Applications, 250, pp.123807–123807. doi:https://doi.org/10.1016/j.eswa.2024.123807.
- Jebreel, N.M., Domingo-Ferrer, J., Sánchez, D. and Blanco-Justicia, A. (2022). Defending against the Label-flipping Attack in Federated Learning. [online] arXiv.org. Available at: https://arxiv.org/abs/2207.01982 [Accessed 7 Apr. 2025].
- Kasyap, H. and Tripathy, S. (2024). Beyond data poisoning in federated learning. Expert Systems with Applications, 235, pp.121192–121192. doi:https://doi.org/10.1016/j.eswa.2023.121192.
- Lamport, L. (1983). The Weak Byzantine Generals Problem. Journal of the ACM, [online] 30(3), pp.668–676. doi:https://doi.org/10.1145/2402.322398.
- Lenaerts-Bergmans, B. (2024). What Is Data Poisoning? | CrowdStrike. [online] Crowdstrike.com. Available at: https://www.crowdstrike.com/en-us/cybersecurity-101/cyberattacks/data-poisoning/.
- Li, Z., Huang, X., Li, Y. and Chen, G. (2023). A comparative study of adversarial training methods for neural models of source code. Future Generation Computer Systems, 142, pp.165–181. doi:https://doi.org/10.1016/j.future.2022.12.030.
- Liang, T., Glossner, J., Wang, L., Shi, S. and Zhang, X. (2021). Pruning and quantization for deep neural network acceleration: A survey. Neurocomputing, 461, pp.370–403. doi:https://doi.org/10.1016/j.neucom.2021.07.045.
- Liman, M.D., Osanga, S.I., Alu, E.S. and Zakariya, S. (2024). Regularization Effects in Deep Learning Architecture. Journal of the Nigerian Society of Physical Sciences, p.1911. doi:https://doi.org/10.46481/jnsps.2024.1911.
- Liu, P., Xu, X. and Wang, W. (2022). Threats, attacks and defenses to federated learning: issues, taxonomy and perspectives. Cybersecurity, 5(1). doi:https://doi.org/10.1186/s42400-021-00105-6.
- Lutho Ntantiso, Bagula, A.B., Ajayi, O. and Ngongo, F.K. (2023). A Review of Federated Learning: Algorithms, Frameworks & Applications. [online] ResearchGate. Available at: https://www.researchgate.net/publication/369417303_A_Review_of_Federated_Learning_Algorithms_Frameworks_Applications [Accessed 6 May 2025].
- Lyu, L., Yu, H. and Yang, Q. (2020a). Threats to Federated Learning: A Survey. arXiv:2003.02133 [cs, stat]. [online] Available at: https://arxiv.org/abs/2003.02133.
- Lyu, L., Yu, H. and Yang, Q. (2020b). Threats to Federated Learning: A Survey. [online] Available at: https://arxiv.org/pdf/2003.02133.
- McMahan, H.B., Blaise, Ramage, D., Moore, E. and Hampson, S. (2023). Communication-Efficient Learning of Deep Networks from Decentralized Data. [online] alphaXiv. Available at: https://www.alphaxiv.org/abs/1602.05629 [Accessed 30 Apr. 2025].
- Naik, D. and Naik, N. (2024). An Introduction to Federated Learning: Working, Types, Benefits and Limitations. Advances in intelligent systems and computing, pp.3–17. doi:https://doi.org/10.1007/978-3-031-47508-5_1.
- Nanayakkara, S.I., Pokhrel, S.R. and Li, G. (2024). Understanding global aggregation and optimization of federated learning. Future Generation Computer Systems, 159, pp.114–133. doi:https://doi.org/10.1016/j.future.2024.05.009.
- Qayyum, A., Janjua, M.U. and Qadir, J. (2022). Making federated learning robust to adversarial attacks by learning data and model association. Computers & Security, 121, p.102827. doi:https://doi.org/10.1016/j.cose.2022.102827.
- Qi, P., Chiaro, D., Guzzo, A., Ianni, M., Fortino, G. and Piccialli, F. (2024). Model aggregation techniques in federated learning: A comprehensive survey. Future Generation Computer Systems, 150, pp.272–293. doi:https://doi.org/10.1016/j.future.2023.09.008.
- Radford, A., Narasimhan, K., Salimans, T. and Sutskever, I. (2018). Improving Language Understanding by Generative Pre-Training. [online] Available at: https://www.mikecaptain.com/resources/pdf/GPT-1.pdf.
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D. and Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners. [online] Available at: https://storage.prod.researchhub.com/uploads/papers/2020/06/01/language-models.pdf.
- Ribero, M., Vikalo, H. and de Veciana, G. (2025). Federated Learning at Scale: Addressing Client Intermittency and Resource Constraints. IEEE Journal of Selected Topics in Signal Processing, 19(1), pp.60–73. doi:https://doi.org/10.1109/jstsp.2024.3430118.
- Sagar, S., Li, C.-S., Loke, S.W. and Choi, J. (2023). Poisoning Attacks and Defenses in Federated Learning: A Survey. [online] arXiv.org. Available at: https://arxiv.org/abs/2301.05795 [Accessed 7 Apr. 2025].
- Sheller, M.J., Edwards, B., Reina, G.A., Martin, J., Pati, S., Kotrotsou, A., Milchenko, M., Xu, W., Marcus, D., Colen, R.R. and Bakas, S. (2020). Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Scientific Reports, [online] 10(1), p.12598. doi:https://doi.org/10.1038/s41598-020-69250-1.
- Struppek, L., Hintersdorf, D., Friedrich, F., Brack, M., Schramowski, P. and Kersting, K. (2023). Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations. arXiv (Cornell University). doi:https://doi.org/10.48550/arxiv.2303.09289.
- Sun, T., Li, D. and Wang, B. (2023). Decentralized Federated Averaging. IEEE Transactions on Pattern Analysis and Machine Intelligence, [online] 45(4), pp.4289–4301. doi:https://doi.org/10.1109/TPAMI.2022.3196503.
- Truong, N., Sun, K., Wang, S., Guitton, F. and Guo, Y. (2021). Privacy preservation in federated learning: An insightful survey from the GDPR perspective. Computers & Security, [online] 110, p.102402. doi:https://doi.org/10.1016/j.cose.2021.102402.
- Verde, L., Marulli, F. and Marrone, S. (2021). Exploring the Impact of Data Poisoning Attacks on Machine Learning Model Reliability. Procedia Computer Science, [online] 192, pp.2624–2632. doi:https://doi.org/10.1016/j.procs.2021.09.032.
- Vungarala, S.K. (2023). Stochastic gradient descent vs Gradient descent — Exploring the differences. [online] Medium. Available at: https://medium.com/@seshu8hachi/stochastic-gradient-descent-vs-gradient-descent-exploring-the-differences-9c29698b3a9b.
- Wang, B., Li, H., Liu, X. and Guo, Y. (2024). FRAD: Free-Rider Attacks Detection Mechanism for Federated Learning in AIoT. IEEE Internet of Things Journal, 11(3), pp.4377–4388. doi:https://doi.org/10.1109/jiot.2023.3298606.
- Wei, Q. and Rao, G. (2024). EPFL-DAC: Enhancing Privacy in Federated Learning with Dynamic Aggregation and Clipping. Computers & Security, 143, p.103911. doi:https://doi.org/10.1016/j.cose.2024.103911.
- Wu, X., Chen, Y., Yu, H. and Yang, Z. (2024). Privacy-preserving federated learning based on noise addition. Expert Systems with Applications, [online] 267, p.126228. doi:https://doi.org/10.1016/j.eswa.2024.126228.
- Xie, X., Hu, C., Ren, H. and Deng, J. (2024). A survey on vulnerability of federated learning: A learning algorithm perspective. Neurocomputing, 573, pp.127225–127225. doi:https://doi.org/10.1016/j.neucom.2023.127225.
- Xu, Z., Zhang, Y., Andrew, G., Choquette, C., Kairouz, P., Mcmahan, B., Rosenstock, J. and Zhang, Y. (2023). Federated Learning of Gboard Language Models with Differential Privacy. arXiv (Cornell University). doi:https://doi.org/10.18653/v1/2023.acl-industry.60.
- Yang, H., Wang, Z., Chou, B., Xu, S., Wang, H., Wang, J. and Zhang, Q. (2019). An Empirical Study of the Impact of Federated Learning on Machine Learning Model Accuracy. [online] Arxiv.org. Available at: https://arxiv.org/html/2503.20768v1 [Accessed 5 May 2025].
- Yang, M., Cheng, H., Chen, F., Liu, X., Wang, M. and Li, X. (2023). Model poisoning attack in differential privacy-based federated learning. Information Sciences, [online] 630, pp.158–172. doi:https://doi.org/10.1016/j.ins.2023.02.025.
- Yin, Y., Chen, H., Gao, Y., Sun, P., Wu, L., Li, Z. and Liu, W. (2024). Feature-based Full-target Clean-label Backdoor Attacks. Applied Intelligence.
- Zeng, D., Wu, Z., Liu, S., Pan, Y., Tang, X. and Xu, Z. (2024). Understanding Generalization of Federated Learning: the Trade-off between Model Stability and Optimization. arXiv (Cornell University). doi:https://doi.org/10.48550/arxiv.2411.16303.
- Zhang, C., Yang, S., Mao, L. and Ning, H. (2024a). Anomaly detection and defense techniques in federated learning: a comprehensive review. Artificial intelligence review, 57(6). doi:https://doi.org/10.1007/s10462-024-10796-1.
- Zhang, W., Yu, C., Meng, Z., Shen, S. and Zhang, K. (2024b). Explore Patterns to Detect Sybil Attack during Federated Learning in Mobile Digital Twin Network. ICC 2022 - IEEE International Conference on Communications, pp.3969–3974. doi:https://doi.org/10.1109/icc51166.2024.10622975.
- Zhang, X., Chen, C., Xie, Y., Chen, X., Zhang, J. and Xiang, Y. (2023). A survey on privacy inference attacks and defenses in cloud-based Deep Neural Network. Computer Standards & Interfaces, 83, p.103672. doi:https://doi.org/10.1016/j.csi.2022.103672.
- Zhu, L., Liu, Z. and Han, S. (2019). Deep Leakage from Gradients. arXiv (Cornell University). doi:https://doi.org/10.48550/arxiv.1906.08935.
- Zi, B., Agrawal, A., Coburn, C., Asghar, H.J., Bhaskar, R., Kaafar, M.A., Webb, D. and Dickinson, P. (2021). On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models. arXiv (Cornell University), pp.232–251. doi:https://doi.org/10.1109/eurosp51992.2021.00025.