⚠ Official Notice: www.ijisrt.com is the official website of the International Journal of Innovative Science and Research Technology (IJISRT) Journal for research paper submission and publication. Please beware of fake or duplicate websites using the IJISRT name.



Differential Privacy Enhanced FL for Financial Fraud


Authors : Hemant Singh; Shree Bejon Sarkar Bappy; Dr. Mahadev

Volume/Issue : Volume 11 - 2026, Issue 4 - April


Google Scholar : https://tinyurl.com/bdz84ruv

Scribd : https://tinyurl.com/99w8petz

DOI : https://doi.org/10.38124/ijisrt/26apr1093



Abstract : Financial fraud has become increasingly prevalent with the rapid growth of digital transactions, posing serious challenges to data security, user privacy, and regulatory compliance. Traditional centralized machine learning approaches for fraud detection require the aggregation of sensitive financial data, which increases the risk of data breaches and unauthorized access. To address these limitations, Federated Learning (FL) has emerged as a decentralized paradigm that enables collaborative model training across multiple institutions without sharing raw data. However, despite its advantages, federated learning remains vulnerable to privacy leakage through model updates and gradient-based attacks, which can expose sensitive information. In this paper, we propose a Differential Privacy (DP)-enhanced Federated Learning framework for secure and efficient financial fraud detection. The proposed approach integrates privacy-preserving mechanisms such as gradient clipping and Gaussian noise addition into the federated training process to ensure strong privacy guarantees. This framework enables multiple financial institutions to collaboratively train a global model while preserving the confidentiality of local transaction data. Experimental results demonstrate that the proposed model effectively mitigates privacy risks while maintaining high predictive performance. Although a slight reduction in accuracy is observed due to noise injection, the model achieves a balanced trade-off between privacy preservation and detection performance. The proposed system provides a scalable, secure, and privacy-preserving solution suitable for real-world financial applications.

Keywords : Federated Learning, Differential Privacy, Financial Fraud Detection, Privacy-Preserving Machine Learning, Distributed Learning, Data Security.
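The privacy mechanism the abstract describes — per-client gradient clipping followed by Gaussian noise addition, with the server aggregating only the perturbed updates — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the `clip_norm` / `noise_mult` parameters are assumptions chosen for the example.

```python
import numpy as np

def dp_clip_and_noise(grad, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Gaussian mechanism: clip the update to L2 norm <= clip_norm,
    then add Gaussian noise with standard deviation noise_mult * clip_norm."""
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)

def fedavg(updates, weights=None):
    """FedAvg-style aggregation: weighted average of client updates."""
    if weights is None:
        weights = [1.0 / len(updates)] * len(updates)
    return sum(w * u for w, u in zip(weights, updates))

# Each client privatizes its local update before sharing; the server
# aggregates only the noised updates and never sees raw gradients.
rng = np.random.default_rng(42)
client_updates = [rng.normal(size=8) for _ in range(3)]
noised = [dp_clip_and_noise(u, rng=rng) for u in client_updates]
global_update = fedavg(noised)
```

In a full system one would additionally weight clients by their local data size, as in FedAvg, and track the cumulative privacy loss across rounds (e.g. with a moments accountant, as in Abadi et al. [4]).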

References :

  1. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 2017.
  2. P. Kairouz et al., “Advances and open problems in federated learning,” Foundations and Trends in Machine Learning, vol. 14, no. 1–2, pp. 1–210, 2021.
  3. C. Dwork, “Differential privacy,” Automata, Languages and Programming, Springer, 2006.
  4. M. Abadi et al., “Deep learning with differential privacy,” Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS), 2016.
  5. N. H. Tran et al., “Federated learning over wireless networks: Optimization model design and analysis,” IEEE INFOCOM, 2019.
  6. K. Wei et al., “Federated learning with differential privacy: Algorithms and performance analysis,” IEEE Transactions on Information Forensics and Security, 2020.
  7. Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine learning: Concept and applications,” ACM Transactions on Intelligent Systems and Technology, 2019.
  8. L. Zhu, Z. Liu, and S. Han, “Deep leakage from gradients,” Advances in Neural Information Processing Systems (NeurIPS), 2019.
  9. J. Geiping et al., “Inverting gradients—How easy is it to break privacy in federated learning?” Advances in Neural Information Processing Systems, 2020.
  10. B. Hitaj, G. Ateniese, and F. Perez-Cruz, “Deep models under the GAN: Information leakage from collaborative deep learning,” ACM CCS, 2017.
  11. M. Fredrikson, S. Jha, and T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” ACM CCS, 2015.
  12. M. Nasr, R. Shokri, and A. Houmansadr, “Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks,” IEEE Symposium on Security and Privacy, 2019.
  13. F. Pedregosa et al., “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, 2011.
  14. Kaggle, “Credit Card Fraud Detection Dataset,” Available: https://www.kaggle.com/mlg-ulb/creditcardfraud
  15. R. Shokri et al., “Membership inference attacks against machine learning models,” IEEE Symposium on Security and Privacy, 2017.
  16. L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
  17. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, 1995.
  18. Y. Yang, Q. Liu, and T. Chen, “Federated learning for privacy-preserving AI,” IEEE Internet of Things Journal, 2019.
  19. S. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” Advances in Neural Information Processing Systems, 2017.
  20. T. Fawcett, “An introduction to ROC analysis,” Pattern Recognition Letters, 2006.
  21. D. West and S. Bhattacharya, “Intelligent financial fraud detection: A comprehensive review,” Computers & Security, 2016.
  22. J. Gama et al., “A survey on concept drift adaptation,” ACM Computing Surveys, 2014.

