Authors :
Deepika Rajwade; Vishal Bhardwaj; Ridhima Vishwakarma; Ashish Kumar Pandey; Dr. Sayed Athar Ali Hashmi; Dr. Nusrat Ali Hashmi
Volume/Issue :
Volume 10 - 2025, Issue 10 - October
Google Scholar :
https://tinyurl.com/yzew397e
Scribd :
https://tinyurl.com/2jkzd4y2
DOI :
https://doi.org/10.38124/ijisrt/25oct892
Abstract :
The rapid proliferation of machine learning models trained on sensitive data has intensified global privacy
concerns and regulatory demands for the “right to be forgotten.” Machine unlearning has emerged as a promising
paradigm to remove specific data influences from trained models without complete retraining. This paper presents a
conceptual hybrid framework that integrates influence estimation, selective parameter adjustment, and verification
mechanisms to achieve efficient and verifiable unlearning in deep learning systems. Rather than providing empirical
benchmarks, this work synthesizes theoretical foundations and algorithmic design strategies to establish a unified basis for
balancing computational efficiency, model utility, and regulatory compliance. The proposed approach also highlights the
ethical and accountability dimensions of unlearning, emphasizing its role in trustworthy and privacy-preserving AI. This
framework offers a structured pathway for future experimental validation and real-world deployment of scalable
unlearning solutions.
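The three-stage pipeline the abstract describes (estimate a sample's influence, adjust parameters selectively, then verify the removal) can be illustrated on a convex toy model. The sketch below is not the paper's framework; it is a minimal first-order unlearning example on L2-regularized logistic regression with synthetic data, where the "selective adjustment" is a few gradient steps on the retained data starting from the converged full model, and "verification" compares the result against the gold standard of retraining from scratch. All names and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Synthetic binary-classification data (illustrative only).
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y, lam=0.1):
    # Gradient of the mean logistic loss plus L2 regularization.
    return X.T @ (sigmoid(X @ w) - y) / len(y) + lam * w

def train(X, y, steps=500, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * grad(w, X, y)
    return w

w_full = train(X, y)  # model trained on all data

# "Forget" sample 0: a first-order unlearning update, i.e. a few gradient
# steps on the retained data only, starting from the full model.
Xr, yr = np.delete(X, 0, axis=0), np.delete(y, 0)
w_unlearn = w_full.copy()
for _ in range(10):
    w_unlearn -= 0.5 * grad(w_unlearn, Xr, yr)

# Verification: the unlearned model should land closer to the
# retrained-from-scratch model than the original model was.
w_retrain = train(Xr, yr)
d_full = np.linalg.norm(w_full - w_retrain)
d_unlearn = np.linalg.norm(w_unlearn - w_retrain)
```

Because the regularized loss is strongly convex, each retained-data gradient step provably contracts the distance to the retrained optimum, which is what makes this parameter-space verification meaningful; for deep networks, where no such guarantee exists, the paper's verification mechanisms would have to rely on behavioral tests instead.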
Keywords :
Machine Unlearning, Data Privacy, Right to be Forgotten, Deep Learning, GDPR Compliance, Ethical AI.
References :
- L. Bourtoule, V. Chandrasekaran, C. A. Choquette-Choo, H. Jia, A. Travers, B. Zhang, D. Lie, and N. Papernot, "Machine Unlearning," in *2021 IEEE Symposium on Security and Privacy (SP)*, San Francisco, CA, USA, 2021, pp. 141-159. doi: 10.1109/SP40001.2021.00019.
- Y. Cao and J. Yang, "Towards Making Systems Forget with Machine Unlearning," in *2015 IEEE Symposium on Security and Privacy*, San Jose, CA, USA, 2015, pp. 463-480. doi: 10.1109/SP.2015.35.
- Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), *Official Journal of the European Union*, L 119/1, 4.5.2016.
- J. Brophy and D. Lowd, "Machine Unlearning for Random Forests," in *International Conference on Machine Learning (ICML)*, 2021, pp. 1092-1104.
- A. Thudi, H. Jia, I. Shumailov, and N. Papernot, "On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning," in *31st USENIX Security Symposium (USENIX Security 22)*, Boston, MA, 2022, pp. 4007–4022.
- C. Guo, T. Goldstein, A. Hannun, and L. van der Maaten, "Certified Data Removal from Machine Learning Models," in *Proceedings of the 37th International Conference on Machine Learning (ICML)*, vol. 119, 2020, pp. 3832–3842.
- A. Sekhari, J. Acharya, G. Kamath, and A. T. Suresh, "Remember What You Want to Forget: Algorithms for Machine Unlearning," in *Advances in Neural Information Processing Systems (NeurIPS)*, vol. 34, 2021, pp. 18075–18086.
- A. Golatkar, A. Achille, and S. Soatto, "Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks," in *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 2020, pp. 9304-9312.
- T. T. Nguyen, T. T. Huynh, P. L. Nguyen, A. W.-C. Liew, H. Yin, and Q. V. H. Nguyen, "A Survey of Machine Unlearning," *arXiv preprint arXiv:2209.02299*, 2022.
- R. Shokri, M. Stronati, C. Song, and V. Shmatikov, "Membership Inference Attacks Against Machine Learning Models," in *2017 IEEE Symposium on Security and Privacy (SP)*, San Jose, CA, USA, 2017, pp. 3-18. doi: 10.1109/SP.2017.41.
- V. Suriyakumar and N. Papernot, "How to Forget and Unlearn: A Survey of Machine Unlearning," *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2024. doi: 10.1109/TPAMI.2024.3356188.
- C. Dwork and A. Roth, "The Algorithmic Foundations of Differential Privacy," *Foundations and Trends in Theoretical Computer Science*, vol. 9, no. 3–4, pp. 211–407, 2014. doi: 10.1561/0400000042.
- M. Chen, Z. Zhang, T. Wang, M. Backes, M. Humbert, and Y. Zhang, "When Machine Unlearning Jeopardizes Privacy," in *Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security*, 2021, pp. 896-911.
- A. Warnecke, L. Pirch, C. Wressnegger, and K. Rieck, "Machine Unlearning of Features and Labels," in *Network and Distributed System Security Symposium (NDSS)*, 2023.
- K. Liu, B. Li, J. Gao, and Q. Xu, "Federated Unlearning: A Survey on Methods, Design Guidelines, and Evaluation Metrics," *ACM Computing Surveys*, vol. 56, no. 5, pp. 1-36, 2024.
- S. A. A. Hashmi, "Cybersecurity Challenges in Live Streaming: Protecting Digital Anchors from Deepfake and Identity Theft," 2024. doi: 10.5281/zenodo.17085678.
- S. A. A. Hashmi, "Impact of Digital Governance on Economic Policy Implementation," 2024. doi: 10.5281/zenodo.17085702.
- A. K. Pandey, V. Bhardwaj, and S. A. A. Hashmi, "Geospatial AI for Climate Change Mitigation and Urban Resilience," *International Journal of Environmental Science and Technology*, 2025. doi: 10.5281/zenodo.17288310.
- B. Shameem, N. Sahu, A. K. Tiwari, S. Tewalkar, and T. Kashyap, "AI-Powered Cyber Threat Intelligence: An Integrated Data-Driven Model," *International Research Journal of Engineering and Technology (IRJET)*, vol. 12, no. 10, pp. 278-286, Oct. 2025.