Cyber-Security and Artificial Intelligence: A Study on Adversarial Machine Learning (AML)


Author : Mahesh Nathilal Mistry

Volume/Issue : Volume 11 - 2026, Issue 2 - February


Google Scholar : https://tinyurl.com/yc37fbj6

Scribd : https://tinyurl.com/yyyepu3x

DOI : https://doi.org/10.38124/ijisrt/26feb327



Abstract : The adoption of Artificial Intelligence (AI) and Machine Learning (ML) technologies in cybersecurity has created new capabilities and new risks. While intelligent models have achieved impressive accuracy in intrusion detection, malware classification, and anomaly detection, they remain vulnerable to adversarial machine learning (AML) attacks. These attacks insert maliciously crafted, often imperceptible alterations into input data to cause models to misclassify malicious inputs as benign. Such weaknesses become a significant problem when the reliability and safety of a system are at stake. This research examines the impact of adversarial attacks on machine learning models used in cybersecurity and evaluates defensive approaches for improving robustness. Using benchmark datasets and established models, we measure the degradation in model performance caused by adversarial attack methods, including the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and DeepFool. Defences, including adversarial training and detection-based approaches, are evaluated for their effectiveness in mitigating these attacks. The results show that adversarial perturbations significantly reduce model detection accuracy.
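To make the attack mechanism concrete, the sketch below illustrates FGSM (Goodfellow et al., 2015) against a toy logistic-regression "detector". This is a minimal illustrative example, not the paper's experimental setup: the weights, bias, and input values are invented, and a real evaluation would use the benchmark models and datasets described above. FGSM perturbs each input feature by a small step ε in the direction of the sign of the loss gradient with respect to the input; for logistic regression with cross-entropy loss, that gradient is (p − y)·w.

```python
import math

# Hypothetical toy "detector": logistic regression with fixed weights,
# standing in for an ML-based intrusion-detection model. All values
# are illustrative; a white-box attacker is assumed to know W and B.
W = [2.0, -1.0, 0.5]   # model weights
B = 0.1                # bias


def predict(x):
    """Probability that input x is malicious: sigmoid(w.x + b)."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))


def fgsm(x, y, eps):
    """Fast Gradient Sign Method.

    Moves x a distance eps (per feature) in the direction that most
    increases the cross-entropy loss, pushing the model toward a
    misclassification. For logistic regression, dL/dx = (p - y) * w.
    """
    p = predict(x)
    grad = [(p - y) * w for w in W]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]


# A "malicious" sample (label y = 1) the model classifies correctly...
x = [1.0, 0.2, 0.4]
print(round(predict(x), 3))        # high probability: flagged as malicious

# ...and its adversarial counterpart, crafted to look benign.
x_adv = fgsm(x, 1, eps=0.8)
print(round(predict(x_adv), 3))    # probability drops: misclassified
```

The same loop applied iteratively with a small step and a projection back into an ε-ball around the original input yields PGD, the stronger iterative attack evaluated in this study.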

References :

  1. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., … Isard, M. (2016). TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.
  2. Goodfellow, I. J., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
  3. Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images. Technical Report, University of Toronto.
  4. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2018). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
  5. Moosavi-Dezfooli, S. M., Fawzi, A., & Frossard, P. (2016). DeepFool: A simple and accurate method to fool deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  6. Papernot, N., McDaniel, P., & Goodfellow, I. J. (2016). Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277.
  7. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., … Duchesnay, É. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825–2830.

