Authors :
Deepak Kumar Kejriwal; Anshul Goel; Ashwin Sharma
Volume/Issue :
Volume 10 - 2025, Issue 4 - April
Google Scholar :
https://tinyurl.com/2r48ex9v
Scribd :
https://tinyurl.com/5nusayun
DOI :
https://doi.org/10.38124/ijisrt/25apr469
Abstract :
As artificial intelligence (AI) becomes increasingly integrated into cybersecurity systems, these systems face growing adversarial threats, particularly from attacks designed to evade gradient-based defenses. Traditional gradient-based attack paradigms such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) craft adversarial samples, and defenses that obfuscate gradients or harden training, known respectively as "gradient masking" and "adversarial training," offer some measure of resistance to them. Despite confidence in these countermeasures, newer attacks exploit the decision boundaries of black-box machine learning models and bypass gradient-dependent defenses entirely. To address this evolving threat, we propose a gradient-free adversarial attack framework capable of defeating traditional AI-based security techniques. This study also proposes a quantum-inspired defense mechanism that uses noise-robust quantum kernel methods to improve model resilience against such adversarial challenges. Introducing quantum principles into cybersecurity defenses yields a hybrid classical-quantum support vector machine (QSVM) that provides adversarial robustness while preserving performance on clean data. Evaluations on widely recognized cybersecurity datasets, including malware detection and network intrusion datasets, show that gradient-free adversarial attacks achieve success rates above 85% against conventional deep learning models, far beyond the capability of traditional adversarial methods. Our quantum-inspired approach reduces adversarial susceptibility by 40 to 60%, paving the way for practical cybersecurity applications.
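
For context, the gradient-based baselines named above follow their standard textbook formulations; the notation below is a general restatement, not taken from the paper itself:

    % FGSM: a single signed-gradient step of size \epsilon
    x_{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}\left( \nabla_x L(\theta, x, y) \right)

    % PGD: iterated steps of size \alpha, projected back onto the \epsilon-ball around x
    x^{(t+1)} = \Pi_{\mathcal{B}_\epsilon(x)}\left( x^{(t)} + \alpha \cdot \mathrm{sign}\left( \nabla_x L(\theta, x^{(t)}, y) \right) \right)

Because both updates depend on the gradient of the loss with respect to the input, an attacker who is denied gradient access, or who is fed masked gradients, loses these attacks entirely; that dependence is exactly the gap that gradient-free methods exploit.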
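To make the gradient-free threat model concrete, the following is a minimal sketch of a query-only random-search attack in Python. It assumes a hypothetical predict_proba interface and is an illustrative stand-in for, not a reproduction of, the attack framework proposed in the paper:

    # Minimal sketch of a gradient-free (black-box) evasion attack via random
    # search. It queries only model outputs, never gradients, so gradient
    # masking offers no protection against it.
    import numpy as np

    def random_search_attack(predict_proba, x, true_label, eps=0.1,
                             n_queries=1000, seed=0):
        """Perturb feature vector x within an L-inf ball of radius eps using
        only prediction queries; return the lowest-confidence sample found."""
        rng = np.random.default_rng(seed)
        best = x.copy()
        best_conf = predict_proba(best[None, :])[0, true_label]
        for _ in range(n_queries):
            # Propose a random signed perturbation on the eps-ball surface.
            candidate = x + eps * rng.choice([-1.0, 1.0], size=x.shape)
            conf = predict_proba(candidate[None, :])[0, true_label]
            if conf < best_conf:
                # Keep the proposal that most lowers true-class confidence.
                best, best_conf = candidate, conf
            if best_conf < 0.5:
                break  # crude success test for a binary detector
        return best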
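Similarly, a quantum-inspired kernel defense can be sketched classically by feeding a fidelity-style kernel to a standard SVM. The angle-encoding feature map, the noise model, and all parameter values below are illustrative assumptions; the paper's noise-robust quantum kernel construction is not specified in the abstract:

    # Minimal classical sketch of a quantum-inspired kernel SVM. The feature
    # map and noise-averaging are illustrative assumptions only.
    import numpy as np
    from sklearn.svm import SVC

    def fidelity_kernel(A, B, noise=0.05, n_shots=8, seed=0):
        """Kernel entries mimic the state overlap |<phi(a)|phi(b)>|^2 of an
        angle-encoded product state, averaged over noisy encodings."""
        rng = np.random.default_rng(seed)
        K = np.zeros((len(A), len(B)))
        for _ in range(n_shots):
            An = A + noise * rng.standard_normal(A.shape)  # noisy encoding
            Bn = B + noise * rng.standard_normal(B.shape)
            for i, a in enumerate(An):
                # Product-state fidelity: prod_i cos^2((a_i - b_i) / 2)
                K[i] += np.prod(np.cos((a - Bn) / 2.0) ** 2, axis=1)
        return K / n_shots

    # Usage: train an SVM on the precomputed kernel matrix.
    X_train = np.random.rand(40, 4); y_train = np.random.randint(0, 2, 40)
    clf = SVC(kernel="precomputed").fit(fidelity_kernel(X_train, X_train), y_train)
    preds = clf.predict(fidelity_kernel(np.random.rand(5, 4), X_train))

Averaging the kernel over noisy encodings emulates the shot-to-shot noise a hardware QSVM would see, which is one plausible reading of the "noise-tolerant" property named in the keywords.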
Keywords :
Adversarial Attacks, Cybersecurity, Gradient-Free Attacks, Quantum-Inspired Defenses, Machine Learning Security, Black-Box Attacks, Noise-Tolerant Quantum Kernels.