Investigating the Harmful Implications of Generative AI in the Military Field


Authors : Mohammed Yasser

Volume/Issue : Volume 10 - 2025, Issue 9 - September


Google Scholar : https://tinyurl.com/yjsshhdy

Scribd : https://tinyurl.com/3jjx7zbz

DOI : https://doi.org/10.38124/ijisrt/25sep1357



Abstract : Generative Artificial Intelligence (AI) has rapidly emerged as both a technological innovation and a global security concern. Its application in the military domain raises unique ethical, legal, and strategic challenges. This paper examines the harmful implications of generative AI in warfare, supported by published data from surveys and policy reports. Ipsos (2023) found that 69% of respondents globally are concerned about autonomous weapons and 73% about surveillance misuse. Similarly, a UK government survey highlighted that 45% of respondents fear job displacement, 35% worry about the loss of human creativity, and 34% fear losing control over AI. Meanwhile, Brookings (2018) found that only 30% of respondents support the use of AI in warfare, though support rises to 45% if adversaries are already using AI-based weapons. These statistics reflect widespread societal concern about the destabilizing consequences of AI militarization. This paper analyzes these concerns across five domains: misinformation, autonomous weapons, accountability, bias in decision support, and adversarial vulnerabilities. It argues that generative AI may exacerbate the risks of misinformation campaigns, unlawful targeting, biased decision-making, and loss of accountability, demanding urgent international regulation.

Keywords : Generative AI, Military Applications, Autonomous Weapons, Misinformation, Ethics, Security Risks.

References :

  1. Ipsos (2023). Halifax International Security Forum Report: Public Perceptions of AI. https://www.ipsos.com/en-th/halifax-report-2023-ai
  2. UK Government (2023). Public Attitudes to Data and AI Tracker Survey, Wave 3. https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-3
  3. Brookings Institution (2018). Brookings Survey on AI in Warfare. https://www.brookings.edu/articles/brookings-survey-finds-divided-views-on-artificial-intelligence-for-warfare
  4. MDPI (2019). Artificial Intelligence Applications in Military Systems. Electronics, 10(7), 871. https://www.mdpi.com/2079-9292/10/7/871
  5. Slattery, S. et al. (2024). The AI Risk Repository: Taxonomies of Risks. arXiv preprint arXiv:2408.12622. https://arxiv.org/abs/2408.12622
  6. RAND Corporation (2024). Generative Artificial Intelligence Threats to Information Ecosystems. https://www.rand.org/pubs/perspectives/PEA3089-1.html
  7. ICCT (2023). The Weaponisation of Deepfakes. ICCT Policy Brief. https://icct.nl/sites/default/files/2023-12/The%20Weaponisation%20of%20Deepfakes.pdf
  8. U.S. Department of Defense (2023). Contextualizing Deepfake Threats to Organizations. CSI Deepfake Threats. https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
  9. Brookings (2023). Deepfakes and International Conflict. https://www.brookings.edu/articles/deepfakes-and-international-conflict/
  10. Carnegie Endowment (2024). Understanding the Global Debate on Lethal Autonomous Weapons Systems. https://carnegieendowment.org/research/2024/08/understanding-the-global-debate-on-lethal-autonomous-weapons-systems-an-indian-perspective?lang=en

