Authors :
Md. Farhad Rahman; Shamsad Binte Ehsan; Tawhidur Rahman; Monira Mostafa
Volume/Issue :
Volume 11 - 2026, Issue 2 - February
Google Scholar :
https://tinyurl.com/hf4r8sdd
Scribd :
https://tinyurl.com/52j9ch34
DOI :
https://doi.org/10.38124/ijisrt/26feb600
Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Abstract :
Artificial Intelligence (AI) systems deployed on cloud infrastructure are becoming critical to U.S. government operations, national defense, and regulated industries. Although FedRAMP-authorized government cloud platforms provide baseline safeguards for infrastructure and data, they are not designed to address AI-specific risks such as model poisoning, adversarial manipulation, training data leakage, and AI supply chain compromise. This gap raises national security concerns as AI systems are integrated into mission-critical cloud environments. This paper develops a national-security-oriented framework for securing AI systems used in U.S. government cloud infrastructure. The framework addresses the disconnect between current federal cloud security mandates and the risk profile of AI systems across the entire AI lifecycle. The main deliverable of this work is a standards-aligned security framework that integrates AI-specific threat modeling with established federal guidance, including the NIST AI Risk Management Framework, NIST SP 800-53, Zero Trust Architecture principles, and FedRAMP security controls. The framework maps AI-induced risks to the technical, operational, and governance controls applicable to multi-tenant government cloud environments. The results indicate that existing FedRAMP-based cloud deployments largely lack governance and controls for AI model integrity, data provenance, and shared responsibility for AI security. The discussion shows that current cloud security models require clearly defined AI extensions to manage the national security risks associated with cloud-hosted AI solutions.
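The abstract describes a crosswalk that maps AI-induced risks to technical, operational, and governance controls. A minimal sketch of such a crosswalk is shown below; the threat names, SP 800-53 control-family pairings, and AI RMF function assignments are illustrative assumptions for this example, not the paper's actual mapping.

```python
# Illustrative crosswalk: AI-specific threats mapped to example NIST SP 800-53
# Rev. 5 control families and NIST AI RMF 1.0 functions. The pairings are
# hypothetical examples of the kind of mapping the framework proposes.
AI_THREAT_CROSSWALK = {
    "model_poisoning": {
        "sp_800_53_families": ["SI (System and Information Integrity)",
                               "CM (Configuration Management)"],
        "ai_rmf_functions": ["MEASURE", "MANAGE"],
    },
    "adversarial_manipulation": {
        "sp_800_53_families": ["SI (System and Information Integrity)",
                               "RA (Risk Assessment)"],
        "ai_rmf_functions": ["MAP", "MEASURE"],
    },
    "training_data_leakage": {
        "sp_800_53_families": ["AC (Access Control)",
                               "MP (Media Protection)"],
        "ai_rmf_functions": ["GOVERN", "MANAGE"],
    },
    "ai_supply_chain_compromise": {
        "sp_800_53_families": ["SR (Supply Chain Risk Management)",
                               "SA (System and Services Acquisition)"],
        "ai_rmf_functions": ["GOVERN", "MAP"],
    },
}


def controls_for(threat: str) -> list[str]:
    """Return the example SP 800-53 families mapped to a given AI threat."""
    entry = AI_THREAT_CROSSWALK.get(threat)
    return entry["sp_800_53_families"] if entry else []
```

In practice, each threat entry would also carry the specific control enhancements and the responsible party under the cloud shared-responsibility model.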
Keywords :
Artificial Intelligence Security (AIS); Government Cloud Security; FedRAMP; NIST AI Risk Management Framework; GovCloud; NIST 800-53.
References :
- N. Papernot et al., “Security and Privacy Issues in Deep Learning,” arXiv preprint arXiv:1807.11655, 2018. [Online]. Available: https://arxiv.org/abs/1807.11655
- “Securing Machine Learning in the Cloud: A Systematic Review,” NCBI/PMC, 2021. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7931962/
- NIST, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” NIST AI 100-1, 2023. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
- NIST, “Adversarial Machine Learning: A Taxonomy and Terminology,” NIST AI 100-2, 2025. [Online]. Available: https://csrc.nist.gov/pubs/ai/100/2/e2025/final
- “MITRE ATLAS: Adversarial Threat Landscape for AI Systems,” 2024. [Online]. Available: https://www.redhat.com/en/blog/harden-your-ai-systems-applying-industry-standards-real-world
- “The Role of AI in Zero Trust Architecture: A Review,” 2024. [Online]. Available: https://jesit.springeropen.com/articles/10.1186/s43067-024-00155-z
- FedRAMP, “FedRAMP Program,” 2026. [Online]. Available: https://www.fedramp.gov
- Amazon Web Services, “AWS GovCloud (U.S.) Security Overview,” 2026. [Online]. Available: https://aws.amazon.com/govcloud-us/
- Microsoft, “Azure Government Compliance Documentation,” 2026. [Online]. Available: https://learn.microsoft.com/en-us/azure/azure-government/documentation-government-compliance
- “FedRAMP Rev. 5 Baselines Transition,” 2025. [Online]. Available: https://www.fedramp.gov/documents-templates/
- StandardFusion, “NIST SP 800-53 Rev. 5 and FedRAMP,” 2026. [Online]. Available: https://www.standardfusion.com/blog/nist-sp-800-53-rev-5-and-fedramp
- M. O. Faruq, “Vendor risk management in cloud-centric architectures: A systematic review of SOC 2, FedRAMP, and ISO 27001 practices,” International Journal of Business and Economics Insights, vol. 4, no. 1, pp. 1–32, 2024.
- M. O. Faruq and M. J. I. Saidur, “Aligning FedRAMP and NIST frameworks in cloud-based governance models: Challenges and best practices,” Review of Applied Science and Technology, vol. 1, no. 1, pp. 1–37, 2022.
- H. Teuscher, “Automating the RMF: Lessons from the FedRAMP 20x pilot,” arXiv preprint arXiv:2510.09613, 2025.
- M. Sayduzzaman, S. Sazzad, M. Rahman, T. Rahman, and M. K. Uddin, “Managing escalating cyber threats: Perspectives and policy insights for Bangladesh,” Technical report, 2024.
- M. Sayduzzaman and M. H. Nawab, “Blockchain-backed ML-based zero-trust honeypot for forensic-ready cyber-physical system security in Industry X,” Journal of Computational Science and Applications, vol. 2, no. 2, pp. 1–10, 2025.