Authors :
Whenume O. Hundeyin; Samson A. Adegbenro; Yankat P. Rindap; Chinedu Austin Adaba
Volume/Issue :
Volume 10 - 2025, Issue 10 - October
Google Scholar :
https://tinyurl.com/2k25psz4
Scribd :
https://tinyurl.com/yax6453t
DOI :
https://doi.org/10.38124/ijisrt/25oct209
Abstract :
The integration of Artificial Intelligence (AI) into Information Technology environments has transformed organizational processes, yet it has also introduced challenges around privacy, accountability, and regulatory compliance. This study explores how privacy-preserving AI (PPAI) techniques can strengthen IT governance and compliance, and identifies the governance controls internal IT auditors require in AI deployments. A qualitative exploratory research design was adopted, drawing on nine peer-reviewed articles, regulatory frameworks (such as the GDPR, the EU AI Act, the NIST AI RMF, and ISO/IEC standards), and IT governance models (COBIT, ISACA guidelines, ISO/IEC 27001). The analytical process combined thematic analysis with comparative mapping of PPAI techniques to IT governance controls and assurance checkpoints. The findings reveal that federated learning operationalizes privacy-by-design by minimizing raw data transfer, aligning with the GDPR's data-minimization principle. Secure aggregation, homomorphic encryption, and differential privacy strengthen confidentiality and safeguard model outputs against inference attacks, while immutable logging and explainability provide accountability and auditability consistent with ISO/IEC 27701 and the NIST AI RMF. From an assurance perspective, auditors must expand evaluations to cover AI-specific risks, including model integrity, federated learning protocols, and privacy-preserving outputs. The study concludes that PPAI serves not only as a technical safeguard but also as a governance enabler. Recommendations include embedding PPAI in IT operations, updating governance standards, and developing dynamic audit frameworks tailored to AI-driven environments.
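The two mechanisms the abstract pairs, federated learning's data minimization and differential privacy's output protection, can be sketched in a few lines. The toy Python below is illustrative only; the function names, parameters, and the Laplace-mechanism choice are assumptions for the sketch and are not drawn from the study. Each client computes a local summary, perturbs it with calibrated Laplace noise, and the server aggregates only the noisy summaries, so raw records never leave the client:

```python
import math
import random

def local_mean(client_data):
    # Each client summarizes its own records locally; raw data never
    # leaves the client (the federated, privacy-by-design step).
    return sum(client_data) / len(client_data)

def laplace_noise(scale):
    # Inverse-CDF sample from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_federated_mean(clients, epsilon, value_range):
    # Each client adds noise calibrated to (sensitivity / epsilon)
    # before sharing, so the server only ever sees differentially
    # private summaries, guarding against inference on model outputs.
    sensitivity = value_range / min(len(c) for c in clients)
    noisy = [local_mean(c) + laplace_noise(sensitivity / epsilon)
             for c in clients]
    return sum(noisy) / len(noisy)

clients = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]]
# A noisy estimate of the true mean (3.0); never exact by design.
print(round(dp_federated_mean(clients, epsilon=1.0, value_range=5.0), 3))
```

The epsilon parameter makes the governance trade-off explicit: smaller epsilon means stronger privacy but noisier results, which is exactly the kind of documented, auditable control setting the study's assurance perspective calls for.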
References :
- Arrieta, A.B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R. and Chatila, R., 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, pp.82-115.
- Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., McMahan, H.B., Patel, S., Ramage, D., Segal, A. and Seth, K., 2017, October. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 1175-1191).
- Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B. and Anderson, H., 2018. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228.
- Chang, Y., Fang, C. and Sun, W., 2021. A Blockchain‐Based Federated Learning Method for Smart Healthcare. Computational Intelligence and Neuroscience, 2021(1), p.4376418.
- Chin, T., Li, Q., Mirone, F. and Papa, A., 2025. Conflicting impacts of shadow AI usage on knowledge leakage in metaverse-based business models: A Yin-Yang paradox framing. Technology in Society, 81, p.102793.
- De Haes, S., Van Grembergen, W., Joshi, A. and Huygh, T., 2019. COBIT as a Framework for Enterprise Governance of IT. In Enterprise governance of information technology: Achieving alignment and value in digital organizations (pp. 125-162). Cham: Springer International Publishing.
- Dwork, C. and Roth, A., 2014. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4), pp.211-407.
- Edge, 2024. Top Cloud Security Statistics in 2024. Available at: https://www.bing.com/ck/a?\ (Accessed: August 24, 2025).
- Floridi, L. and Taddeo, M., 2016. What is data ethics?. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), p.20160360.
- Hamdi, Z., Norman, A.A., Molok, N.N.A. and Hassandoust, F., 2019, December. A comparative review of ISMS implementation based on ISO 27000 series in organizations of different business sectors. In Journal of Physics: Conference Series (Vol. 1339, No. 1, p. 012103). IOP Publishing.
- ISACA, 2019. COBIT 2019 Framework: Introduction and Methodology. Schaumburg, IL: ISACA.
- ISO/IEC, 2017. ISO/IEC 38502:2017 Information technology – Governance of IT – Framework and model. Geneva, Switzerland: ISO/IEC.
- Jobin, A., Ienca, M. and Vayena, E., 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), pp.389-399.
- Kairouz, P., McMahan, H.B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A.N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R. and D’Oliveira, R.G., 2021. Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1–2), pp.1-210.
- Kalodanis, K., Feretzakis, G., Anastasiou, A., Rizomiliotis, P., Anagnostopoulos, D. and Koumpouros, Y., 2025. A Privacy-Preserving and Attack-Aware AI Approach for High-Risk Healthcare Systems Under the EU AI Act. Electronics, 14(7), p.1385.
- Khan, M.S. (2025). How to audit AI and autonomous agents: A practical guide for internal auditors and GRC teams, Linkedin.com. Available at: https://www.linkedin.com/pulse/how-audit-ai-autonomous-agents-practical-guide-internal-khan-av3mf (Accessed: August 23, 2025).
- Long, Y., Chen, Y., Ren, W., Dou, H. and Xiong, N.N., 2020. DePET: A decentralized privacy-preserving energy trading scheme for vehicular energy network via blockchain and k-anonymity. IEEE Access, 8, pp.192587-192596.
- McIntosh, T.R., Susnjak, T., Liu, T., Watters, P., Xu, D., Liu, D., Nowrozy, R. and Halgamuge, M.N., 2024. From COBIT to ISO 42001: Evaluating cybersecurity frameworks for opportunities, risks, and regulatory compliance in commercializing large language models. Computers & Security, 144, p.103964.
- McMahan, B., Moore, E., Ramage, D., Hampson, S. and y Arcas, B.A., 2017, April. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics (pp. 1273-1282). PMLR.
- Mindmapai, (2025). AI, privacy, and encryption: A comprehensive guide. Mindmapai.app. Available at: https://mindmapai.app/mind-mapping/ai-privacy-and-encryption (Accessed: September 12, 2025).
- Norberg, P.A., Horne, D.R. and Horne, D.A., 2007. The privacy paradox: Personal information disclosure intentions versus behaviors. Journal of Consumer Affairs, 41(1), pp.100-126.
- Olson, D. (2025) Federated learning and privacy-preserving AI, Linkedin.com. Available at: https://www.linkedin.com/pulse/federated-learning-privacy-preserving-ai-douglas-olson-yyjbc (Accessed: August 23, 2025)
- O'Neil, C., 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
- Peacock, S.E., 2014. How web tracking changes user agency in the age of Big Data: The used user. Big Data & Society, 1(2), p.2053951714564228.
- Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.F., Breazeal, C., Crandall, J.W., Christakis, N.A., Couzin, I.D., Jackson, M.O. and Jennings, N.R., 2019. Machine behaviour. Nature, 568(7753), pp.477-486.
- Ramachandran, A. (2024) The transformative impact of artificial intelligence on internal controls, controls audit procedures and testing: A comprehensive analysis, Linkedin.com. Available at: https://www.linkedin.com/pulse/transformative-impact-artificial-intelligence-audit-ramachandran-zbwqe (Accessed: August 23, 2025).
- Rieke, N., Hancox, J., Li, W., Milletari, F., Roth, H.R., Albarqouni, S., Bakas, S., Galtier, M.N., Landman, B.A., Maier-Hein, K. and Ourselin, S., 2020. The future of digital health with federated learning. npj Digital Medicine, 3(1), p.119.
- Rubio, J.L. and Arcilla, M., 2019. How to optimize the implementation of ITIL through a process ordering algorithm. Applied Sciences, 10(1), p.34.
- Salako, A.O., Fabuyi, J.A., Aideyan, N.T., Selesi-Aina, O., Dapo-Oyewole, D.L. and Olaniyi, O.O., 2024. Advancing information governance in AI-driven cloud ecosystem: Strategies for enhancing data security and meeting regulatory compliance. Asian Journal of Research in Computer Science, 17(12), pp.66-88.
- Shokri, R., Stronati, M., Song, C. and Shmatikov, V., 2017, May. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP) (pp. 3-18). IEEE.
- Szarmach, J. (2025). AI governance controls mega-map, AI Governance Library. Available at: https://www.aigl.blog/ai-governance-controls-mega-map-feb-2025/ (Accessed: September 12, 2025).
- Tang, X., Zhu, L., Shen, M., Peng, J., Kang, J., Niyato, D. and Abd El-Latif, A.A., 2022. Secure and trusted collaborative learning based on blockchain for artificial intelligence of things. IEEE Wireless Communications, 29(3), pp.14-22.
- Truong, N., Sun, K., Wang, S., Guitton, F. and Guo, Y., 2021. Privacy preservation in federated learning: An insightful survey from the GDPR perspective. Computers & Security, 110, p.102402.
- Wang, N., Yang, W., Wang, X., Wu, L., Guan, Z., Du, X. and Guizani, M., 2024. A blockchain based privacy-preserving federated learning scheme for Internet of Vehicles. Digital Communications and Networks, 10(1), pp.126-134.
- Wilkin, C.L. and Chenhall, R.H., 2020. Information technology governance: Reflections on the past and future directions. Journal of Information Systems, 34(2), pp.257-292.
- Wirtz, B.W., Weyerer, J.C. and Kehl, I., 2022. Governance of artificial intelligence: A risk and guideline-based integrative framework. Government Information Quarterly, 39(4), p.101685.
- Yilmaz, E. and Can, O., 2024. Unveiling shadows: Harnessing artificial intelligence for insider threat detection. Engineering, Technology & Applied Science Research, 14(2), pp.13341-13346.
- Zhao, J., Bagchi, S., Avestimehr, S., Chan, K., Chaterji, S., Dimitriadis, D., Li, J., Li, N., Nourian, A. and Roth, H., 2025. The federation strikes back: A survey of federated learning privacy attacks, defenses, applications, and policy landscape. ACM Computing Surveys, 57(9), pp.1-37.