Authors :
Muhammad Yaaseen Hossenbux
Volume/Issue :
Volume 11 - 2026, Issue 2 - February
Google Scholar :
https://tinyurl.com/44nst3b7
Scribd :
https://tinyurl.com/3scwjyxs
DOI :
https://doi.org/10.38124/ijisrt/26feb725
Abstract :
Artificial intelligence (AI) is increasingly embedded within high-stakes human decision systems, including medical
diagnostics, judicial decision-making, financial forecasting, and autonomous control systems. While these technologies
promise improved efficiency, accuracy, and scalability, their growing authority over life-critical and socially consequential
decisions introduces significant ethical, legal, and systemic risks. This paper presents a comprehensive risk analysis of
artificial intelligence in high-stakes decision environments through a structured synthesis of interdisciplinary literature. The
study identifies six major risk categories: algorithmic opacity, data confidentiality vulnerabilities, automation bias, ethical
displacement, systemic fragility, and adversarial manipulation. The analysis demonstrates that as AI systems assume greater
decision-making autonomy, human oversight is progressively reduced, increasing exposure to unpredictable failures, biased
outcomes, and moral misalignment. Furthermore, the interconnected nature of modern socio-technical infrastructures
amplifies these risks, enabling localized algorithmic errors to propagate across institutional and societal systems. To address
these challenges, the paper proposes a conceptual mitigation framework emphasizing transparency, human-centred
oversight, ethical governance, and regulatory alignment. Understanding and proactively managing these risks is essential to
ensure that artificial intelligence enhances human decision-making without undermining accountability, trust, and social
stability.