Risk Analysis of Artificial Intelligence in High-Stakes Human Decision Systems


Author : Muhammad Yaaseen Hossenbux

Volume/Issue : Volume 11 - 2026, Issue 2 - February


Google Scholar : https://tinyurl.com/44nst3b7

Scribd : https://tinyurl.com/3scwjyxs

DOI : https://doi.org/10.38124/ijisrt/26feb725



Abstract : Artificial intelligence (AI) is increasingly embedded within high-stakes human decision systems, including medical diagnostics, judicial decision-making, financial forecasting, and autonomous control systems. While these technologies promise improved efficiency, accuracy, and scalability, their growing authority over life-critical and socially consequential decisions introduces significant ethical, legal, and systemic risks. This paper presents a comprehensive risk analysis of artificial intelligence in high-stakes decision environments through a structured synthesis of interdisciplinary literature. The study identifies six major risk categories: algorithmic opacity, data confidentiality vulnerabilities, automation bias, ethical displacement, systemic fragility, and adversarial manipulation. The analysis demonstrates that as AI systems assume greater decision-making autonomy, human oversight is progressively reduced, increasing exposure to unpredictable failures, biased outcomes, and moral misalignment. Furthermore, the interconnected nature of modern socio-technical infrastructures amplifies these risks, enabling localized algorithmic errors to propagate across institutional and societal systems. To address these challenges, the paper proposes a conceptual mitigation framework emphasizing transparency, human-centred oversight, ethical governance, and regulatory alignment. Understanding and proactively managing these risks is essential to ensure that artificial intelligence enhances human decision-making without undermining accountability, trust, and social stability.


