Authors :
Yash Mirchandani
Volume/Issue :
Volume 10 - 2025, Issue 8 - August
Google Scholar :
https://tinyurl.com/5butd482
Scribd :
https://tinyurl.com/dhu59rkz
DOI :
https://doi.org/10.38124/ijisrt/25aug952
Abstract :
The world we live in today is shaped by technological advancement, and among its many breakthroughs, Artificial Intelligence (AI) has emerged as one of the most prominent. No longer confined to science-fiction visions of the future, AI is now part and parcel of daily life and human decision-making, leaving its imprint on an ever-growing number of sectors. This directly affects human well-being, but it also raises a growing question of trust in AI systems, an inquiry that has become increasingly urgent and demands immediate attention.
Artificial Intelligence serves many purposes, but it has made especially substantial contributions in crucial sectors such as education, healthcare, and finance, where its incorporation can have direct consequences for individuals' lives. Despite this life-changing potential, however, there is a persistent problem of public trust in AI and its related technologies. This stems primarily from the "black-box" nature of many models, which makes their decision-making processes opaque and highly difficult to interpret. [1]
Explainable AI (XAI) has emerged as a crucial response to this challenge. The purpose of XAI is to make algorithmic outcomes more transparent, interpretable, and accountable; in simpler terms, it focuses on making AI systems more comprehensible to humans. [2] This paper explores the role of Explainable AI in building and sustaining public trust, focusing specifically on its applications in education, healthcare, and finance. Through these cases, it seeks to demonstrate how enhancing transparency and accountability through XAI can foster greater trust and the responsible adoption of AI in these critical sectors.
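To make concrete the kind of transparency XAI aims for, consider a minimal sketch (illustrative Python, not from the paper; the features, weights, and baseline values are hypothetical). For a linear model, each feature's contribution relative to a baseline is an exact, additive explanation of the output, which is the intuition behind attribution methods such as SHAP:

```python
# Hypothetical white-box scoring model: all weights are visible,
# so the explanation is exact rather than a post-hoc approximation.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
BASELINE = {"income": 50.0, "debt_ratio": 0.3, "years_employed": 5.0}

def score(applicant):
    # Linear model: weighted sum of feature values.
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    # Contribution of each feature = weight * (value - baseline value).
    return {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}

applicant = {"income": 60.0, "debt_ratio": 0.5, "years_employed": 2.0}
contrib = explain(applicant)

# The per-feature contributions sum exactly to the change in score
# relative to the baseline -- an additive, auditable explanation.
assert abs(sum(contrib.values()) - (score(applicant) - score(BASELINE))) < 1e-9
```

In a genuine black-box model no such decomposition is directly available, which is precisely why XAI techniques that approximate it have become central to trust-building.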
To this end, the paper adopts a qualitative approach informed by published literature, case examples, and policy briefings, which makes it possible to consider critically how explainability affects perceptions of fairness, dependability, and liability. In the education sector, the paper examines how transparent grading and admission algorithms can enhance acceptance among students, parents, and educators. In healthcare, it considers the significance of interpretability for clinical decision support systems, whose life-altering judgements require not only accuracy but also human comprehension. [3] Likewise, explainability in finance can strengthen credit scoring, fraud detection, and robo-advisory systems, providing mechanisms that safeguard consumer trust and compliance with regulatory frameworks. [4]
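The finance case can be illustrated with a small sketch (hypothetical Python, not from the paper; the rule names and thresholds are invented for illustration). A rule-based fraud flag that reports which rules fired gives the consumer and the regulator a concrete reason for each decision, rather than an unexplained refusal:

```python
# Illustrative rule set: each rule has a human-readable name so the
# decision can be explained as "flagged because these rules fired".
RULES = [
    ("amount_above_limit", lambda t: t["amount"] > 10_000),
    ("foreign_and_night", lambda t: t["foreign"] and t["hour"] < 5),
    ("rapid_repeat", lambda t: t["tx_last_hour"] >= 5),
]

def check(tx):
    # Collect the names of every rule that fires for this transaction.
    reasons = [name for name, rule in RULES if rule(tx)]
    return {"flagged": bool(reasons), "reasons": reasons}

tx = {"amount": 12_500, "foreign": True, "hour": 14, "tx_last_hour": 1}
result = check(tx)
```

Real fraud systems are far more complex, but the principle scales: pairing each automated decision with machine-readable reasons is what makes regulatory compliance and consumer recourse possible.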
Lastly, the paper identifies cross-sectoral themes, including the balance between accuracy and interpretability, the ethical dangers of oversimplified explanations, and the role of cultural and social contexts in trust-building. It concludes by outlining future directions and emphasising the need for standardised frameworks, policy interventions, and greater public engagement in shaping trustworthy AI systems. By situating XAI within the broader technology discourse, with a focus on ethics and accountability, the paper further contextualises its significance for responsible innovation and sustained public trust in AI decision-making.
Keywords :
Explainable AI, Trust, Transparency, Ethics, Accountability.
References :
- W. J. Von Eschenbach, "Transparency and the black box problem: Why we do not trust AI," Philosophy & Technology, vol. 34, no. 4, pp. 1607-1622, 01 September 2021.
- R. Dwivedi, D. Dave, H. Naik, S. Singhal, R. Omer, P. Patel, B. Qian, Z. Wen, T. Shah, G. Morgan and R. Ranjan, "Explainable AI (XAI): Core Ideas, Techniques, and Solutions," ACM Computing Surveys (ACM Comput. Surv.), vol. 55, no. 9, p. 33, September 2023.
- T. Hulsen, "Explainable artificial intelligence (XAI): concepts and challenges in healthcare," AI, vol. 4, no. 3, pp. 652-666, 2023.
- A. N. A. O. S. T. N. K. A. J. A. I. Anang, "Explainable AI in financial technologies: Balancing innovation with regulatory compliance," International Journal of Science and Research Archive, vol. 13, p. 1793–1806, 30 October 2024.
- W. Ertel, Introduction to Artificial Intelligence, Springer Nature, 2024.
- T. Mucci, "The future of AI: trends shaping the next 10 years," IBM, 11 October 2024. [Online]. Available: https://www.ibm.com/think/insights/artificial-intelligence-future. [Accessed 07 August 2025].
- R. Hassan, N. Nguyen, S. R. Finserås, L. Adde, I. Strümke and R. Støen, "Unlocking the black box: Enhancing human-AI collaboration in high-stakes healthcare scenarios through explainable AI," Technological Forecasting and Social Change, vol. 219, 2025.
- C. Marshall, "What is AI transparency? A comprehensive guide," Zendesk Blog, 7 August 2025. [Online]. Available: https://www.zendesk.com/in/blog/ai-transparency/. [Accessed 11 August 2025].
- V. Hassija, V. Chamola, A. Mahapatra, A. Singal, D. Goel, K. Huang, S. Scardapane, I. Spinelli, M. Mahmud and A. Hussain, "Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence," Cognitive Computation, vol. 16, pp. 45-74, 24 August 2023.
- GeeksforGeeks, "Explainable Artificial Intelligence (XAI)," GeeksforGeeks, 15 April 2025. [Online]. Available: https://www.geeksforgeeks.org/artificial-intelligence/explainable-artificial-intelligencexai/. [Accessed 11 August 2025].
- S. U. Hamida, M. J. M. Chowdhury, N. R. Chakraborty, K. Biswas and S. K. Sami, "Exploring the Landscape of Explainable Artificial Intelligence (XAI): A Systematic Review of Techniques and Applications," Big Data and Cognitive Computing, vol. 8, no. 11, p. 149, 31 October 2024.
- K. Devireddy, A Comparative Study of Explainable AI Methods: Model-Agnostic vs. Model-Specific Approaches, vol. 1, Cornell University (arXiv), 2025.
- A. Athar, "SHAP (SHapley Additive exPlanations) And LIME (Local Interpretable Model-agnostic Explanations) for model explainability.," Analytics Vidhya, 04 October 2020. [Online]. Available: https://medium.com/analytics-vidhya/shap-shapley-additive-explanations-and-lime-local-interpretable-model-agnostic-explanations-8c0aa33e91f. [Accessed 08 August 2025].
- D. E. Mathew, D. U. Ebem, A. C. Ikegwu, P. E. Ukeoma and N. F. Dibiaezue, "Recent Emerging Techniques in Explainable Artificial Intelligence to Enhance the Interpretable and Understanding of AI Models for Human," Neural Processing Letters, vol. 57, no. 16, 07 February 2025.
- N. Luhmann, Vertrauen: Ein Mechanismus der Reduktion sozialer Komplexität [Trust: A mechanism for the reduction of social complexity], Stuttgart: Enke, 1973.
- R. Lukyanenko, W. Maass and V. Storey, "Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities," Electronic Markets, vol. 32, p. 1993–2020, 28 November 2022.
- B. K. Riley and A. Dixon, "Emotional and cognitive trust in artificial intelligence: A framework for identifying research opportunities," Current Opinion in Psychology, vol. 58, August 2024.
- M. Kim, R. Huang and S. J. Lennon, "Understanding the role of cognitive and affective trust in consumer-artificial intelligence relationships," in International Textile and Apparel Association Annual Conference Proceedings, 2022.
- J. Schoeffer, Y. Machowski and N. Kuehl, "Perceptions of Fairness and Trustworthiness Based on Explanations in Human vs. Automated Decision-Making," in Hawaii International Conference on System Sciences 2022 (HICSS-55), Hawaii, 13 September 2021.
- F. Doshi-Velez and B. Kim, A roadmap for a rigorous science of interpretability, vol. 2.1, arXiv preprint, 2017.
- M. A. FAHEEM, "AI-Driven Risk Assessment Models: Revolutionizing Credit Scoring and Default Prediction," Iconic Research and Engineering Journals, vol. 5, no. 3, pp. 177-186, September 2021.
- D. Martens, G. Shmueli, T. Evgeniou, K. Bauer, C. Janiesch, S. Feuerriegel, S. Gabel, S. Goethals, T. Greene, N. Klein, M. Kraus, N. Kühl, C. Perlich, W. Verbeke and A. Zharova, "Beware of ‘Explanations’ of AI," arXiv.org, April 2025.
- J. Li, Y. Yang, R. Zhang and Y.-C. Lee, "Overconfident and unconfident AI hinder human-AI collaboration," arXivLabs, 12 February 2024.
- "Technology acceptance model," Wikipedia, [Online]. Available: https://en.wikipedia.org/wiki/Technology_acceptance_model. [Accessed 11 August 2025].
- I. Baroni, G. R. Calegari, D. Scandolari and I. Celino, "AI-TAM: a model to investigate user acceptance and collaborative intention in human-in-the-loop AI applications," Human Computation, vol. 9, no. 1, pp. 1-21, 23 May 2022.
- P. Hayes, "An ethical intuitionist account of transparency of algorithms and its gradations," Business Research, vol. 13, no. 3, p. 849–874, 23 December 2020.
- C. Mougan and J. Brand, "Kantian deontology meets AI alignment: Towards morally grounded fairness metrics," 9 November 2023.
- B. Quinn, "UK exams debacle: how did this year's results end up in chaos?," 17 August 2020. [Online]. Available: https://www.theguardian.com/education/2020/aug/17/uk-exams-debacle-how-did-results-end-up-chaos. [Accessed 10 August 2025].
- G. Leckie and L. Prior, "The 2020 GCSE and A-level 'exam grades fiasco': A secondary data analysis of students' grades and Ofqual's algorithm," The University of Bristol, 2023. [Online]. Available: https://www.bristol.ac.uk/cmm/research/grade/. [Accessed 10 August 2025].
- S. Shead, "How a computer algorithm caused a grading crisis in British schools," CNBC.com, 21 August 2020. [Online]. Available: https://www.cnbc.com/2020/08/21/computer-algorithm-caused-a-grading-crisis-in-british-schools.html. [Accessed 10 August 2025].
- henricodolfing, "Case Study 20: The $4 Billion AI Failure of IBM Watson for Oncology," henricodolfing.com, 07 December 2024. [Online]. Available: https://www.henricodolfing.com/2024/12/case-study-ibm-watson-for-oncology-failure.html. [Accessed 10 August 2025].
- H. Faheem and S. Dutta, "Artificial Intelligence Failure at IBM 'Watson for Oncology'," IBS Center for Management Research, 2022.
- L. O’Leary, "How IBM’s Watson Went From the Future of Health Care to Sold Off for Parts," Slate.com, 31 January 2022. [Online]. Available: https://slate.com/technology/2022/01/ibm-watson-health-failure-artificial-intelligence.html. [Accessed 10 August 2025].
- BBC, "Apple's 'sexist' credit card investigated by US regulator," BBC.com, 11 November 2019. [Online]. Available: https://www.bbc.com/news/business-50365609. [Accessed 10 August 2025].
- W. Knight, "The Apple Card Didn't 'See' Gender—and That's the Problem," Wired.com, 19 November 2019. [Online]. Available: https://www.wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/. [Accessed 10 August 2025].
- N. Vigdor, "Apple Card Investigated After Gender Discrimination Complaints," The New York Times, 10 November 2019. [Online]. Available: https://www.nytimes.com/2019/11/10/business/Apple-credit-card-investigation.html. [Accessed 10 August 2025].
- U. Ehsan, Q. V. Liao, M. Muller, M. O. Riedl and J. D. Weisz, "Expanding Explainability: Towards Social Transparency in AI systems," in CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 07 May 2021.
- M. Price, "The 2020 UK exam fiasco has given ‘algorithms’ a bad name," LinkedIn.com, 13 November 2020. [Online]. Available: https://www.linkedin.com/pulse/2020-uk-exam-fiasco-has-given-algorithms-bad-name-matthew-price/. [Accessed 11 August 2025].
- C. S. Elliot Jones, "Can algorithms ever make the grade?," Ada Lovelace Institute, 18 August 2020. [Online]. Available: https://www.adalovelaceinstitute.org/blog/can-algorithms-ever-make-the-grade/. [Accessed 10 August 2025].
- H. Khosravi, S. Buckingham Shum, G. Chen, C. Conati, Y.-S. Tsai, J. Kay, S. Knight, R. Martinez-Maldonado, S. Sadiq and D. Gašević, "Explainable Artificial Intelligence in education," Computers and Education: Artificial Intelligence, vol. 3, 13 May 2022.
- A. Kumar, P. Tejaswini, O. Nayak, A. Kujur, R. Gupta, A. Rajanand and M. Sahu, "A Survey on IBM Watson and Its Services," Journal of Physics: Conference Series, vol. 2273, 1 May 2022.
- H. J. Park, "Patient perspectives on informed consent for medical AI: A web-based experiment," Digital Health, vol. 10, 30 April 2024.
- J. Amann, A. Blasimme, E. Vayena, D. Frey and V. I. Madai, "Explainability for artificial intelligence in healthcare: a multidisciplinary perspective," BMC Medical Informatics and Decision Making, vol. 20, 30 November 2020.
- A. Kirilenko, A. Kyle, M. Samadi and T. Tuzun, "The Flash Crash: The Impact of High Frequency Trading on an Electronic Market," SSRN Electronic Journal, 26 May 2011.
- K. de Fine Licht and J. Licht, "Artificial intelligence, transparency, and public decision-making: Why explanations are key when trying to produce perceived legitimacy," AI & SOCIETY, vol. 35, December 2020.
- Emerge Digital, "AI Accountability: Who’s Responsible When AI Goes Wrong?," Emerge Digital, [Online]. Available: https://emerge.digital/resources/ai-accountability-whos-responsible-when-ai-goes-wrong/. [Accessed 11 August 2025].
- World Health Organization, "Ethics and Governance of Artificial Intelligence for Health: WHO Guidance," Geneva, 2021.
- S. T. H. Mortaji and M. E. Sadeghi, "Assessing the Reliability of Artificial Intelligence Systems: Challenges, Metrics, and Future Directions," International Journal of Innovation in Management, Economics and Social Sciences, vol. 4, pp. 1-13, 29 June 2024.
- K. d. Costa, "Practical and Societal Dimensions of Explainable AI," Holistic AI, 2 March 2023. [Online]. Available: https://www.holisticai.com/blog/explainable-ai-dimensions. [Accessed 11 August 2025].
- A. Gomstyn and A. Jonker, "Exploring privacy issues in the age of AI," IBM, 30 September 2024. [Online]. Available: https://www.ibm.com/think/insights/ai-privacy. [Accessed 11 August 2025].
- Intertech, "Risks to Proprietary Data During AI Implementation and How To Protect Your Data in an AI System," Intertech, [Online]. Available: https://www.intertech.com/risks-to-proprietary-data-during-ai-implementation-and-how-to-protect-your-data/. [Accessed 11 August 2025].
- D. D. Vashistha, P. P. K. Chandel and S. Gaur, "Investigating Socioeconomic Disparities in Digital Education Experiences," The International Journal of Indian Psychology , vol. 12, no. 3, July - September 2024.
- Y. Liu, W. Yu and T. Dillon, "Regulatory responses and approval status of artificial intelligence medical devices with a focus on China," NPJ Digital Medicine, vol. 7, no. 1, p. 255, 18 September 2024.
- L. Nannini, M. Marchiori Manerba and I. Beretta, "Mapping the landscape of ethical considerations in explainable AI research," Ethics and Information Technology, vol. 26, p. 44, 25 June 2024.
- Niti Aayog, "NATIONAL STRATEGY FOR ARTIFICIAL INTELLIGENCE," Niti Aayog, June 2018.
- P. Kelly-Voicu, "What is Human-in-the-loop (HITL) in AI-assisted decision-making?," June 2023. [Online]. Available: http://1000minds.com/articles/human-in-the-loop. [Accessed 11 August 2025].
- D. Lukose, "Right Human-in-the-Loop for Effective AI," Medium.com, 13 January 2025. [Online]. Available: https://medium.com/@dickson.lukose/building-a-smarter-safer-future-why-the-right-human-in-the-loop-is-critical-for-effective-ai-b2e9c6a3386f. [Accessed 12 August 2025].