Explainable AI-Based Resume Screening System to Reduce Hiring Bias in Fresher Recruitment


Authors : P. R. Falke; K. J. Pawar; S. S. Sabat

Volume/Issue : Volume 11 - 2026, Issue 2 - February


Google Scholar : https://tinyurl.com/mvf2n8m8

Scribd : https://tinyurl.com/2tunjd3w

DOI : https://doi.org/10.38124/ijisrt/26feb870



Abstract : The rapid digital transformation of recruitment has led to widespread adoption of Artificial Intelligence (AI)–driven resume screening systems, particularly in large-scale hiring of fresh graduates. Organizations recruiting entry-level candidates often receive thousands of resumes, making manual screening inefficient, inconsistent, and susceptible to human bias. AI systems are therefore increasingly used to automate resume evaluation and candidate shortlisting. While these systems improve efficiency and scalability, they raise significant ethical concerns because of their opaque decision-making. Traditional AI-based recruitment systems function as black-box models, producing decisions without clear explanations. This lack of transparency is especially problematic in fresh graduate recruitment, where candidates have limited work experience and minor variations in resume features can disproportionately influence outcomes. When AI systems reject candidates without stating reasons, they foster mistrust, anxiety, and a sense of unfair treatment among applicants. Hiring bias is another pressing issue: AI systems are typically trained on historical recruitment data, which may embed biases related to college reputation, geographic location, gender, or socioeconomic status, and models trained on such data can perpetuate discriminatory hiring patterns. This is especially harmful in fresher recruitment, where equitable opportunity is crucial for social mobility and workforce diversity. Explainable Artificial Intelligence (XAI) addresses these problems by bringing transparency and interpretability to AI decision-making: XAI techniques provide human-understandable explanations of why a resume is shortlisted or rejected. These insights enable recruiters to audit decisions, detect bias, and ensure ethical compliance, while explainability gives fresh graduates clarity and constructive feedback, strengthening trust in automated hiring. This paper investigates Explainable AI-based resume screening systems aimed at mitigating hiring bias in fresher recruitment. The study examines current AI recruitment practices, identifies ethical and technical limitations, and shows how explainability can improve fairness, accountability, and trust. The findings indicate that explainable AI is essential for responsible, unbiased, and transparent hiring of fresh graduates in contemporary organizations.
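To make the explanation and bias-audit ideas above concrete, the sketch below shows a toy linear resume-scoring model whose per-feature contributions act as a human-readable explanation, together with a four-fifths-rule check of selection rates across candidate groups. This is a minimal illustration only: the feature names, weights, threshold, and group data are all invented for this example and do not come from the paper's system.

```python
# Illustrative sketch: interpretable resume scoring plus a simple adverse-impact audit.
# All feature names, weights, and thresholds here are hypothetical.

FEATURES = ["cgpa", "internships", "projects", "certifications"]
WEIGHTS = {"cgpa": 0.5, "internships": 0.8, "projects": 0.6, "certifications": 0.3}
THRESHOLD = 5.0  # minimum score to be shortlisted (arbitrary for illustration)

def score(resume):
    """Return the total score and per-feature contributions.
    In a linear model, weight * value per feature IS the explanation."""
    contributions = {f: WEIGHTS[f] * resume[f] for f in FEATURES}
    return sum(contributions.values()), contributions

def explain(resume):
    """Return the decision plus features ranked by contribution,
    i.e. a human-readable reason for shortlisting or rejection."""
    total, contributions = score(resume)
    decision = "shortlisted" if total >= THRESHOLD else "rejected"
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return decision, ranked

def four_fifths_check(decisions_by_group):
    """decisions_by_group maps a group label to a list of boolean shortlist
    outcomes. Flags adverse impact when a group's selection rate falls below
    80% of the highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = {g: sum(d) / len(d) for g, d in decisions_by_group.items()}
    best = max(rates.values())
    return {g: (rate, rate >= 0.8 * best) for g, rate in rates.items()}

if __name__ == "__main__":
    candidate = {"cgpa": 8.0, "internships": 1, "projects": 2, "certifications": 1}
    decision, ranked = explain(candidate)
    print(decision, "- top factor:", ranked[0][0])

    audit = four_fifths_check({
        "group_A": [True, True, False, True],
        "group_B": [True, False, False, False],
    })
    print(audit)  # group_B's selection rate fails the four-fifths threshold
```

For nonlinear black-box models, the same per-decision contributions would come from post-hoc XAI methods such as LIME or SHAP rather than directly from the weights, but the auditing pattern is identical: explain each decision, then compare outcomes across groups.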

Keywords : Explainable AI, Resume Screening, Hiring Bias, Recruitment Automation, Fair AI.

References :

  1. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning. fairmlbook.org.
  2. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144.
  3. Doshi-Velez, F., & Kim, B. (2017). Towards a Rigorous Science of Interpretable Machine Learning. arXiv preprint arXiv:1702.08608.
  4. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 54(6), 1–35.
  5. Bogen, M., & Rieke, A. (2018). Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias. Upturn Report.
  6. Gunning, D. (2017). Explainable Artificial Intelligence (XAI). Defense Advanced Research Projects Agency (DARPA).
  7. Kim, B., et al. (2018). Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). Proceedings of the International Conference on Machine Learning (ICML).
  8. Chouldechova, A., & Roth, A. (2020). A Snapshot of the Frontiers of Fairness in Machine Learning. Communications of the ACM, 63(5), 82–89.
  9. Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating Bias in Algorithmic Hiring. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT).
  10. Ajunwa, I. (2020). The Paradox of Automation as Anti-Bias Intervention. Cardozo Law Review, 41, 1671–1743.
  11. Paparrizos, I., Cambazoglu, B. B., & Gionis, A. (2011). Machine Learning Techniques for Resume Classification. Proceedings of the European Conference on Information Retrieval.
  12. Lepri, B., et al. (2018). Fair, Transparent, and Accountable Algorithmic Decision-Making Processes. Philosophy & Technology, 31(4), 611–627.
  13. Mishra, S., & Rathi, P. (2020). Automated Resume Screening Using Machine Learning Techniques. International Journal of Computer Applications, 176(10), 1–5.
  14. Venkatasubramanian, S., & Alfano, M. (2020). The Ethical Challenges of Algorithmic Hiring. Ethics and Information Technology, 22, 43–56.
  15. Zhang, B., Lemoine, B., & Mitchell, M. (2018). Mitigating Unwanted Biases with Adversarial Learning. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES).

