Authors :
Kapil Kumar Goyal
Volume/Issue :
Volume 10 - 2025, Issue 5 - May
Google Scholar :
https://tinyurl.com/y3mhk9y3
DOI :
https://doi.org/10.38124/ijisrt/25may1549
Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Abstract :
Machine Learning (ML) models embedded in user-facing systems are widely valued for their ability to automate and personalize experiences. Beneath the surface, however, lies an insidious problem: the growth of silent feedback loops. These loops form when model outputs quietly influence user behavior, which in turn reinforces the model's existing assumptions, allowing passive bias to accumulate over time. In this paper, we propose an end-to-end system to detect, analyze, and mitigate passive bias caused by such feedback loops. We introduce a feedback-aware monitoring architecture, describe real-world application scenarios, and provide empirical methods for quantifying bias propagation. Our approach highlights the performance and ethical consequences of neglecting latent model feedback and offers practical guidelines for responsible deployment.
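As a rough, hypothetical illustration of the kind of measurement the abstract refers to when it mentions quantifying bias propagation (this is not code from the paper; all names, parameters, and the metric choice are assumptions for illustration), the short Python sketch below simulates a toy recommender that retrains only on feedback from its own recommendations and tracks exposure inequality, via the Gini coefficient of logged impressions, as a simple proxy for passive bias accumulating over time.

# Hypothetical sketch (not from the paper): a toy recommender whose training data
# comes only from its own recommendations, so a silent feedback loop forms.
import numpy as np

rng = np.random.default_rng(0)
N_ITEMS, TOP_K, ROUNDS, USERS_PER_ROUND = 50, 5, 30, 200

true_appeal = rng.uniform(0.2, 0.8, N_ITEMS)  # latent probability a user clicks each item
clicks = np.zeros(N_ITEMS)                    # positive feedback logged so far
impressions = np.ones(N_ITEMS)                # exposure logged so far (smoothed to avoid 0/0)

def gini(x):
    # Gini coefficient of exposure: 0 = perfectly even, 1 = all exposure on one item.
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

for t in range(ROUNDS):
    # The "model" ranks items by click-through rate estimated from its own logs.
    ctr_estimate = clicks / impressions
    recommended = np.argsort(-ctr_estimate)[:TOP_K]

    # Users only ever see recommended items, so new feedback mirrors the model's choices.
    for _ in range(USERS_PER_ROUND):
        shown = rng.choice(recommended, size=3, replace=False)
        impressions[shown] += 1
        clicks[shown] += rng.random(3) < true_appeal[shown]

print("exposure Gini after", ROUNDS, "rounds:", round(gini(impressions), 3))
print("items the model settled on:", sorted(recommended.tolist()))
print("items with highest true appeal:", sorted(np.argsort(-true_appeal)[:TOP_K].tolist()))

In this toy setup the exposure Gini climbs from 0 toward 1 within a few rounds, and the set the model locks onto need not contain the items users actually prefer most; that lock-in is the kind of drift a feedback-aware monitor would be designed to flag.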
Keywords :
Feedback Loops, Machine Learning, Passive Bias, Responsible AI, User Interaction, Model Drift, Bias Detection, Recommender Systems
References :
- A. J. B. Chaney, B. M. Stewart, and B. E. Engelhardt, “How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility,” in Proc. RecSys, 2018.
- A. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images,” arXiv preprint arXiv:1412.1897, Dec. 2014. [Online]. Available: https://arxiv.org/abs/1412.1897
- T. Joachims, L. Granka, B. Pan, H. Hembrooke, and G. Gay, “Accurately Interpreting Clickthrough Data as Implicit Feedback,” in Proc. SIGIR, 2005.
- R. Binns, M. Van Kleek, M. Veale, U. Lyngs, J. Zhao, and N. Shadbolt, “‘It’s Reducing a Human Being to a Percentage’: Perceptions of Justice in Algorithmic Decisions,” in Proc. CHI, 2018.
- W. Wang, Y. Zhang, Z. Yan, S. Wang, J. Wu, and C. Wang, “Prompt2Model: Teaching Large Language Models to Write and Run Programs,” arXiv preprint arXiv:2505.12185, May 2025. [Online]. Available: https://arxiv.org/abs/2505.12185
- R. Shah, Y. Li, H. Hu, and M. Sun, “Evidence-Informed Evaluation of Large Language Models,” arXiv preprint arXiv:2505.11509, May 2025. [Online]. Available: https://arxiv.org/abs/2505.11509
- A. Wang, R. Wu, S. Lee, and Y. Xu, “Judging LLM Judges: A Study of Bias in AI Feedback,” arXiv preprint arXiv:2505.11350, May 2025. [Online]. Available: https://arxiv.org/abs/2505.11350
- J. Wang, T. Zhang, Y. Zhang, and M. Wang, “LLM-Smith: Evaluating Robustness of Large Language Models with Adversarial Model Inversion,” arXiv preprint arXiv:2505.07581, May 2025. [Online]. Available: https://arxiv.org/abs/2505.07581