Bias and Fairness in AI Models: How can Bias in AI Models be Identified, Mitigated, and Prevented in Data Science Practices?


Authors : Shaik Mohammad Jani Basha; Aditya Kulkarni; Subhangi Choudhary; Manognya Lokesh Reddy

Volume/Issue : Volume 9 - 2024, Issue 9 - September


Google Scholar : https://tinyurl.com/55xw7jsy

Scribd : https://tinyurl.com/4rrk35ja

DOI : https://doi.org/10.38124/ijisrt/IJISRT24SEP789

Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.


Abstract : Artificial intelligence (AI) and machine learning (ML) systems are increasingly used across many domains to make critical decisions that affect people's lives. However, these systems can perpetuate and even amplify existing social biases, leading to unfair outcomes. This paper examines the sources of bias in AI models, evaluates current techniques for identifying and mitigating bias, and proposes a comprehensive framework for developing fairer AI systems. By integrating technical, ethical, and operational perspectives, this research aims to contribute to a more equitable use of AI across different sectors, ensuring that AI-driven decisions are fair, transparent, and socially responsible.

Keywords : Artificial Intelligence (AI), Machine Learning (ML), Bias, Fair AI systems, Bias Mitigation.
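
As a minimal illustration of the kind of bias-detection checks surveyed in the paper, the Python sketch below computes two widely used group-fairness metrics: statistical parity difference and the equal-opportunity (true-positive-rate) gap in the sense of Hardt et al. (2016), who are cited in the references. The data, protected-attribute labels, and function names are hypothetical and are not drawn from the paper itself.

```python
# Illustrative sketch (not from the paper): two common group-fairness checks
# computed with plain NumPy on hypothetical predictions and group labels.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(y_hat = 1 | group = 1) - P(y_hat = 1 | group = 0)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_gap(y_true, y_pred, group):
    """True-positive-rate difference between groups (cf. Hardt et al., 2016)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)    # hypothetical protected attribute
    y_true = rng.integers(0, 2, size=1000)   # hypothetical ground-truth outcomes
    # A deliberately skewed classifier: more positive predictions for group 1.
    y_pred = (rng.random(1000) < np.where(group == 1, 0.6, 0.4)).astype(int)

    print("Statistical parity difference:", statistical_parity_difference(y_pred, group))
    print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
```

Values near zero for both metrics suggest comparable treatment of the two groups under these particular definitions of fairness; which metric is appropriate, and whether the two can be satisfied simultaneously, depends on the application (cf. Kleinberg et al., 2016).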

References :

  1. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias.
  2. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning.
  3. Binns, R. (2018). Fairness in Machine Learning: Lessons from Political Philosophy. In Proceedings of the 2018 ACM Conference on Fairness, Accountability, and Transparency (pp. 1-15). ACM.
  4. Dastin, J. (2018). Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women. Reuters.
  5. Friedman, B., & Nissenbaum, H. (1996). Bias in Computer Systems. ACM Transactions on Information Systems, 14(3), 330-347.
  6. Hardt, M., Price, E., & Srebro, N. (2016). Equality of Opportunity in Supervised Learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems (pp. 3315-3323). Curran Associates, Inc.
  7. Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H. (2019). Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-16). ACM.
  8. Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent Trade-Offs in the Fair Determination of Risk Scores. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science (pp. 1-23). ACM.
  9. Mitchell, M., Turner, C., Karaletsos, T., & Daumé III, H. (2018). Predictive Inequity in Automated Criminal Risk Assessments. In Proceedings of the 2018 ACM Conference on Fairness, Accountability, and Transparency (pp. 510-519). ACM.
  10. Zafar, M. B., Valera, I., Gomez, A., & Roth, A. (2017). Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification Without Disparate Mistreatment. In Proceedings of the 26th International Conference on World Wide Web (pp. 1171-1180). International World Wide Web Conferences Steering Committee.
  11. Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability, and Transparency (pp. 77-91). ACM.
  12. Raji, I. D., & Buolamwini, J. (2019). Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 429-435). ACM.
  13. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys (CSUR), 54(6), 1-35.
  14. Corbett-Davies, S., & Goel, S. (2018). The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning.
  15. Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for Datasets. In Proceedings of the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning.
  16. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science, 366(6464), 447-453.
  17. Wang, T., Zhao, X., & Taylor, A. (2020). Towards Fairness in AI for People with Disabilities: A Case Study on Autism and AI. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 1-10). ACM.
  18. Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2018). 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14). ACM.

