Bias Detection and Mitigation in AI-Driven Target Marketing: Exploring Fairness in Automated Consumer Profiling


Author: Vishvesh Soni

Volume/Issue: Volume 9 - 2024, Issue 5 - May

Google Scholar: https://tinyurl.com/yfhm5aes

Scribd: https://tinyurl.com/4nerkktu

DOI: https://doi.org/10.38124/ijisrt/IJISRT24MAY2203

Abstract: This work examines bias detection and mitigation in AI-driven target marketing, with an emphasis on ensuring fairness in automated consumer profiling. Preliminary analysis revealed significant biases in the AI models, driven in particular by features such as purchasing history and geographic location, which correlate closely with protected attributes such as race and socioeconomic status. Fairness metrics computed for the original models confirmed substantial bias against certain population groups: a Disparate Impact (DI) of 0.60, a Statistical Parity Difference (SPD) of -0.25, and an Equal Opportunity Difference (EOD) of -0.30. To counteract these biases, we applied three main mitigation strategies: pre-processing, in-processing, and post-processing. Pre-processing, which re-samples and balances the training data, raised the DI to 0.85, the SPD to -0.10, and the EOD to -0.15. In-processing, which embeds fairness constraints directly into the learning algorithms, improved the metrics further, to a DI of 0.90, an SPD of -0.05, and an EOD of -0.10. Post-processing adjustments, which modify model outputs to enforce fairness, were the most effective, yielding a DI of 0.95, an SPD of -0.02, and an EOD of -0.05. These results are consistent with the published literature and show that bias in AI is a complex, persistent problem that demands a multidimensional strategy. The paper underscores the importance of ongoing audits, transparency, and interdisciplinary collaboration in reducing bias. The implications are significant for marketers, AI practitioners, and policymakers, highlighting the need for ethical AI practices to preserve consumer trust and comply with regulations. This work contributes to the broader discussion of AI ethics, promoting fairness and reducing discrimination in AI-driven marketing systems.
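The three fairness metrics reported in the abstract have standard definitions: DI is the ratio of positive-prediction rates between the unprivileged and privileged groups (1.0 is parity), SPD is the difference of those rates (0 is parity), and EOD is the gap in true-positive rates. The sketch below is illustrative and not the paper's own code; the function name and data layout are assumptions, and it assumes binary labels with the protected-group flag encoded as 1.

```python
import numpy as np

def fairness_metrics(y_pred, y_true, protected):
    """Compute DI, SPD, and EOD for a binary classifier.

    y_pred, y_true: 0/1 arrays; protected: 1 marks the unprivileged group.
    """
    y_pred = np.asarray(y_pred)
    y_true = np.asarray(y_true)
    protected = np.asarray(protected).astype(bool)

    # Selection (positive-prediction) rate per group
    rate_unpriv = y_pred[protected].mean()
    rate_priv = y_pred[~protected].mean()

    # Disparate Impact: ratio of selection rates (1.0 = parity)
    di = rate_unpriv / rate_priv

    # Statistical Parity Difference: difference of selection rates (0 = parity)
    spd = rate_unpriv - rate_priv

    # Equal Opportunity Difference: true-positive-rate gap,
    # computed only over individuals whose true label is positive
    tpr_unpriv = y_pred[protected & (y_true == 1)].mean()
    tpr_priv = y_pred[~protected & (y_true == 1)].mean()
    eod = tpr_unpriv - tpr_priv

    return {"DI": di, "SPD": spd, "EOD": eod}
```

Open-source toolkits such as IBM's AI Fairness 360 provide audited implementations of these same metrics, along with the pre-, in-, and post-processing mitigation algorithms the abstract describes.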


