Authors :
Vishvesh Soni
Volume/Issue :
Volume 9 - 2024, Issue 5 - May
Google Scholar :
https://tinyurl.com/yfhm5aes
Scribd :
https://tinyurl.com/4nerkktu
DOI :
https://doi.org/10.38124/ijisrt/IJISRT24MAY2203
Abstract :
This work examines bias identification and mitigation in AI-driven targeted marketing, with an emphasis on ensuring fairness in automated consumer profiling. Preliminary analysis uncovered significant biases in the AI models, driven in particular by features such as purchase history and geographic location, which correlate closely with protected attributes such as race and socioeconomic status. The fairness metrics computed for the original models revealed substantial bias against certain demographic groups: a Disparate Impact (DI) of 0.60, a Statistical Parity Difference (SPD) of -0.25, and an Equal Opportunity Difference (EOD) of -0.30.
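For concreteness, all three metrics can be computed from binary predictions, true labels, and a protected-group indicator. The sketch below is a minimal illustration; the variable names (y_pred, y_true, group) and the 0/1 group encoding are our assumptions, not details taken from the paper.

```python
import numpy as np

def fairness_metrics(y_pred, y_true, group):
    """Compute DI, SPD, and EOD for a binary classifier.

    y_pred, y_true : 0/1 arrays of predictions and true labels
    group          : 1 for the privileged group, 0 for the unprivileged group
    Assumes both groups and both label values are present in the data.
    """
    priv, unpriv = group == 1, group == 0

    # Selection (positive-prediction) rate per group
    rate_priv = y_pred[priv].mean()
    rate_unpriv = y_pred[unpriv].mean()

    # Disparate Impact: ratio of selection rates (1.0 = parity)
    di = rate_unpriv / rate_priv

    # Statistical Parity Difference: gap in selection rates (0.0 = parity)
    spd = rate_unpriv - rate_priv

    # Equal Opportunity Difference: gap in true-positive rates (0.0 = parity)
    tpr_priv = y_pred[priv & (y_true == 1)].mean()
    tpr_unpriv = y_pred[unpriv & (y_true == 1)].mean()
    eod = tpr_unpriv - tpr_priv

    return di, spd, eod
```

Under these definitions, the baseline DI of 0.60 means the unprivileged group receives positive predictions at only 60% of the privileged group's rate; a DI of 1.0 with SPD and EOD of 0.0 would indicate parity.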
To counteract these biases, we applied three main mitigation strategies: pre-processing, in-processing, and post-processing.
Resampling and rebalancing the training data during pre-processing raised the DI to 0.85, the SPD to -0.10, and the EOD to -0.15.
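As one way to realize this pre-processing step, the sketch below oversamples each (group, label) cell of the training data to the size of the largest cell. The paper does not specify its exact rebalancing procedure, so this is an assumed illustration.

```python
import numpy as np

def resample_balanced(X, y, group, rng=np.random.default_rng(0)):
    """Oversample every (group, label) cell to the size of the largest cell.

    Assumes each of the four cells is non-empty.
    """
    cells = [(g, lbl) for g in (0, 1) for lbl in (0, 1)]
    idx_by_cell = {c: np.flatnonzero((group == c[0]) & (y == c[1])) for c in cells}
    target = max(len(idx) for idx in idx_by_cell.values())

    # Draw with replacement so every cell reaches the target size
    resampled = np.concatenate([
        rng.choice(idx, size=target, replace=True) for idx in idx_by_cell.values()
    ])
    return X[resampled], y[resampled], group[resampled]
```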
In-processing, which embeds fairness constraints directly in the learning algorithm, improved the metrics further, yielding a DI of 0.90, an SPD of -0.05, and an EOD of -0.10.
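In-processing can take several forms; a common one, sketched below, adds a statistical-parity penalty to a logistic-regression loss and trains by plain gradient descent. The penalty weight lam, the learning rate, and all names are our assumptions rather than the paper's stated method.

```python
import numpy as np

def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, steps=2000):
    """Logistic regression minimizing log-loss + lam * (score gap between groups)**2."""
    w = np.zeros(X.shape[1])
    priv, unpriv = group == 1, group == 0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        grad_ll = X.T @ (p - y) / len(y)        # gradient of the mean log-loss

        # Fairness penalty: squared gap between the groups' mean scores
        gap = p[unpriv].mean() - p[priv].mean()
        dp = p * (1.0 - p)                      # derivative of the sigmoid
        grad_gap = (X[unpriv] * dp[unpriv][:, None]).mean(axis=0) \
                 - (X[priv] * dp[priv][:, None]).mean(axis=0)

        w -= lr * (grad_ll + 2.0 * lam * gap * grad_gap)
    return w
```

Raising lam trades predictive accuracy for a smaller score gap between groups, which is the essential tension in any in-processing approach.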
Post-processing adjustments, which modify model outputs to enforce fairness, proved the most effective, producing a DI of 0.95, an SPD of -0.02, and an EOD of -0.05.
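One simple post-processing adjustment consistent with this description is a per-group decision threshold chosen so that both groups are selected at the same overall rate. The sketch below quantile-matches the cutoffs; again, this is an assumed illustration, not the paper's exact adjustment.

```python
import numpy as np

def group_thresholds(scores, group, target_rate):
    """Choose a per-group score cutoff so each group is selected at target_rate."""
    # Cutting at the (1 - target_rate) quantile selects ~target_rate of the group
    return {g: np.quantile(scores[group == g], 1.0 - target_rate) for g in (0, 1)}

def apply_thresholds(scores, group, cutoffs):
    """Apply the group-specific cutoffs to produce 0/1 decisions."""
    cut = np.where(group == 1, cutoffs[1], cutoffs[0])
    return (scores >= cut).astype(int)
```

Equalizing selection rates directly targets DI and SPD; choosing cutoffs that align true-positive rates instead would target EOD.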
These results are consistent with the existing literature and demonstrate that bias in AI is a complex, persistent problem that calls for a multidimensional strategy. The paper highlights how crucial ongoing audits, transparency, and interdisciplinary collaboration are to reducing bias. The implications are profound for marketers, AI practitioners, and legislators, underscoring the need for ethical AI practices to preserve consumer trust and comply with regulation. This work advances the broader discussion on AI ethics, promoting fairness and reducing discrimination in AI-driven marketing systems.