Generative Artificial Intelligence: A Comprehensive Systematic Review of Technological Evolution, Societal Impacts, and Ethical Frontiers (2020-2025)


Authors : Saedah Khader; Lana Haj Yahya

Volume/Issue : Volume 10 - 2025, Issue 12 - December


Google Scholar : https://tinyurl.com/ysczju48

Scribd : https://tinyurl.com/5x4rdcp5

DOI : https://doi.org/10.38124/ijisrt/25dec449



Abstract : This systematic review synthesizes the findings of fifteen peer-reviewed papers published between 2020 and 2025 to examine the development, applications, and implications of generative artificial intelligence. Using a multi-method analytical framework, the review groups the literature into five thematic clusters: (1) architectural innovations and technical foundations; (2) ethical frameworks and governance models; (3) labor market impacts and economic transformations; (4) sectoral applications and domain-specific implementations; and (5) future trajectories and existential considerations. The analysis yields several key conclusions: the rapid acceleration of model capabilities has opened a significant governance gap; the unequal distribution of AI benefits threatens global equity; and emerging ethical challenges demand urgent interdisciplinary responses. The review also identifies persistent research gaps, including the need for longitudinal impact studies, cross-cultural comparative analyses, and integrative governance frameworks. Drawing on perspectives from computer science, economics, ethics, and policy studies, the paper offers a multifaceted framework for responsible AI research that balances technological innovation with societal well-being. The findings suggest that generative AI is not merely a technological development but a civilizational turning point that calls for coordinated international action, greater regulatory flexibility, and a thorough rethinking of paradigms for human-machine collaboration.

Keywords : Generative AI, Large Language Models, AI Ethics, Technological Governance, Labor Market Transformation, Sustainable AI Development, Human-AI Interaction, Algorithmic Accountability.


