Authors :
Abdulrahim Magaji; Yakubu Magaji; Mukhtar Dahiru; Sulaiman Bello Umar; Ahmad Muhammad Tahir; Umar Abba; Aisha Rabiu Ibrahim; Aminu Ali Lawan
Volume/Issue :
Volume 11 - 2026, Issue 3 - March
Google Scholar :
https://tinyurl.com/kstzt689
Scribd :
https://tinyurl.com/5n7a2pe3
DOI :
https://doi.org/10.38124/ijisrt/26mar1508
Abstract :
The proliferation of Generative Artificial Intelligence (GAI) between 2020 and 2025 has created hyper-realistic
synthetic media, commonly known as deepfakes, which pose significant challenges to digital evidence authenticity in legal
and investigative contexts. This systematic literature review (SLR) critically examines the evolution of deepfake generation
methods (e.g., Generative Adversarial Networks (GANs) and diffusion models) and the corresponding advancements in
forensic detection techniques. The review examines this technical ‘arms race’ dynamic, evaluating the efficacy and
limitations of detection approaches, including forensic analysis, machine learning, and hybrid systems. Findings highlight
that while traditional detection methods struggle with the increased realism of diffusion models, innovative techniques
focusing on physiological signals and adversarial robustness are emerging. The discussion extends to the critical legal and
ethical implications, emphasizing persistent challenges in evidence admissibility and the necessity for comprehensive
regulatory frameworks to mitigate risks associated with misinformation, fraud, and manipulation. We propose a conceptual
framework for forensic readiness focused on media provenance and attribution, underscoring the imperative for continuous
innovation to safeguard a trustworthy digital environment.
Keywords :
Deepfake, Generative Artificial Intelligence, Digital Forensics, Synthetic Media, Forensic Analysis, Evidence Authenticity, Diffusion Models.
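The abstract's proposed forensic-readiness framework centers on media provenance and attribution. As an illustration only (this is not the paper's framework, whose details are in the full text), the core idea of provenance can be sketched as binding a content fingerprint to a known source at capture time, so that any later manipulation is detectable. The sketch below uses a symmetric HMAC for brevity; real provenance systems such as C2PA Content Credentials use asymmetric signatures and signed metadata manifests.

```python
import hashlib
import hmac

def fingerprint(media_bytes: bytes) -> str:
    # Content fingerprint: any later alteration of the media changes this digest.
    return hashlib.sha256(media_bytes).hexdigest()

def attest(media_bytes: bytes, key: bytes) -> str:
    # Keyed attestation tag binding the content fingerprint to a known source
    # (e.g., a capture device). HMAC stands in for a real digital signature.
    return hmac.new(key, fingerprint(media_bytes).encode(), hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, key: bytes, tag: str) -> bool:
    # Recompute the attestation and compare in constant time.
    return hmac.compare_digest(attest(media_bytes, key), tag)

# Hypothetical capture-time attestation and later forensic check.
original = b"\x89PNG\r\n...frame-data..."   # placeholder media bytes
device_key = b"capture-device-secret"       # assumed provisioned secret

tag = attest(original, device_key)
assert verify(original, device_key, tag)            # untampered media verifies
assert not verify(original + b"x", device_key, tag)  # any edit breaks provenance
```

The design point this illustrates is that provenance-based approaches shift the forensic question from "does this media look fake?" (where detectors lag generators in the arms race) to "does this media carry a valid chain of attestation?", which does not degrade as generative models improve.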