Weaponized Aesthetics: The Role of Identity Hijacking, Bot-Mediated Sentiment, and Visual Phishing in the Acceleration of Information Adoption on Instagram


Author : Olasunkanmi Adesanya Ogunade

Volume/Issue : Volume 11 - 2026, Issue 1 - January


Google Scholar : https://tinyurl.com/2xm4mhz8

Scribd : https://tinyurl.com/y94nbycm

DOI : https://doi.org/10.38124/ijisrt/26jan1267

Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.


Abstract : In the rapidly evolving digital landscape, the “Trust Velocity Gap” has emerged as a pivotal vulnerability undermining both organisational and personal brand equity. This paper critically examines the mechanisms of Weaponized Aesthetics on Instagram, with particular attention to the proliferation of AI-generated visuals, the escalation of identity hijacking as exemplified by the 2024 Davido wedding organiser hack, and the amplification of misinformation through toxic, bot-mediated comment sections. Employing qualitative analysis of recent case studies, including high-profile instances of celebrity character assassination and sophisticated AT&T phishing schemes, this research elucidates how “Social Proof” is artificially constructed via coordinated inauthentic behaviour (CIB). The study introduces an expanded Triple-Lock Framework as a defensive paradigm, contending that by 2026, the adoption of information will increasingly be governed by algorithmic and psychological manipulation rather than by content quality. The findings underscore the urgent need for interdisciplinary strategies that address the interplay between technological affordances and human cognition, offering a robust model for mitigating the accelerated spread of misinformation in contemporary digital environments.
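The abstract's claim that "Social Proof" is artificially constructed via coordinated inauthentic behaviour (CIB) can be illustrated with a minimal heuristic: when the same comment text is posted verbatim by unusually many distinct accounts, the engagement is likely bot-mediated rather than organic. The sketch below is purely illustrative and is not a method from the paper; the function name, threshold, and sample data are assumptions for demonstration only.

```python
from collections import defaultdict

def flag_coordinated_comments(comments, min_accounts=3):
    """Flag comment texts posted verbatim by many distinct accounts.

    A crude proxy for coordinated inauthentic behaviour (CIB):
    identical comments from several accounts suggest bot-mediated
    amplification of "social proof" rather than organic engagement.
    The min_accounts threshold is an illustrative assumption.
    """
    accounts_by_text = defaultdict(set)
    for account, text in comments:
        # Normalise lightly so trivial case differences still cluster.
        accounts_by_text[text.strip().lower()].add(account)
    return {
        text: sorted(accts)
        for text, accts in accounts_by_text.items()
        if len(accts) >= min_accounts
    }

# Hypothetical comment stream: (account, text) pairs.
comments = [
    ("bot_01", "This is 100% real, everyone share!"),
    ("bot_02", "This is 100% real, everyone share!"),
    ("bot_03", "this is 100% real, everyone share!"),
    ("user_a", "Has anyone verified this with the official account?"),
]

flagged = flag_coordinated_comments(comments)
print(flagged)  # the repeated text maps to the three bot accounts
```

Real CIB detection would add timing analysis, near-duplicate text matching, and account-age signals; exact-string clustering is only the simplest observable trace of the behaviour the paper describes.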


