Investigating the Use of AI to Detect Cyberbullying


Author : Agastya Desai

Volume/Issue : Volume 10 - 2025, Issue 11 - November


Google Scholar : https://tinyurl.com/4e7d7map

Scribd : https://tinyurl.com/2s3asybh

DOI : https://doi.org/10.38124/ijisrt/25nov532

Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.

Note : Google Scholar may take 30 to 40 days to display the article.


Abstract : This paper investigates the effectiveness of artificial intelligence in detecting cyberbullying across multiple platforms. Using a set of simulated chat logs containing bullying, borderline, and safe interactions, five widely used AI models were tested and compared. Each system's ability to identify harmful language was measured, taking false positives and false negatives into account. The findings demonstrate the progress of AI moderation tools but also emphasize the importance of human involvement and ethical oversight in preventing harm online.
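The evaluation the abstract describes — scoring each model's predictions against labeled chat messages and counting false positives and false negatives — can be sketched as follows. This is an illustrative sketch, not the paper's actual code or data: the labels, messages, and the `score` helper are assumptions for demonstration.

```python
# Hypothetical sketch of the evaluation described in the abstract:
# compare a model's predictions against gold labels for chat messages
# and count false positives (safe flagged as bullying) and false
# negatives (bullying missed as safe). Data here is illustrative.

def score(gold, predicted):
    """Return accuracy plus false-positive and false-negative counts."""
    fp = sum(1 for g, p in zip(gold, predicted)
             if g == "safe" and p == "bullying")
    fn = sum(1 for g, p in zip(gold, predicted)
             if g == "bullying" and p == "safe")
    correct = sum(1 for g, p in zip(gold, predicted) if g == p)
    return {"accuracy": correct / len(gold),
            "false_positives": fp,
            "false_negatives": fn}

# Illustrative gold labels and one model's predictions.
gold      = ["bullying", "safe", "bullying", "safe", "safe"]
predicted = ["bullying", "bullying", "safe", "safe", "safe"]
print(score(gold, predicted))
```

Running the same scoring over each of the five models' outputs would yield the per-model comparison the paper reports; borderline cases could be added as a third label to probe where models disagree.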

References :

  1. Kowalski, R. M., Giumetti, G. W., Schroeder, A. N., & Lattanner, M. R. (2014). Bullying in the digital age: A critical review and meta-analysis of cyberbullying research among youth.
  2. Zhang, Z., Robinson, D., & Tepper, J. (2016). Detecting sarcasm on Twitter: A contrastive approach.
  3. Hosseinmardi, H., Mattson, S. A., Rafiq, R. I., Han, R., Lv, Q., & Mishra, S. (2015). Detection of cyberbullying incidents on the Instagram social network.
  4. Ptaszynski, M., Masui, F., Kimura, Y., Rzepka, R., & Araki, K. (2016). Towards context-aware cyberbullying detection.
  5. Dinakar, K., Reichart, R., & Lieberman, H. (2011). Modeling the detection of textual cyberbullying.
  6. Sap, M., Card, D., Gabriel, S., Choi, Y., & Smith, N. A. (2019). The risk of racial bias in hate speech detection.
  7. Wulczyn, E., Thain, N., & Dixon, L. (2017). Ex machina: Personal attacks seen at scale.


