SocioSafe-Net: Detecting and Reporting Cyberbullying in Social Networks

  • Unique Paper ID: 189532
  • Volume: 12
  • Issue: 7
  • PageNo: 6427-6432
  • Abstract: The growth of social media has increased instances of cyberbullying, where users face abusive or harmful online behavior. Due to the huge volume of posts, manual monitoring is ineffective. This project proposes an automated cyberbullying detection system using Natural Language Processing (NLP) and Long Short-Term Memory (LSTM) networks to classify text as hate speech, offensive, or non-offensive. A labeled social media dataset is preprocessed through cleaning, tokenization, lemmatization, and word embedding for model training. Experimental results demonstrate that the proposed LSTM model effectively identifies harmful content with high accuracy. The system supports early detection and promotes safer online communication environments.
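The preprocessing steps named in the abstract (cleaning, tokenization, and integer encoding ahead of an embedding layer) can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the function names, regex rules, and padding length are assumptions, and in practice lemmatization and word embeddings would come from libraries such as NLTK and Keras rather than the standard library.

```python
import re

def clean(text):
    """Lowercase, strip URLs, @mentions, and punctuation."""
    text = text.lower()
    text = re.sub(r"https?://\S+|@\w+", " ", text)  # drop URLs and @mentions
    text = re.sub(r"[^a-z\s]", " ", text)           # keep letters only
    return re.sub(r"\s+", " ", text).strip()

def tokenize(text):
    """Whitespace tokenization after cleaning."""
    return clean(text).split()

def build_vocab(corpus):
    """Map each word to an integer id; 0 is reserved for padding."""
    vocab = {}
    for post in corpus:
        for word in tokenize(post):
            vocab.setdefault(word, len(vocab) + 1)
    return vocab

def encode(text, vocab, max_len=10):
    """Integer-encode and pad/truncate to a fixed length for an LSTM input."""
    ids = [vocab.get(w, 0) for w in tokenize(text)][:max_len]
    return ids + [0] * (max_len - len(ids))

posts = ["You are awesome!", "I hate you @user http://x.co"]
vocab = build_vocab(posts)
print(encode(posts[0], vocab))  # → [1, 2, 3, 0, 0, 0, 0, 0, 0, 0]
```

The fixed-length integer sequences produced by `encode` are what an embedding layer followed by an LSTM and a three-way softmax (hate speech / offensive / non-offensive) would consume for classification.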

Copyright & License

Copyright © 2026. The authors retain the copyright of this article. This article is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{189532,
        author = {Pratheeksha Jain S R and Sangeetha B V and Noor Fiza and Mansi Rammurthy D and Kavitha C R},
        title = {SocioSafe-Net: Detecting and Reporting Cyberbullying in Social Networks},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {7},
        pages = {6427--6432},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=189532},
        abstract = {The growth of social media has increased instances of cyberbullying, where users face abusive or harmful online behavior. Due to the huge volume of posts, manual monitoring is ineffective. This project proposes an automated cyberbullying detection system using Natural Language Processing (NLP) and Long Short-Term Memory (LSTM) networks to classify text as hate speech, offensive, or non-offensive. A labeled social media dataset is preprocessed through cleaning, tokenization, lemmatization, and word embedding for model training. Experimental results demonstrate that the proposed LSTM model effectively identifies harmful content with high accuracy. The system supports early detection and promotes safer online communication environments.},
        keywords = {Cyberbullying, Hate Speech Detection, LSTM, Machine Learning, Natural Language Processing, Social Media},
        month = {December},
        }

Cite This Article

R, P. J. S., V, S. B., Fiza, N., D, M. R., & R, K. C. (2025). SocioSafe-Net: Detecting and Reporting Cyberbullying in Social Networks. International Journal of Innovative Research in Technology (IJIRT), 12(7), 6427–6432.