Impact of AI Models on Digital Harassment Prevention

  • Unique Paper ID: 181825
  • Volume: 12
  • Issue: 1
  • PageNo: 5617-5622
  • Abstract: The rapid expansion of digital platforms has led to a concerning rise in digital harassment, including cyberbullying, hate speech, and abusive behavior, which disproportionately affects vulnerable groups. Traditional moderation methods have proven inadequate, prompting the adoption of Artificial Intelligence (AI) to detect and mitigate harmful content. This paper investigates the impact of AI models, particularly Machine Learning (ML), Deep Learning (DL), and Explainable AI (XAI), on detecting and mitigating harmful digital content. Through a comparative analysis of AI approaches such as Multinomial Naïve Bayes, Support Vector Machines, Convolutional Neural Networks, and hybrid models, the paper evaluates their effectiveness, strengths, limitations, and applicability. Despite promising advancements, key challenges remain, including bias, scalability, contextual limitations, and ethical considerations. The paper emphasizes the need for human-AI collaboration, transparent moderation frameworks, and continued research to build fair, responsive, and scalable digital harassment prevention systems.
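Among the approaches the abstract compares, Multinomial Naïve Bayes is the simplest baseline for abusive-text detection. As a minimal sketch only (the toy documents, labels, and function names below are illustrative assumptions, not the paper's dataset or implementation), the classifier can be written from scratch with Laplace smoothing:

```python
from collections import Counter
import math

def train_mnb(docs, labels, alpha=1.0):
    """Train Multinomial Naive Bayes on tokenized documents.
    Returns log-priors and per-class word log-likelihoods
    with Laplace (add-alpha) smoothing."""
    classes = set(labels)
    vocab = {w for doc in docs for w in doc}
    priors, likelihoods = {}, {}
    for c in classes:
        class_docs = [d for d, l in zip(docs, labels) if l == c]
        priors[c] = math.log(len(class_docs) / len(docs))
        counts = Counter(w for d in class_docs for w in d)
        total = sum(counts.values())
        likelihoods[c] = {
            w: math.log((counts[w] + alpha) / (total + alpha * len(vocab)))
            for w in vocab
        }
    return priors, likelihoods

def predict_mnb(priors, likelihoods, doc):
    """Pick the class maximizing log-prior + summed word log-likelihoods;
    words unseen in training are skipped."""
    scores = {
        c: priors[c] + sum(likelihoods[c].get(w, 0.0) for w in doc)
        for c in priors
    }
    return max(scores, key=scores.get)

# Hypothetical toy corpus for illustration only.
docs = [["you", "are", "stupid"], ["have", "a", "nice", "day"],
        ["stupid", "idiot"], ["nice", "work", "today"]]
labels = ["abusive", "benign", "abusive", "benign"]
priors, lik = train_mnb(docs, labels)
print(predict_mnb(priors, lik, ["you", "idiot"]))  # → abusive
```

Real moderation pipelines would replace the toy corpus with a labeled harassment dataset and add tokenization, TF-IDF weighting, or the deep models discussed in the paper; this sketch only shows the probabilistic baseline the comparison starts from.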

Copyright & License

Copyright © 2025. The authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{181825,
        author = {Jayshri Patel},
        title = {Impact of AI Models on Digital Harassment Prevention},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {1},
        pages = {5617--5622},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=181825},
        abstract = {The rapid expansion of digital platforms has led to a concerning rise in digital harassment, including cyberbullying, hate speech, and abusive behavior, which disproportionately affects vulnerable groups. Traditional moderation methods have proven inadequate, prompting the adoption of Artificial Intelligence (AI) to detect and mitigate harmful content. This paper investigates the impact of AI models, particularly Machine Learning (ML), Deep Learning (DL), and Explainable AI (XAI), on detecting and mitigating harmful digital content. Through a comparative analysis of AI approaches such as Multinomial Naïve Bayes, Support Vector Machines, Convolutional Neural Networks, and hybrid models, the paper evaluates their effectiveness, strengths, limitations, and applicability. Despite promising advancements, key challenges remain, including bias, scalability, contextual limitations, and ethical considerations. The paper emphasizes the need for human-AI collaboration, transparent moderation frameworks, and continued research to build fair, responsive, and scalable digital harassment prevention systems.},
        keywords = {Content Moderation, Cyberbullying, Deep Learning, Digital Harassment, Ethical AI, Explainable AI, Machine Learning, Online Abuse, Social Media Safety},
        month = {June},
        }
