Impact of AI Models on Digital Harassment Prevention

  • Unique Paper ID: 181825
  • Volume: 12
  • Issue: 1
  • PageNo: 5617-5622
  • Abstract: The rapid expansion of digital platforms has led to a concerning rise in digital harassment, including cyberbullying, hate speech, and abusive behavior, which disproportionately affects vulnerable groups. Traditional moderation methods have proven inadequate, prompting the adoption of Artificial Intelligence (AI) to detect and mitigate unsafe content. This paper investigates the impact of AI models, particularly Machine Learning (ML), Deep Learning (DL), and Explainable AI (XAI), on detecting and mitigating harmful digital content. Through a comparative analysis of AI approaches such as Multinomial Naïve Bayes, Support Vector Machines, Convolutional Neural Networks, and hybrid models, the paper evaluates their effectiveness, strengths, limitations, and applicability. Despite promising advancements, key challenges remain, including bias, scalability, contextual limitations, and ethical considerations. The paper emphasizes the need for human-AI collaboration, transparent moderation frameworks, and continued research toward fair, responsive, and scalable digital harassment prevention systems.
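Among the approaches the abstract compares, Multinomial Naïve Bayes is the simplest baseline for flagging abusive text. The sketch below is a minimal, from-scratch illustration of that technique, not the paper's actual method; the toy training sentences and labels are invented for demonstration only.

```python
import math
from collections import Counter, defaultdict

# Toy labelled data, invented for illustration: a real moderation
# system would train on a large annotated corpus.
train = [
    ("you are awful and stupid", "abusive"),
    ("nobody likes you go away", "abusive"),
    ("you are an idiot", "abusive"),
    ("have a great day friend", "benign"),
    ("thanks for the helpful answer", "benign"),
    ("great work on the project", "benign"),
]

# Per-class word frequencies and class priors.
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Return the class maximizing log P(class) + sum log P(word|class),
    using add-one (Laplace) smoothing for unseen words."""
    best_label, best_score = None, float("-inf")
    total_docs = sum(class_counts.values())
    for label in class_counts:
        total_words = sum(word_counts[label].values())
        score = math.log(class_counts[label] / total_docs)
        for w in text.split():
            score += math.log(
                (word_counts[label][w] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

With the toy data above, `predict("you are stupid")` returns `"abusive"` and `predict("have a great day")` returns `"benign"`. In practice this word-count model misses the contextual nuance the paper attributes to deep learning and hybrid approaches, which is precisely why it serves as a baseline in comparative studies.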

Cite This Article

  • ISSN: 2349-6002

