A Transformer-Based Deep Learning Framework for Insider Threat Detection

  • Unique Paper ID: 183075
  • Volume: 12
  • Issue: 3
  • PageNo: 72-78
  • Abstract: Detecting insider threats is a significant cybersecurity challenge, as conventional systems often fail to identify the subtle behavioral clues of malicious actors. This research proposes a novel approach that treats user activity logs as a language, where harmful actions deviate from normal "grammatical" patterns. To effectively analyze this "language," the study introduces a deep learning framework centered on a Transformer architecture. Unlike models that process data sequentially, the Transformer's self-attention mechanism can examine an entire history of user actions at once, enabling it to capture complex, long-range relationships. The system processes a wide range of data, including logins, device usage, file access, and emails. It enhances this data by creating sessions, profiling individual user behavior, and incorporating anomaly scores from an unsupervised model. When tested on the public CERT Insider Threat r4.2 dataset, the model proved highly effective. It achieved 90% overall accuracy, a 71% precision rate in identifying threats, and a recall of 55%. This performance underscores the value of using Transformer-based models to build more intelligent, context-aware security systems for identifying insider threats.
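The abstract's core idea is that self-attention can relate every action in a user's session to every other action simultaneously, rather than stepping through the log sequentially. The following is a minimal sketch of that mechanism, not the authors' implementation: the activity vocabulary, embedding size, and the single-head attention function are illustrative assumptions.

```python
import numpy as np

# Hypothetical vocabulary of user-activity "tokens" drawn from the log types
# the paper mentions (logons, device usage, file access, email).
vocab = {"logon": 0, "device_connect": 1, "file_copy": 2, "email_send": 3, "logoff": 4}

rng = np.random.default_rng(0)
d_model = 8  # illustrative embedding width
embeddings = rng.normal(size=(len(vocab), d_model))

def self_attention(x):
    """Scaled dot-product self-attention over an entire session at once."""
    scores = x @ x.T / np.sqrt(x.shape[1])           # (seq, seq) pairwise relevance
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ x                               # context-mixed representations

# One session of user actions, embedded as a token sequence.
session = ["logon", "device_connect", "file_copy", "email_send", "logoff"]
x = embeddings[[vocab[a] for a in session]]
context = self_attention(x)
print(context.shape)  # (5, 8): each action now carries session-wide context
```

Because the attention weights span the whole sequence, an early `device_connect` can directly influence the representation of a much later `email_send`, which is the long-range dependency the paper argues sequential models struggle to capture.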

Copyright & License

Copyright © 2025. The authors retain the copyright of this article. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{183075,
        author = {A Ludwika and Dr A S N Chakravarthy and C Priyadarshini},
        title = {A Transformer-Based Deep Learning Framework for Insider Threat Detection},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {3},
        pages = {72--78},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=183075},
        abstract = {Detecting insider threats is a significant cybersecurity challenge, as conventional systems often fail to identify the subtle behavioral clues of malicious actors. This research proposes a novel approach that treats user activity logs as a language, where harmful actions deviate from normal "grammatical" patterns.
To effectively analyze this "language," the study introduces a deep learning framework centered on a Transformer architecture. Unlike models that process data sequentially, the Transformer's self-attention mechanism can examine an entire history of user actions at once, enabling it to capture complex, long-range relationships.
The system processes a wide range of data, including logins, device usage, file access, and emails. It enhances this data by creating sessions, profiling individual user behavior, and incorporating anomaly scores from an unsupervised model. When tested on the public CERT Insider Threat r4.2 dataset, the model proved highly effective. It achieved 90% overall accuracy, a 71% precision rate in identifying threats, and a recall of 55%. This performance underscores the value of using Transformer-based models to build more intelligent, context-aware security systems for identifying insider threats.},
        keywords = {Insider Threat Detection, Deep Learning, Transformer Model, User Behaviour Analytics, Sequence Modelling, Anomaly Detection.},
        month = {July},
}
