Securing Agentic AI Systems: A Framework for Cyber-Resilience of Autonomous Decision Engines

  • Unique Paper ID: 186459
  • PageNo: 2050-2052
  • Abstract: Agentic Artificial Intelligence (AI) systems—capable of autonomous decision-making and action execution—are increasingly embedded in critical domains such as finance, healthcare, and national security. Unlike traditional AI, agentic AI possesses autonomy, self-learning, and goal-directed behavior, presenting unique cyber-resilience challenges. As these systems become pivotal, their compromise could lead to catastrophic outcomes, from misinformation propagation to physical-world harm. This paper proposes a comprehensive framework for securing agentic AI systems against cyber threats. The framework systematically addresses vulnerability surfaces, security primitives, and adaptive defenses tailored to autonomous decision engines. It introduces a layered architecture applying principles from distributed systems, cognitive security, and formal verification. The proposed model offers governance, technical, and operational guidelines for ensuring safe deployment of agentic AI in adversarial environments.

Copyright & License

Copyright © 2026 Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{186459,
        author = {Ms. Jagriti Bhatia and Mrs. Amrita Pathak and Mr. Edukondalu Simhadati and Mrs. Megha U. Vakani and Mr. Nirav Amin and Mr. Gandikota Narasimhulu and Mr. G. Haribabu},
        title = {Securing Agentic AI Systems: A Framework for Cyber-Resilience of Autonomous Decision Engines},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {6},
        pages = {2050--2052},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=186459},
        abstract = {Agentic Artificial Intelligence (AI) systems—capable of autonomous decision-making and action execution—are increasingly embedded in critical domains such as finance, healthcare, and national security. Unlike traditional AI, agentic AI possesses autonomy, self-learning, and goal-directed behavior, presenting unique cyber-resilience challenges. As these systems become pivotal, their compromise could lead to catastrophic outcomes, from misinformation propagation to physical-world harm. This paper proposes a comprehensive framework for securing agentic AI systems against cyber threats. The framework systematically addresses vulnerability surfaces, security primitives, and adaptive defenses tailored to autonomous decision engines. It introduces a layered architecture applying principles from distributed systems, cognitive security, and formal verification. The proposed model offers governance, technical, and operational guidelines for ensuring safe deployment of agentic AI in adversarial environments.},
        keywords = {Agentic AI Systems; Cyber-Resilience; Autonomous Decision Engines; Adversarial Machine Learning; Ethical AI Governance; Formal Verification; Multi-Agent Security.},
        month = {November},
        }

Cite This Article

Bhatia, J., Pathak, A., Simhadati, E., Vakani, M. U., Amin, N., Narasimhulu, G., & Haribabu, G. (2025). Securing Agentic AI Systems: A Framework for Cyber-Resilience of Autonomous Decision Engines. International Journal of Innovative Research in Technology (IJIRT), 12(6), 2050–2052.
