NetFense+: Dual-Stage Privacy Shield for Graph Neural Networks Against Inference Threats

  • Unique Paper ID: 182377
  • PageNo: 1647-1653
  • Abstract:
  • Graph Neural Networks (GNNs) have become increasingly prominent in domains involving sensitive relational data, such as social networks, healthcare systems, and financial platforms. However, their susceptibility to privacy attacks, including Membership Inference Attacks (MIA) and Attribute Inference Attacks (AIA), raises significant concerns. This paper presents NetFense, a novel defense framework specifically tailored to protect GNNs against such privacy threats. NetFense combines adversarial training with graph-adapted differential privacy mechanisms to reduce privacy leakage while preserving model utility. Extensive evaluations on real-world graph datasets demonstrate the effectiveness of NetFense in defending against various attack vectors, outperforming baseline privacy-preserving techniques in both accuracy retention and privacy metrics. The results establish NetFense as a scalable, practical, and secure approach for deploying GNNs in privacy-sensitive applications.
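The abstract mentions graph-adapted differential privacy as one of NetFense's two stages but does not specify the mechanism. As a rough illustration only (this is a standard Laplace output-perturbation step, a common differential-privacy building block, not the paper's actual algorithm), noising a node's prediction logits might be sketched as:

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)


def dp_perturb_logits(logits, epsilon: float, sensitivity: float = 1.0):
    """Add Laplace noise calibrated to epsilon-DP (scale = sensitivity / epsilon)
    to a node's output logits before they are released to a querier."""
    scale = sensitivity / epsilon
    return [z + laplace_noise(scale) for z in logits]
```

Smaller `epsilon` means a larger noise scale and stronger protection against inference from released scores, at the cost of utility; `sensitivity` here is an assumed bound on how much one record can shift the logits.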

Copyright & License

Copyright © 2026. Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{182377,
        author = {Mandala Monika and Sanjay Gandhi Gundabatini and Ramachandran Vedantham},
        title = {NetFense+: Dual-Stage Privacy Shield for Graph Neural Networks Against Inference Threats},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {2},
        pages = {1647-1653},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=182377},
        abstract = {Graph Neural Networks (GNNs) have become increasingly prominent in domains involving sensitive relational data, such as social networks, healthcare systems, and financial platforms. However, their susceptibility to privacy attacks, including Membership Inference Attacks (MIA) and Attribute Inference Attacks (AIA), raises significant concerns. This paper presents NetFense, a novel defense framework specifically tailored to protect GNNs against such privacy threats. NetFense combines adversarial training with graph-adapted differential privacy mechanisms to reduce privacy leakage while preserving model utility. Extensive evaluations on real-world graph datasets demonstrate the effectiveness of NetFense in defending against various attack vectors, outperforming baseline privacy-preserving techniques in both accuracy retention and privacy metrics. The results establish NetFense as a scalable, practical, and secure approach for deploying GNNs in privacy-sensitive applications.},
        keywords = {Graph Neural Networks, Privacy Attacks, Adversarial Training, Differential Privacy, Membership Inference, Attribute Inference, NetFense Framework},
        month = {July},
        }

Cite This Article

Monika, M., Gundabatini, S. G., & Vedantham, R. (2025). NetFense+: Dual-Stage Privacy Shield for Graph Neural Networks Against Inference Threats. International Journal of Innovative Research in Technology (IJIRT), 12(2), 1647–1653.
