A Comparative Study on Audio Adversarial Attacks

  • Unique Paper ID: 159389
  • PageNo: 1003-1009
  • Abstract:
  • Neural networks are prone to adversarial examples: specially crafted inputs that cause a system to misclassify or produce incorrect output. With the growing prominence of personal voice assistants (Google Home, Siri, Alexa, etc.), which depend on Automatic Speech Recognition (ASR) systems built on neural networks, a question arises as to how robust these systems are to adversarial attacks. This makes audio adversarial attacks a critical topic in today's world of automated systems. This paper presents a thorough introduction to the background of adversarial attacks, the generation of adversarial examples, psychoacoustic models, and the different evaluation indicators. It is necessary to understand how the deep learning models in ASR systems are vulnerable to attacks and how these attacks are performed using different methods.

Copyright & License

Copyright © 2026 Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{159389,
        author = {Sushree Nadiminty and Angelica Sebastian and Melita Lewis and Shaun Noronha and Omprakash Yadav},
        title = {A Comparative Study on Audio Adversarial Attacks},
        journal = {International Journal of Innovative Research in Technology},
        year = {},
        volume = {9},
        number = {11},
        pages = {1003-1009},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=159389},
        abstract = {Neural networks are prone to adversarial examples: specially crafted inputs that cause a system to misclassify or produce incorrect output. With the growing prominence of personal voice assistants (Google Home, Siri, Alexa, etc.), which depend on Automatic Speech Recognition (ASR) systems built on neural networks, a question arises as to how robust these systems are to adversarial attacks. This makes audio adversarial attacks a critical topic in today's world of automated systems. This paper presents a thorough introduction to the background of adversarial attacks, the generation of adversarial examples, psychoacoustic models, and the different evaluation indicators. It is necessary to understand how the deep learning models in ASR systems are vulnerable to attacks and how these attacks are performed using different methods.},
        keywords = {ASR, Attack, Audio Adversarial, Carlini, Comparison, Neural Network, Psychoacoustics.},
        month = {},
        }

Cite This Article

Nadiminty, S., Sebastian, A., Lewis, M., Noronha, S., & Yadav, O. (). A Comparative Study on Audio Adversarial Attacks. International Journal of Innovative Research in Technology (IJIRT), 9(11), 1003–1009.
