A Comparative Study on Audio Adversarial Attacks
Author(s):
Sushree Nadiminty, Angelica Sebastian, Melita Lewis, Shaun Noronha, Omprakash Yadav
Keywords:
ASR, Attack, Audio Adversarial, Carlini, Comparison, Neural Network, Psychoacoustics.
Abstract
Neural networks are prone to adversarial examples: specially crafted inputs that cause a system to misclassify or produce incorrect output. With the growing prominence of personal voice assistants (Google Home, Siri, Alexa, etc.), which depend on Automatic Speech Recognition (ASR) systems built on neural networks, a question arises as to how robust these systems are to adversarial attacks. This makes adversarial audio attacks a critical topic in today's world of automated systems. This paper presents a thorough introduction to the background of adversarial attacks and the generation of adversarial examples, as well as psychoacoustic models and the different evaluation indicators. It is necessary to understand how the deep learning models in ASR systems are vulnerable to attacks and how these attacks are performed using different methods.
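To make the notion of adversarial-example generation concrete, below is a minimal sketch of an untargeted FGSM-style perturbation applied to a raw audio waveform. The model, loss function, and epsilon value are hypothetical placeholders, not the specific attacks surveyed in the paper; targeted attacks such as Carlini and Wagner's instead iteratively minimize the CTC loss toward a chosen transcription under a perturbation-size constraint.

```python
# Illustrative sketch only: a single-step, FGSM-style perturbation of a
# raw audio waveform. `model` and `loss_fn` are hypothetical placeholders
# for an ASR model and its training loss.
import torch

def fgsm_audio(model, waveform, target, loss_fn, epsilon=1e-3):
    """Nudge each audio sample by epsilon in the direction that
    increases the model's loss on `target`."""
    waveform = waveform.clone().detach().requires_grad_(True)
    loss = loss_fn(model(waveform), target)
    loss.backward()
    # Step along the sign of the gradient (L-infinity perturbation).
    adversarial = waveform + epsilon * waveform.grad.sign()
    # Keep samples in the valid normalized audio range [-1, 1].
    return adversarial.clamp(-1.0, 1.0).detach()
```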
Article Details
Unique Paper ID: 159389
Publication Volume & Issue: Volume 9, Issue 11
Page(s): 1003-1009