A Comparative Study on Audio Adversarial Attacks

  • Unique Paper ID: 159389
  • ISSN: 2349-6002
  • Volume: 9
  • Issue: 11
  • PageNo: 1003-1009
  • Abstract:
  • Neural networks are prone to adversarial examples: specifically crafted inputs that cause a system to misclassify or produce incorrect output. With the growing prominence of personal voice assistants (Google Home, Siri, Alexa, etc.), which depend on Automatic Speech Recognition (ASR) systems built on neural networks, a question arises as to how robust these systems are to adversarial attacks. This makes adversarial audio attacks a critical topic in today's world of automated systems. This paper presents a thorough introduction to the background of adversarial attacks, the generation of adversarial examples, psychoacoustic models, and the different evaluation indicators. It is necessary to understand how the deep learning models in ASR systems are vulnerable to attacks and how these attacks are carried out using different methods.
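To ground the adversarial-example generation the abstract refers to, the sketch below applies the Fast Gradient Sign Method (FGSM), a classic baseline attack, to a raw audio waveform. This is a generic illustration, not the specific procedure studied in the paper; the model, tensor shapes, and the `epsilon` value are assumptions chosen for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_audio(model, waveform, target, epsilon=0.002):
    """Untargeted FGSM: nudge the waveform by epsilon * sign(gradient)
    to increase the model's loss on the true label."""
    waveform = waveform.clone().detach().requires_grad_(True)
    logits = model(waveform)                 # model maps audio -> class logits
    loss = F.cross_entropy(logits, target)
    loss.backward()
    adv = waveform + epsilon * waveform.grad.sign()
    return adv.clamp(-1.0, 1.0).detach()     # keep samples in valid audio range

# Toy usage (hypothetical shapes): a linear classifier over 1-second,
# 16 kHz clips with 10 output classes stands in for a real ASR model.
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(16000, 10))
clean = torch.rand(1, 16000) * 2 - 1         # one waveform in [-1, 1]
label = torch.tensor([3])                    # true class index
adversarial = fgsm_audio(toy_model, clean, label)
```

Here `epsilon` bounds the per-sample (L-infinity) perturbation so the change is small in amplitude; the psychoacoustic attacks the paper discusses go further, shaping the perturbation under auditory masking thresholds so it remains imperceptible to listeners rather than merely small.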
