Ken Alparslan, Department of Computer Science, Conestoga College, 108 University Ave, Waterloo, ON N2J 2W2, Canada; Yigit Alparslan and Dr. Matthew Burlick, Department of Computer Science and Information Technology, Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104, US
This paper investigates adversarial attacks on neural networks in the audio domain. Adversarial attacks are inputs that resemble the original input but have been deliberately perturbed; a corrupted image or audio file is an adversarial attack, as is a sticker on a stop sign for a self-driving car. Neural networks are machine learning algorithms that classify inputs such as images and audio. Speech-to-text neural networks in wide use today are prone to misclassifying adversarial attacks. In this study, we create a new adversarial attack algorithm and test it against a new defense mechanism of our own design. First, we generate state-of-the-art adversarial attacks by altering waveforms from the Common Voice data set using Principal Component Analysis (PCA), a compression technique from machine learning. We attack DeepSpeech, a speech-to-text neural network implemented by Mozilla, and achieve a 100% adversarial success rate (zero successful classifications by DeepSpeech) on all 25 adversarial audio files that we crafted. Second, we propose a defense mechanism against our own attacks based on dimensionality reduction. When we test the defended inputs with DeepSpeech again, we still achieve 100% adversarial success, which suggests that our attacks are stronger than our defense mechanism. We publish our adversarial waveforms and encourage readers to listen to them, and we encourage the community to develop defenses against our attacks.
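The abstract's exact attack pipeline is not specified here, but the general idea of perturbing a waveform via PCA compression can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the waveform is split into fixed-length frames, the frames are compressed to a smaller number of principal components, and the reconstruction (whose error forms a structured perturbation) replaces the original audio. The frame length and component count below are arbitrary example values.

```python
# Illustrative sketch only (assumed pipeline, not the paper's exact method):
# frame a 1-D waveform, compress the frames with PCA, and reconstruct.
import numpy as np
from sklearn.decomposition import PCA

def pca_compress_waveform(wave, frame_len=256, n_components=32):
    """Frame the waveform, apply PCA across frames, and reconstruct."""
    n_frames = len(wave) // frame_len
    frames = wave[: n_frames * frame_len].reshape(n_frames, frame_len)
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(frames)            # compress each frame
    reconstructed = pca.inverse_transform(reduced) # lossy reconstruction
    return reconstructed.reshape(-1)

# Example on a synthetic "waveform" (a noisy sine), standing in for
# a Common Voice audio clip.
rng = np.random.default_rng(0)
wave = np.sin(np.linspace(0, 200 * np.pi, 16384)) + 0.01 * rng.standard_normal(16384)
adv = pca_compress_waveform(wave)
print(adv.shape)  # (16384,)
```

Note that the same PCA step can serve either role described in the abstract: applied by an attacker, the reconstruction is a perturbed input; applied by a defender, it is a denoising preprocessing stage.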
Presenters: Yigit Alparslan, Ken Alparslan
Institution: Drexel University
Type: Poster
Subject: Computer Science
Status: Approved