
Robust Audio Adversarial Example for a Physical Attack

Hiromu Yakura, Jun Sakuma

The success of deep learning in recent years has raised concerns about adversarial examples, which allow attackers to force deep neural networks to output a specified target. Although a method for generating audio adversarial examples against a state-of-the-art speech recognition model has been proposed, it cannot fool the model when the examples are played over the air, and so the threat was considered limited. In this paper, we propose a method for generating adversarial examples that remain effective even when played over the air in the physical world, by simulating the transformations caused by playback and recording and incorporating them into the generation process. An evaluation and a listening experiment demonstrate that audio adversarial examples generated by the proposed method may pose a real threat.
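To make the idea concrete, the following is a minimal sketch of optimizing a perturbation through a simulated playback/recording channel, in the spirit of the abstract. It assumes a PyTorch-style differentiable speech model; the transformation set used here (convolution with a random room impulse response plus additive white noise) and the names model, loss_fn, impulse_responses, eps, and noise_std are illustrative assumptions, not necessarily the exact components used in the paper.

import torch
import torch.nn.functional as F

def simulate_over_the_air(audio, impulse_responses, noise_std=0.01):
    # Pick one room impulse response at random and convolve it with the
    # waveform to simulate reverberation from playback in a room.
    ir = impulse_responses[torch.randint(len(impulse_responses), (1,)).item()]
    # F.conv1d computes cross-correlation, so flip the kernel for true convolution.
    kernel = ir.flip(0).view(1, 1, -1)
    reverberant = F.conv1d(audio.view(1, 1, -1), kernel,
                           padding=ir.numel() - 1).view(-1)[: audio.numel()]
    # Additive white noise stands in for recording-device noise.
    return reverberant + noise_std * torch.randn_like(reverberant)

def generate_robust_example(model, loss_fn, audio, target,
                            impulse_responses, steps=1000, lr=1e-3, eps=0.05):
    # Optimize a bounded perturbation `delta` so that the *transformed*
    # waveform, not the clean one, drives the attack loss at every step.
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        transformed = simulate_over_the_air(audio + delta, impulse_responses)
        loss = loss_fn(model(transformed), target)  # e.g. CTC loss toward the target phrase
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation small
    return (audio + delta).detach()

Because the transformation is sampled fresh at each step, the optimization effectively minimizes an expectation over the simulated channel; averaging the loss over several random transformations per step would reduce gradient variance at the cost of extra computation.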

Publisher URL: http://arxiv.org/abs/1810.11793

arXiv ID: arXiv:1810.11793v2
