Defending Against Physically Realizable Attacks on Image Classification

Tong Wu, Liang Tong, Yevgeniy Vorobeychik

Keywords: adversarial, adversarial machine learning, randomized smoothing, robustness

Mon Session 1 (05:00-07:00 GMT)
Mon Session 4 (17:00-19:00 GMT)
Monday: Robustness and Verification

Abstract: We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit very limited effectiveness against three of the highest-profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks.
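To make the abstract's attack model concrete, the sketch below shows one plausible way to implement a rectangular occlusion attack in PyTorch: exhaustively search a coarse grid of rectangle positions for the one that maximizes the classifier's loss, then run PGD restricted to the pixels inside that rectangle. This is an illustrative sketch based only on the abstract, not the authors' released code; the function name roa_attack, the 7x7 rectangle size, the grid stride, and the PGD hyperparameters are assumptions.

    # Minimal sketch of a rectangular occlusion attack (illustrative; not the paper's code).
    import torch
    import torch.nn.functional as F

    def roa_attack(model, x, y, rect_h=7, rect_w=7, stride=2,
                   pgd_steps=30, pgd_lr=0.1):
        """x: (1, C, H, W) image in [0, 1]; y: (1,) label tensor."""
        _, _, H, W = x.shape

        # 1) Exhaustive search over a coarse grid of rectangle positions:
        #    occlude with a gray patch and keep the position with the highest loss.
        best_loss, best_pos = -float("inf"), (0, 0)
        with torch.no_grad():
            for i in range(0, H - rect_h + 1, stride):
                for j in range(0, W - rect_w + 1, stride):
                    x_occ = x.clone()
                    x_occ[:, :, i:i + rect_h, j:j + rect_w] = 0.5
                    loss = F.cross_entropy(model(x_occ), y)
                    if loss.item() > best_loss:
                        best_loss, best_pos = loss.item(), (i, j)

        # 2) PGD restricted to the chosen rectangle: only the patch pixels are
        #    adversarially optimized; the rest of the image is left unchanged.
        i, j = best_pos
        patch = torch.full((1, x.shape[1], rect_h, rect_w), 0.5, requires_grad=True)
        for _ in range(pgd_steps):
            x_adv = x.clone()
            x_adv[:, :, i:i + rect_h, j:j + rect_w] = patch
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, patch)
            patch = (patch + pgd_lr * grad.sign()).clamp(0, 1).detach().requires_grad_(True)

        x_adv = x.clone()
        x_adv[:, :, i:i + rect_h, j:j + rect_w] = patch.detach()
        return x_adv

Under this reading of the abstract, the defense would amount to standard adversarial training with roa_attack substituted for the usual PGD step when generating training-time adversarial examples.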

Similar Papers

Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft
Certified Defenses for Adversarial Patches
Ping-yeh Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein
Unrestricted Adversarial Examples via Semantic Manipulation
Anand Bhattad, Min Jin Chong, Kaizhao Liang, Bo Li, D. A. Forsyth