Optimal Strategies Against Generative Attacks

Roy Mor, Erez Peterfreund, Matan Gavish, Amir Globerson

Keywords: anomaly detection, information theory

Mon Session 2 (08:00-10:00 GMT)
Mon Session 3 (12:00-14:00 GMT)
Monday: Generative models

Abstract: Generative neural models have recently improved dramatically. With this progress comes the risk that such models will be used to attack systems that rely on sensor data for authentication and anomaly detection. Many such learning systems are installed worldwide, protecting critical infrastructure or private data against malfunction and cyber attacks. We formulate the scenario of such an authentication system facing generative impersonation attacks, characterize it from a theoretical perspective, and explore its practical implications. In particular, we ask fundamental theoretical questions in learning, statistics and information theory: How hard is it to detect a "fake reality"? How much data does the attacker need to collect before it can reliably generate nominal-looking artificial data? Are there optimal strategies for the attacker or the authenticator? We cast the problem as a maximin game, characterize the optimal strategy for both attacker and authenticator in the general case, and provide the optimal strategies in closed form for the case of Gaussian source distributions. Our analysis reveals the structure of the optimal attack and the relative importance of data collection for both authenticator and attacker. Based on these insights, we design practical learning approaches and show that they result in models that are more robust to various attacks on real-world data.
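The game described in the abstract can be made concrete with a toy simulation. The sketch below is illustrative only: it assumes a one-dimensional Gaussian source with known variance, a naive attacker that fits the mean from m leaked samples and emits fakes, and an authenticator that runs a simple two-sample mean test. This is a hypothetical stand-in for intuition, not the paper's optimal strategies; all names, sample sizes, and the threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1-D Gaussian source N(MU, SIGMA^2) with unknown mean.
MU, SIGMA = 2.0, 1.0
m = 10   # samples the attacker has collected (assumption)
n = 50   # samples the authenticator has seen (assumption)
k = 20   # length of the batch submitted for authentication (assumption)

def attacker_batch(leaked, size, rng):
    """Naive generative attack: fit the mean from leaked data,
    then sample fakes from the fitted Gaussian."""
    mu_hat = leaked.mean()
    return rng.normal(mu_hat, SIGMA, size)

def authenticate(reference, batch, threshold=2.0):
    """Toy authenticator: accept the batch if its mean is within
    `threshold` standard errors of the reference mean."""
    se = SIGMA * np.sqrt(1.0 / len(reference) + 1.0 / len(batch))
    z = abs(batch.mean() - reference.mean()) / se
    return z < threshold

reference = rng.normal(MU, SIGMA, n)   # authenticator's nominal data
leaked = rng.normal(MU, SIGMA, m)      # data observed by the attacker

real = rng.normal(MU, SIGMA, k)        # genuine sensor batch
fake = attacker_batch(leaked, k, rng)  # generative impersonation attempt

print("real batch accepted:", authenticate(reference, real))
print("fake batch accepted:", authenticate(reference, fake))
```

Rerunning this with growing m shows the attacker's estimate concentrating and the fake batch becoming harder to reject under this fixed test, which mirrors the abstract's question of how much data the attacker needs; the paper's closed-form strategies additionally account for both players optimizing against each other, which this sketch does not.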

Similar Papers

Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
Certified Defenses for Adversarial Patches
Ping-yeh Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein