Universal Approximation with Certified Networks

Maximilian Baader, Matthew Mirman, Martin Vechev

Keywords: adversarial, adversarial attacks, interval bound propagation, relu networks, robustness, universal approximation

Thurs Session 3 (12:00-14:00 GMT)
Thurs Session 4 (17:00-19:00 GMT)

Abstract: Training neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks. However, it is currently very difficult to train a neural network that is both accurate and certifiably robust. In this work, we take a step towards addressing this challenge. We prove that for every continuous function $f$, there exists a network $n$ such that: (i) $n$ approximates $f$ arbitrarily closely, and (ii) simple interval bound propagation of a region $B$ through $n$ yields a result that is arbitrarily close to the optimal output of $f$ on $B$. Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.
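For context on the certification procedure the theorem concerns: interval bound propagation pushes an axis-aligned box (the region $B$) through the network layer by layer. An affine layer maps interval endpoints via the positive and negative parts of its weight matrix, and ReLU, being monotone, is applied to the endpoints directly. The NumPy sketch below is our own illustration of this standard procedure, not code from the paper; the toy network, its random weights, and the function names (`ibp_linear`, `ibp_relu`) are hypothetical.

```python
import numpy as np

def ibp_linear(lower, upper, W, b):
    # Propagate the interval [lower, upper] through the affine map Wx + b.
    # Splitting W into positive and negative parts pairs each output bound
    # with the correct endpoint of the input interval.
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    return new_lower, new_upper

def ibp_relu(lower, upper):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(lower, 0), np.maximum(upper, 0)

# Certify a tiny two-layer ReLU network on the box B = [x - eps, x + eps].
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 2)), np.zeros(4)
W2, b2 = rng.standard_normal((1, 4)), np.zeros(1)

x, eps = np.array([0.5, -0.3]), 0.1
lo, hi = x - eps, x + eps
lo, hi = ibp_relu(*ibp_linear(lo, hi, W1, b1))
lo, hi = ibp_linear(lo, hi, W2, b2)
# [lo, hi] is a sound over-approximation of the network's outputs on B.
```

Each layer's bounds are computed independently, so the resulting box can be much looser than the network's true output range; the paper's contribution is showing that, for any continuous $f$, some ReLU network makes even this cheap analysis arbitrarily tight.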
