Interpretable Complex-Valued Neural Networks for Privacy Protection

Liyao Xiang, Hao Zhang, Haotian Ma, Yifan Zhang, Jie Ren, Quanshi Zhang

Keywords: adversarial, privacy

Mon Session 2 (08:00-10:00 GMT)
Mon Session 3 (12:00-14:00 GMT)

Abstract: Previous studies have found that an adversary can often infer unintended input information from intermediate-layer features. We study the possibility of preventing such adversarial inference without significant accuracy degradation. We propose a generic method to revise a neural network so as to increase the difficulty of inferring input attributes from its features, while maintaining highly accurate outputs. In particular, the method transforms real-valued features into complex-valued ones, in which the input is hidden in a randomized phase of the transformed features. Knowledge of the phase acts like a key: with it, any party can easily recover the output from the processing result; without it, the party can neither recover the output nor distinguish the original input. Preliminary experiments on various datasets and network structures show that our method significantly diminishes the adversary's ability to infer the input while largely preserving output accuracy.
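The phase-as-key idea can be sketched in a few lines: the true feature is combined with a decoy and rotated by a secret random angle, so that only a party holding the angle can rotate back and recover it. The following is a minimal NumPy illustration under our own assumptions, not the paper's actual architecture; the decoy feature `b` and the `encode`/`decode` helpers are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(a, b, theta):
    """Hide the true feature `a` in a random phase.

    `a` is the true intermediate feature, `b` is a decoy feature,
    and `theta` is the secret rotation angle (the "key").
    """
    return np.exp(1j * theta) * (a + 1j * b)

def decode(h, theta):
    """Rotate back with the secret phase and keep the real part."""
    return np.real(np.exp(-1j * theta) * h)

a = rng.standard_normal(8)           # true feature
b = rng.standard_normal(8)           # decoy feature
theta = rng.uniform(0, 2 * np.pi)    # secret key

h = encode(a, b, theta)
a_rec = decode(h, theta)
assert np.allclose(a, a_rec)         # the key holder recovers `a` exactly

# A party guessing a wrong angle recovers a mixture
# cos(dtheta)*a - sin(dtheta)*b, generally far from `a`.
wrong = decode(h, rng.uniform(0, 2 * np.pi))
print(np.linalg.norm(wrong - a))
```

For decoding to still work after further processing, the layers applied to the complex feature would need to commute with the phase rotation, i.e. g(e^{i*theta} x) = e^{i*theta} g(x); this is presumably the constraint that the revised network in the paper is built to satisfy.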

Similar Papers

BayesOpt Adversarial Attack
Binxin Ru, Adam Cobb, Arno Blaas, Yarin Gal
The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget
Anirudh Goyal, Yoshua Bengio, Matthew Botvinick, Sergey Levine