Overlearning Reveals Sensitive Attributes

Congzheng Song, Vitaly Shmatikov

Keywords: privacy, transfer learning

Thurs Session 3 (12:00-14:00 GMT)
Thurs Session 4 (17:00-19:00 GMT)

Abstract: "Overlearning" means that a model trained for a seemingly simple objective implicitly learns to recognize attributes and concepts that are (1) not part of the learning objective, and (2) sensitive from a privacy or bias perspective. For example, a binary gender classifier of facial images also learns to recognize races (even races not represented in the training data) and identities. We demonstrate overlearning in several vision and NLP models and analyze its harmful consequences. First, inference-time representations of an overlearned model reveal sensitive attributes of the input, breaking privacy protections such as model partitioning. Second, an overlearned model can be "re-purposed" for a different, privacy-violating task even in the absence of the original training data. We show that overlearning is intrinsic for some tasks and cannot be prevented by censoring unwanted attributes. Finally, we investigate where, when, and why overlearning happens during model training.
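Below is a minimal sketch (not the authors' code) of the kind of attribute-inference attack the abstract describes: an attacker who only observes a model's intermediate representations trains a small probe, on an auxiliary dataset labeled with the sensitive attribute, to predict that attribute. The model architecture, layer sizes, and data loader are illustrative assumptions.

    # Sketch of probing an "overlearned" representation for a sensitive attribute.
    # TargetModel, its layer sizes, and aux_loader are hypothetical placeholders.
    import torch
    import torch.nn as nn

    class TargetModel(nn.Module):
        """Stand-in for a model trained on a non-sensitive task (e.g. binary gender)."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(nn.Flatten(),
                                         nn.Linear(3 * 64 * 64, 256),
                                         nn.ReLU())
            self.head = nn.Linear(256, 2)  # original, non-sensitive objective

        def forward(self, x):
            z = self.encoder(x)            # inference-time representation
            return self.head(z), z

    def infer_sensitive_attribute(target, aux_loader, num_sensitive_classes, epochs=5):
        """Train a linear probe on frozen representations; aux_loader yields
        (input, sensitive_label) pairs from a small auxiliary dataset."""
        probe = nn.Linear(256, num_sensitive_classes)
        opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        target.eval()
        for _ in range(epochs):
            for x, sensitive_y in aux_loader:
                with torch.no_grad():
                    _, z = target(x)       # attacker sees only the representation
                opt.zero_grad()
                loss = loss_fn(probe(z), sensitive_y)
                loss.backward()
                opt.step()
        return probe

If the probe predicts the sensitive attribute (e.g. identity or race) well above chance, the representation has overlearned that attribute even though it was never part of the training objective.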

Similar Papers

Differentially Private Meta-Learning
Jeffrey Li, Mikhail Khodak, Sebastian Caldas, Ameet Talwalkar
Meta-Learning without Memorization
Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine, Chelsea Finn
Maxmin Q-learning: Controlling the Estimation Bias of Q-learning
Qingfeng Lan, Yangchen Pan, Alona Fyshe, Martha White