Selection via Proxy: Efficient Data Selection for Deep Learning

Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, Matei Zaharia

Keywords: active learning, imagenet, uncertainty

Thurs Session 3 (12:00-14:00 GMT)
Thurs Session 5 (20:00-22:00 GMT)

Abstract: Data selection methods, such as active learning and core-set selection, are useful tools for machine learning on large datasets. However, they can be prohibitively expensive to apply in deep learning because they depend on feature representations that need to be learned. In this work, we show that we can greatly improve the computational efficiency by using a small proxy model to perform data selection (e.g., selecting data points to label for active learning). By removing hidden layers from the target model, using smaller architectures, and training for fewer epochs, we create proxies that are an order of magnitude faster to train. Although these small proxy models have higher error rates, we find that they empirically provide useful signals for data selection. We evaluate this "selection via proxy" (SVP) approach on several data selection tasks across five datasets: CIFAR10, CIFAR100, ImageNet, Amazon Review Polarity, and Amazon Review Full. For active learning, applying SVP can give an order of magnitude improvement in data selection runtime (i.e., the time it takes to repeatedly train and select points) without significantly increasing the final error (often within 0.1%). For core-set selection on CIFAR10, proxies that are over 10× faster to train than their larger, more accurate targets can remove up to 50% of the data without harming the final accuracy of the target, leading to a 1.6× end-to-end training time improvement.
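To make the selection loop described in the abstract concrete, here is a minimal, illustrative sketch of proxy-based uncertainty sampling for active learning. The models (small scikit-learn MLPs), the synthetic data, the entropy-based acquisition rule, and the labeling budget are all placeholder assumptions for illustration, not the architectures, datasets, or exact procedure used in the paper; the only point carried over is that a cheap proxy model ranks unlabeled points and the larger target model is trained on the points the proxy selected.

```python
# Hypothetical sketch of "selection via proxy" (SVP) for uncertainty-based
# active learning. Everything concrete here (model sizes, data, budget) is
# an illustrative assumption, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=5000, n_features=50, n_informative=20,
                           n_classes=4, random_state=0)

# Start from a small labeled seed set; the rest is an unlabeled pool.
labeled = rng.choice(len(X), size=200, replace=False)
pool = np.setdiff1d(np.arange(len(X)), labeled)

budget_per_round = 200
for _ in range(5):
    # Proxy: a much smaller model trained for fewer iterations.
    proxy = MLPClassifier(hidden_layer_sizes=(16,), max_iter=50, random_state=0)
    proxy.fit(X[labeled], y[labeled])

    # Rank unlabeled points by predictive entropy under the proxy.
    probs = proxy.predict_proba(X[pool])
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    chosen = pool[np.argsort(-entropy)[:budget_per_round]]

    # "Label" the selected points and move them into the labeled set.
    labeled = np.concatenate([labeled, chosen])
    pool = np.setdiff1d(pool, chosen)

# Target: the larger, more expensive model is trained only once,
# on the points the proxy selected.
target = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=500, random_state=0)
target.fit(X[labeled], y[labeled])
print("target accuracy on full data:", target.score(X, y))
```

The same pattern applies to core-set selection: the proxy is trained once on the full dataset, a subset is chosen from its representations or uncertainties, and the target is then trained only on that subset.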

Similar Papers

Adversarial AutoAugment
Xinyu Zhang, Qiang Wang, Jian Zhang, Zhao Zhong

Fast is better than free: Revisiting adversarial training
Eric Wong, Leslie Rice, J. Zico Kolter

Precision Gating: Improving Neural Network Efficiency with Dynamic Dual-Precision Activations
Yichi Zhang, Ritchie Zhao, Weizhe Hua, Nayun Xu, G. Edward Suh, Zhiru Zhang