Scalable Model Compression by Entropy Penalized Reparameterization

Deniz Oktay, Johannes Ballé, Saurabh Singh, Abhinav Shrivastava

Keywords: compression, computer vision, imagenet, information theory, model compression

Mon Session 3 (12:00-14:00 GMT)
Mon Session 5 (20:00-22:00 GMT)

Abstract: We describe a simple and general neural network weight compression approach, in which the network parameters (weights and biases) are represented in a “latent” space, amounting to a reparameterization. This space is equipped with a learned probability model, which is used to impose an entropy penalty on the parameter representation during training, and to compress the representation using a simple arithmetic coder after training. Classification accuracy and model compressibility are maximized jointly, with the bitrate–accuracy trade-off specified by a hyperparameter. We evaluate the method on the MNIST, CIFAR-10, and ImageNet classification benchmarks using six distinct model architectures. Our results show that state-of-the-art model compression can be achieved in a scalable and general way without requiring complex procedures such as multi-stage training.
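
For concreteness, below is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation: a layer's weights are decoded from quantized latent parameters by a per-layer affine map, a simple factorized (discretized Gaussian) probability model supplies the entropy penalty, and that penalty is added to the classification loss with a trade-off weight lambda. The class and function names, the affine decoder, and the Gaussian probability model are illustrative assumptions; the paper allows more general reparameterizations and learned probability models.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EntropyPenalizedLinear(nn.Module):
        """Hypothetical linear layer whose weights live in a quantized latent
        space with a learned factorized probability model (illustrative only)."""
        def __init__(self, in_features, out_features):
            super().__init__()
            # Latent representation Phi of the weights; W = scale * round(Phi).
            self.phi = nn.Parameter(torch.randn(out_features, in_features))
            self.log_scale = nn.Parameter(torch.zeros(()))       # per-layer affine decoder
            self.bias = nn.Parameter(torch.zeros(out_features))  # biases kept dense here
            # Parameters of a simple discretized Gaussian model over the latents.
            self.p_mean = nn.Parameter(torch.zeros(()))
            self.p_log_std = nn.Parameter(torch.zeros(()))

        def _quantize(self, x):
            # Straight-through rounding: forward pass rounds, gradient passes through.
            return x + (torch.round(x) - x).detach()

        def entropy_bits(self):
            # Estimated code length in bits: probability mass of the unit-width
            # bin around each quantized latent under the learned Gaussian.
            q = self._quantize(self.phi)
            dist = torch.distributions.Normal(self.p_mean, self.p_log_std.exp())
            p = dist.cdf(q + 0.5) - dist.cdf(q - 0.5)
            return -(torch.log2(p.clamp_min(1e-9))).sum()

        def forward(self, x):
            w = self.log_scale.exp() * self._quantize(self.phi)
            return F.linear(x, w, self.bias)

    def training_loss(model, x, y, lam=1e-4):
        # Joint objective: classification loss + lambda * total bits of all latents.
        logits = model(x)
        bits = sum(m.entropy_bits() for m in model.modules()
                   if isinstance(m, EntropyPenalizedLinear))
        return F.cross_entropy(logits, y) + lam * bits

After training, the quantized latents can be entropy-coded (e.g., with an arithmetic coder) under the same learned probability model, so the penalty term approximates the size of the compressed weights in bits, and lam sets the bitrate–accuracy trade-off.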

Similar Papers

Neural Epitome Search for Architecture-Agnostic Network Compression
Daquan Zhou, Xiaojie Jin, Qibin Hou, Kaixin Wang, Jianchao Yang, Jiashi Feng
HiLLoC: lossless image compression with hierarchical latent variable models
James Townsend, Thomas Bird, Julius Kunze, David Barber
Data-Independent Neural Pruning via Coresets
Ben Mussay, Margarita Osadchy, Vladimir Braverman, Samson Zhou, Dan Feldman
Dynamic Model Pruning with Feedback
Tao Lin, Sebastian U. Stich, Luis Barba, Daniil Dmitriev, Martin Jaggi