Sampling-Free Learning of Bayesian Quantized Neural Networks

Jiahao Su, Milan Cvitkovic, Furong Huang

Keywords: Bayesian neural networks, uncertainty


Abstract: Bayesian learning of model parameters in neural networks is important in scenarios where predictions must come with well-calibrated uncertainty estimates. In this paper, we propose Bayesian quantized networks (BQNs), quantized neural networks (QNNs) for which we learn a posterior distribution over their discrete parameters. We provide a set of efficient algorithms for learning and prediction in BQNs that avoid sampling from the parameters or activations, which not only enables differentiable learning in quantized models but also reduces the variance of gradient estimation. We evaluate BQNs on the MNIST, Fashion-MNIST, and KMNIST classification datasets, comparing against a bootstrap ensemble of QNNs (E-QNN). We demonstrate that BQNs achieve both lower predictive error and better-calibrated uncertainty than E-QNN, attaining less than 20% of its negative log-likelihood.
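The sampling-free idea can be illustrated with a small moment-propagation sketch: instead of drawing weight samples, each quantized weight carries a categorical distribution over its discrete values, and a layer propagates means and variances analytically. The sketch below is an illustration under assumed simplifications (ternary weights {-1, 0, +1}, a mean-field posterior, a plain linear layer), not the paper's exact algorithm; all names here are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): mean-field
# categorical posteriors over ternary weights, propagated through a
# linear layer by analytic moment matching instead of sampling.

QUANT_VALUES = np.array([-1.0, 0.0, 1.0])  # assumed quantization grid


def weight_moments(logits):
    """Mean and variance of each weight under its categorical posterior.

    logits: (out_dim, in_dim, 3) unnormalized log-probabilities over
    the three quantized values.
    """
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    mean = probs @ QUANT_VALUES                  # E[w]
    var = probs @ QUANT_VALUES**2 - mean**2      # Var[w]
    return mean, var


def linear_moment_propagation(x_mean, x_var, logits):
    """Propagate input mean/variance through y = W x analytically.

    For independent w_ij and x_j:
      E[y_i]   = sum_j E[w_ij] E[x_j]
      Var[y_i] = sum_j Var[w]Var[x] + Var[w]E[x]^2 + E[w]^2 Var[x]
    """
    w_mean, w_var = weight_moments(logits)
    y_mean = w_mean @ x_mean
    y_var = w_var @ (x_var + x_mean**2) + w_mean**2 @ x_var
    return y_mean, y_var


# Usage: a 4-input, 2-output layer with deterministic inputs.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 4, 3))
y_mean, y_var = linear_moment_propagation(np.ones(4), np.zeros(4), logits)
print(y_mean, y_var)
```

Because every step above is a differentiable function of the posterior logits, a loss computed on the output moments can be optimized by ordinary gradient descent, which is the sense in which such learning is sampling-free.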

Similar Papers

Linear Symmetric Quantization of Neural Networks for Low-precision Integer Hardware
Xiandong Zhao, Ying Wang, Xuyi Cai, Cheng Liu, Lei Zhang
Mixed Precision DNNs: All you need is a good parametrization
Stefan Uhlich, Lukas Mauch, Fabien Cardinaux, Kazuki Yoshiyama, Javier Alonso Garcia, Stephen Tiedemann, Thomas Kemp, Akira Nakamura
Learned Step Size Quantization
Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, Dharmendra S. Modha