Continual Learning with Bayesian Neural Networks for Non-Stationary Data

Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt, Stephan Günnemann

Keywords: Bayesian neural networks, continual learning, episodic memory, lifelong learning, memory, variational inference

Mon Session 2 (08:00-10:00 GMT)
Mon Session 5 (20:00-22:00 GMT)

Abstract: This work addresses continual learning for non-stationary data, using Bayesian neural networks and memory-based online variational Bayes. We represent the posterior approximation of the network weights by a diagonal Gaussian distribution and a complementary memory of raw data. This raw data corresponds to likelihood terms that cannot be well approximated by the Gaussian. We introduce a novel method for sequentially updating both components of the posterior approximation. Furthermore, we propose Bayesian forgetting and a Gaussian diffusion process for adapting to non-stationary data. The experimental results show that our update method improves on existing approaches for streaming data. Additionally, the adaptation methods lead to better predictive performance for non-stationary data.
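The posterior approximation described in the abstract, a diagonal Gaussian over the weights plus a raw-data memory for likelihood terms the Gaussian fits poorly, can be illustrated with a toy sketch. This is not the paper's algorithm: the function names, the conjugate scalar-Gaussian model, the surprise heuristic for routing points to memory, and the variance-inflation `diffuse` step (standing in for the Gaussian diffusion idea) are all illustrative assumptions.

```python
import numpy as np

def online_update(mu, var, x, noise_var=1.0):
    """Conjugate Gaussian step: the previous posterior acts as the prior.
    This mirrors online variational Bayes, where each update is exact
    in this toy scalar model."""
    post_var = 1.0 / (1.0 / var + 1.0 / noise_var)
    post_mu = post_var * (mu / var + x / noise_var)
    return post_mu, post_var

def diffuse(mu, var, drift_var=0.5):
    """Illustrative stand-in for a Gaussian diffusion between steps:
    inflating the variance lets old evidence decay, so the model can
    track non-stationary data."""
    return mu, var + drift_var

def stream_fit(data, mu0=0.0, var0=10.0, memory_thresh=3.0):
    """Process a stream, absorbing easy points into the Gaussian and
    deferring surprising ones to a raw-data memory (hypothetical rule)."""
    mu, var = mu0, var0
    memory = []  # raw observations whose likelihood the Gaussian fits poorly
    for x in data:
        # Surprise under the current posterior predictive (assumed heuristic).
        z = abs(x - mu) / np.sqrt(var + 1.0)
        if z > memory_thresh:
            memory.append(x)       # keep the exact likelihood term as raw data
        else:
            mu, var = online_update(mu, var, x)
    return mu, var, memory
```

On a stream of points near 2.0 with one outlier at 100.0, the Gaussian concentrates near 2.0 while the outlier lands in the memory rather than distorting the posterior.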

Similar Papers

Functional Regularisation for Continual Learning with Gaussian Processes
Michalis K. Titsias, Jonathan Schwarz, Alexander G. de G. Matthews, Razvan Pascanu, Yee Whye Teh
A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning
Soochan Lee, Junsoo Ha, Dongsu Zhang, Gunhee Kim
Neural Stored-program Memory
Hung Le, Truyen Tran, Svetha Venkatesh
Continual Learning with Adaptive Weights (CLAW)
Tameem Adel, Han Zhao, Richard E. Turner