Continual Learning with Adaptive Weights (CLAW)

Tameem Adel, Han Zhao, Richard E. Turner

Keywords: catastrophic forgetting, continual learning, transfer learning, variational inference

Wed Session 1 (05:00-07:00 GMT)
Wed Session 2 (08:00-10:00 GMT)

Abstract: Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner. Recently, several frameworks have been developed which enable deep learning to be deployed in this learning scenario. A key modelling decision is to what extent the architecture should be shared across tasks. On the one hand, separately modelling each task avoids catastrophic forgetting, but does not support transfer learning and leads to large models. On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits the model size, but is vulnerable to catastrophic forgetting and restricts the form of task transfer that can occur. Ideally, the network should adaptively identify which parts of the network to share in a data-driven way. Here we introduce such an approach called Continual Learning with Adaptive Weights (CLAW), which is based on probabilistic modelling and variational inference. Experiments show that CLAW achieves state-of-the-art performance on six benchmarks in terms of overall continual learning performance, as measured by classification accuracy, and in terms of addressing catastrophic forgetting.
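The sketch below is a minimal, hypothetical illustration of the adaptive-sharing idea described in the abstract, not the authors' implementation: each layer keeps shared weights, and small per-task, per-neuron adaptation parameters (here called `alpha` and `beta`, names chosen for illustration) decide how strongly each neuron deviates from the shared behaviour. The paper learns such quantities with variational inference; this sketch treats them as ordinary point parameters to keep the example short.

```python
# Hypothetical sketch of adaptive per-neuron weight sharing across tasks.
# Assumptions (not from the paper): parameter names alpha/beta, a sigmoid gate,
# and point estimates instead of the variational treatment used in CLAW.
import torch
import torch.nn as nn


class AdaptiveLinear(nn.Module):
    """Linear layer whose output neurons can be adapted per task.

    The shared layer's activations are modulated, per output neuron, by a
    task-specific gate and scale; with the gate near zero the neuron behaves
    exactly like the shared layer, so sharing vs. adaptation is data-driven.
    """

    def __init__(self, in_features: int, out_features: int, num_tasks: int):
        super().__init__()
        self.shared = nn.Linear(in_features, out_features)
        # One adaptation vector per task and per output neuron.
        self.alpha = nn.Parameter(torch.zeros(num_tasks, out_features))
        self.beta = nn.Parameter(torch.zeros(num_tasks, out_features))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        h = self.shared(x)
        gate = torch.sigmoid(self.alpha[task_id])      # how much this neuron adapts
        return h * (1.0 + gate * self.beta[task_id])   # task-adapted activation


if __name__ == "__main__":
    layer = AdaptiveLinear(in_features=8, out_features=4, num_tasks=3)
    x = torch.randn(2, 8)
    print(layer(x, task_id=0).shape)  # torch.Size([2, 4])
```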

Similar Papers

Compositional Language Continual Learning
Yuanpeng Li, Liang Zhao, Kenneth Church, Mohamed Elhoseiny
A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning
Soochan Lee, Junsoo Ha, Dongsu Zhang, Gunhee Kim
Uncertainty-guided Continual Learning with Bayesian Neural Networks
Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, Marcus Rohrbach