Continual learning

tags
Machine learning

Continual learning is a type of supervised learning where there is no “testing phase” associated with the decision process. Instead, training samples keep arriving, and the algorithm has to simultaneously make predictions and keep learning.

This is challenging for a fixed neural network architecture: since its capacity is fixed, it is bound to either forget previously learned things (catastrophic forgetting) or be unable to learn anything new.

A definition from the survey (De Lange et al. 2020):

The General Continual Learning setting considers an infinite stream of training data where at each time step, the system receives a (number of) new sample(s) drawn non i.i.d. from a current distribution that could itself experience sudden or gradual changes.
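
Concretely, this amounts to a predict-then-update loop over the stream: the system is scored on each incoming batch before it is allowed to train on it. A minimal sketch in PyTorch is given below; `model` and `stream` are placeholders (not from the survey), and everything else is an illustrative assumption rather than a prescribed implementation.

    # Minimal sketch of the predict-then-update protocol: the model is scored
    # on each incoming batch *before* it is allowed to learn from it.
    # `model` and `stream` are placeholders supplied by the caller.
    import torch
    import torch.nn.functional as F

    def continual_learning_loop(model, stream, lr=1e-3):
        """Online accuracy over a (possibly non-i.i.d.) stream of (x, y) batches."""
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        online_accuracy = []
        for x, y in stream:
            # 1. Predictions are evaluated online, before any update on this batch.
            with torch.no_grad():
                pred = model(x).argmax(dim=1)
                online_accuracy.append((pred == y).float().mean().item())
            # 2. Only then is the same batch used for learning.
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
        return online_accuracy

A plain SGD update like this is exactly the setup in which catastrophic forgetting shows up; the methods in the bibliography (regularization as in synaptic intelligence, generative replay, modular networks) mainly change how this update step is performed.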

Examples of continual learning systems

  • NELL (Never-Ending Language Learning), a system designed to keep extracting structured knowledge from the web over time rather than learn once from a fixed training set (Carlson et al. 2010).

Benchmarks

Computer-vision-based benchmarks

  • Split MNIST: the MNIST dataset is split into five 2-class tasks (Nguyen et al. 2017; Zenke et al. 2017; Shin et al. 2017); see the construction sketch after this list.

  • Split CIFAR10: the CIFAR10 dataset is split into five 2-class tasks (Krizhevsky, Hinton 2009).

  • Split mini-ImageNet: the mini-ImageNet dataset (100 classes) is split into 20 5-class tasks.

  • Continual Transfer Learning Benchmark: a benchmark from Facebook AI, built from 7 computer vision datasets: MNIST, CIFAR10, CIFAR100, DTD, SVHN, Rainbow-MNIST, and Fashion MNIST. The tasks are all 5-class or 10-class classification tasks. Some example task-sequence constructions from (Veniat et al. 2021):

    The last task of \(S_{out}\) consists of a shuffling of the output labels of the first task. The last task of \(S_{in}\) is the same as its first task except that MNIST images have a different background color. \(S_{long}\) has 100 tasks, and it is constructed by first sampling a dataset, then 5 classes at random, and finally the amount of training data from a distribution that favors small tasks by the end of the learning experience.

  • Permuted MNIST: for each task, the pixels of the MNIST digits are permuted with a fixed random permutation, generating a new task of the same difficulty as the original but with a different solution (see the sketch after this list). This benchmark is not suitable if the model has a spatial prior (like a CNN). First used in (Goodfellow et al. 2014; Srivastava et al. 2013); also used in (Kirkpatrick et al. 2017).

  • Rotated MNIST: each task contains digits rotated by a fixed angle between 0 and 180 degrees.
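
As referenced above, here is a rough sketch of how the Split MNIST and Permuted MNIST task sequences can be built with torchvision; the helper names, class pairs, and permutation seeds are illustrative assumptions, not the canonical setups from the cited papers.

    # Rough sketch of Split MNIST and Permuted MNIST task construction.
    # Helper names, class pairs and seeds are illustrative, not canonical.
    import torch
    from torch.utils.data import Subset, TensorDataset
    from torchvision import datasets, transforms

    mnist = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())

    # Split MNIST: five 2-class tasks, e.g. {0,1}, {2,3}, ..., {8,9}.
    def split_mnist_tasks(dataset, pairs=((0, 1), (2, 3), (4, 5), (6, 7), (8, 9))):
        tasks = []
        for a, b in pairs:
            idx = torch.where((dataset.targets == a) | (dataset.targets == b))[0]
            tasks.append(Subset(dataset, idx.tolist()))
        return tasks

    # Permuted MNIST: one fixed random pixel permutation per task, giving a new
    # task of the same difficulty as the original but with a different solution.
    def permuted_mnist_task(dataset, seed):
        g = torch.Generator().manual_seed(seed)
        perm = torch.randperm(28 * 28, generator=g)
        xs = dataset.data.view(len(dataset), -1).float() / 255.0  # (N, 784) in [0, 1]
        return TensorDataset(xs[:, perm], dataset.targets.clone())

    split_tasks = split_mnist_tasks(mnist)                               # 5 tasks
    permuted_tasks = [permuted_mnist_task(mnist, seed) for seed in range(3)]

Note that the permuted variant flattens the images, which matches the remark above that a convolutional model's spatial prior no longer helps on this benchmark.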

Bibliography

  1. De Lange, Matthias, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Aleš Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. 2020. "A Continual Learning Survey: Defying Forgetting in Classification Tasks". arXiv:1909.08383 [cs, stat]. http://arxiv.org/abs/1909.08383.
  2. Carlson, Andrew, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. "Toward an Architecture for Never-Ending Language Learning". In Proceedings of the Conference on Artificial Intelligence (AAAI 2010), 1306–13.
  3. Nguyen, Cuong V., Yingzhen Li, Thang D. Bui, and Richard E. Turner. 2017. "Variational Continual Learning". CoRR abs/1710.10628. http://arxiv.org/abs/1710.10628.
  4. Zenke, Friedemann, Ben Poole, and Surya Ganguli. 2017. "Continual Learning Through Synaptic Intelligence". In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, edited by Doina Precup and Yee Whye Teh, 70:3987–95. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v70/zenke17a.html.
  5. Shin, Hanul, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. 2017. "Continual Learning with Deep Generative Replay". In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, edited by Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, 2990–99. https://proceedings.neurips.cc/paper/2017/hash/0efbe98067c6c73dba1250d2beaa81f9-Abstract.html.
  6. Krizhevsky, Alex, and Geoffrey Hinton. 2009. "Learning Multiple Layers of Features from Tiny Images". Technical report, University of Toronto.
  7. Veniat, Tom, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2021. "Efficient Continual Learning with Modular Networks and Task-Driven Priors". arXiv:2012.12631 [cs]. http://arxiv.org/abs/2012.12631.
  8. Goodfellow, Ian J., Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2014. "An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks". In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, edited by Yoshua Bengio and Yann LeCun. http://arxiv.org/abs/1312.6211.
  9. Srivastava, Rupesh Kumar, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez, and Jürgen Schmidhuber. 2013. "Compete to Compute". In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a Meeting Held December 5-8, 2013, Lake Tahoe, Nevada, United States, edited by Christopher J. C. Burges, Léon Bottou, Zoubin Ghahramani, and Kilian Q. Weinberger, 2310–18. https://proceedings.neurips.cc/paper/2013/hash/8f1d43620bc6bb580df6e80b0dc05c48-Abstract.html.
  10. Kirkpatrick, James, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2017. "Overcoming Catastrophic Forgetting in Neural Networks". arXiv:1612.00796 [cs, stat]. http://arxiv.org/abs/1612.00796.
