Catastrophic forgetting

Tags: Machine learning

Catastrophic forgetting is the name given to a common failure mode of machine learning models: when trained on data from a new distribution (a new “task”), many models forget much of what they learned on the earlier task.

This isn’t surprising: the model is trained to minimize a loss that is typically computed only on the task at hand, and nothing constrains it to retain what it learned before.
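To make this concrete, here is a minimal sketch of the effect on two synthetic tasks. Everything here (the task definitions, architecture, and hyperparameters) is an illustrative assumption, not taken from the cited papers: a small MLP is trained on task A, then on a task B with a different decision boundary, and its accuracy on task A falls back toward chance.

```python
# Minimal sketch of catastrophic forgetting on two synthetic tasks.
# Tasks, architecture, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(mean):
    """Binary classification: two Gaussian blobs centred at +mean and -mean."""
    x = torch.cat([torch.randn(200, 2) + mean, torch.randn(200, 2) - mean])
    y = torch.cat([torch.ones(200), torch.zeros(200)])
    return x, y

def train(model, x, y, steps=500):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x).squeeze(), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return ((model(x).squeeze() > 0) == y.bool()).float().mean().item()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

# Task A separates blobs along the x-axis; task B along the y-axis,
# so the decision boundary learned for B carries no information about A.
xa, ya = make_task(torch.tensor([3.0, 0.0]))
xb, yb = make_task(torch.tensor([0.0, 3.0]))

train(model, xa, ya)
print(f"Task A accuracy after training on A: {accuracy(model, xa, ya):.2f}")

train(model, xb, yb)  # the loss now sees only task B data
print(f"Task A accuracy after training on B: {accuracy(model, xa, ya):.2f}")
print(f"Task B accuracy after training on B: {accuracy(model, xb, yb):.2f}")
```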

The field of continual learning is an effort to counteract catastrophic forgetting in machine learning.
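One of the simplest continual-learning strategies is rehearsal, the mechanism studied in (Robins 1993): keep a small buffer of examples from earlier tasks and mix them into training on later ones. Continuing the sketch above (the buffer size and mixing scheme are illustrative assumptions):

```python
# Rehearsal sketch, reusing make_task/train/accuracy and xa, ya, xb, yb
# from the previous snippet. Buffer size is an illustrative assumption.
buffer_x, buffer_y = xa[::10], ya[::10]  # retain 10% of task A

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
train(model, xa, ya)

# Train on task B *plus* the replay buffer, so the loss still sees task A.
train(model, torch.cat([xb, buffer_x]), torch.cat([yb, buffer_y]))
print(f"Task A accuracy with rehearsal: {accuracy(model, xa, ya):.2f}")
print(f"Task B accuracy with rehearsal: {accuracy(model, xb, yb):.2f}")
```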

Catastrophic forgetting in neural networks

This phenomenon is common in neural networks. It is discussed in depth in (French 1999) and investigated empirically in (Robins 1993; Goodfellow et al. 2015).

Bibliography

  1. . . "Catastrophic Forgetting in Connectionist Networks". Trends in Cognitive Sciences 3 (4):128–35. DOI.
  2. . . "Catastrophic Forgetting in Neural Networks: The Role of Rehearsal Mechanisms". In Proceedings 1993 the First New Zealand International Two-stream Conference on Artificial Neural Networks and Expert Systems, 65–68. DOI.
  3. . . "An Empirical Investigation of Catastrophic Forgetting in Gradient-based Neural Networks". Arxiv:1312.6211 [cs, Stat]. http://arxiv.org/abs/1312.6211.