Tags:

- Machine learning
Catastrophic forgetting is the name given to a common failure mode of machine learning models: when trained on data from a new distribution (a new “task”), many models forget much of what they learned on the earlier task.
This isn’t surprising: the loss function being minimized is typically computed only on the task at hand, so nothing constrains the model to retain past information.
The field of continual learning is an effort to counteract catastrophic forgetting in machine learning.
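To see the effect concretely, here is a minimal sketch (assuming PyTorch) that trains a small network on one synthetic binary task and then fine-tunes it on a second one. The two Gaussian tasks, the network size, and the training lengths are all illustrative choices; accuracy on the first task typically collapses after the second round of training.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(center):
    # Binary task: class 0 clustered at -center, class 1 at +center.
    x = torch.cat([torch.randn(500, 2) - center, torch.randn(500, 2) + center])
    y = torch.cat([torch.zeros(500), torch.ones(500)]).long()
    return x, y

# Task A separates points along the x-axis, task B along the y-axis.
xa, ya = make_task(torch.tensor([3.0, 0.0]))
xb, yb = make_task(torch.tensor([0.0, 3.0]))

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

@torch.no_grad()
def accuracy(x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

train(xa, ya)
print(f"task A accuracy after training on A: {accuracy(xa, ya):.2f}")

train(xb, yb)  # plain fine-tuning: no access to task A data
print(f"task A accuracy after training on B: {accuracy(xa, ya):.2f}")  # typically drops sharply
print(f"task B accuracy after training on B: {accuracy(xb, yb):.2f}")
```

Nothing in the second training loop references task A, so the weights are free to drift wherever task B’s loss takes them.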
## Catastrophic forgetting in neural networks
This phenomenon is especially well documented in neural networks. It was prominently characterized in (French 1999) and investigated in (Robins 1993; Goodfellow et al. 2015).
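Rehearsal, studied in (Robins 1993), is one of the earliest countermeasures: mix stored examples from earlier tasks into training on the new one. Below is a minimal self-contained sketch of the idea (again assuming PyTorch; the buffer of 100 task-A examples and the single mixed training set are illustrative choices, not values from the paper). With rehearsal, accuracy on task A is typically largely preserved.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(center):
    # Binary task: class 0 clustered at -center, class 1 at +center.
    x = torch.cat([torch.randn(500, 2) - center, torch.randn(500, 2) + center])
    y = torch.cat([torch.zeros(500), torch.ones(500)]).long()
    return x, y

xa, ya = make_task(torch.tensor([3.0, 0.0]))  # task A
xb, yb = make_task(torch.tensor([0.0, 3.0]))  # task B

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

@torch.no_grad()
def accuracy(x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

train(xa, ya)  # learn task A first

# Rehearsal: keep a small random buffer of task-A examples...
keep = torch.randperm(len(xa))[:100]
buf_x, buf_y = xa[keep], ya[keep]

# ...and train on task B jointly with the rehearsed examples.
train(torch.cat([xb, buf_x]), torch.cat([yb, buf_y]))

print(f"task A accuracy with rehearsal: {accuracy(xa, ya):.2f}")  # typically largely preserved
print(f"task B accuracy: {accuracy(xb, yb):.2f}")
```

The buffer size is the knob here: it trades memory for retention, and a small representative subset is often enough for toy tasks like these.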
## Bibliography
- Robert M. French. 1999. "Catastrophic Forgetting in Connectionist Networks". Trends in Cognitive Sciences 3 (4): 128–35. DOI.
- A. Robins. 1993. "Catastrophic Forgetting in Neural Networks: The Role of Rehearsal Mechanisms". In Proceedings 1993 The First New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems, 65–68. DOI.
- Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, Yoshua Bengio. 2015. "An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks". arXiv:1312.6211 [cs, stat]. http://arxiv.org/abs/1312.6211.