Double descent is a phenomenon, observed most prominently in neural networks, where the classical bias-variance tradeoff seems to break down: past the interpolation threshold, test error starts decreasing again as we further over-parametrize the network or add more training examples. The classical tradeoff, with its U-shaped test error curve, was described for neural networks in (Geman et al. 1992); the double descent curve that extends it was demonstrated in (Belkin et al. 2019).
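The model-wise curve is easy to reproduce in a few lines. The sketch below is a minimal illustration, not the paper's experiment: the sin target, sample sizes, noise level, and feature counts are all assumptions, though the random-feature setup is in the spirit of the random-feature models in (Belkin et al. 2019). It fits a minimum-norm least-squares readout on random ReLU features and prints average test error as the number of features `p` sweeps past the interpolation threshold `p = n_train`; the test MSE typically peaks near the threshold and descends again beyond it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task (an illustrative assumption, not from the paper):
# y = sin(2*pi*x) + Gaussian noise.
def make_data(n, noise=0.2):
    x = rng.uniform(-1.0, 1.0, size=n)
    y = np.sin(2 * np.pi * x) + noise * rng.standard_normal(n)
    return x, y

def relu_features(x, W, b):
    # Random ReLU features phi_j(x) = max(0, W_j * x + b_j); only the
    # linear readout on top of them is fitted.
    return np.maximum(0.0, np.outer(x, W) + b)

n_train, n_test, n_trials = 20, 1000, 30
x_test, y_test = make_data(n_test)

for p in [2, 5, 10, 15, 18, 20, 22, 25, 50, 200, 1000]:
    mse = 0.0
    for _ in range(n_trials):
        x_train, y_train = make_data(n_train)
        W, b = rng.standard_normal(p), rng.standard_normal(p)
        Phi_tr = relu_features(x_train, W, b)
        Phi_te = relu_features(x_test, W, b)
        # lstsq solves via the SVD, so once p >= n_train it returns the
        # minimum-norm interpolating solution -- the regime where the
        # second descent appears.
        coef, *_ = np.linalg.lstsq(Phi_tr, y_train, rcond=None)
        mse += np.mean((Phi_te @ coef - y_test) ** 2)
    print(f"p = {p:5d}   avg test MSE = {mse / n_trials:.3f}")
```

The minimum-norm choice matters: among the infinitely many interpolating readouts available once `p > n_train`, it is the implicitly regularized one whose test error keeps improving as `p` grows.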
An illustration from (Belkin et al. 2019) (the caption is also adapted from the paper):
- Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal. 2019. "Reconciling Modern Machine Learning Practice and the Bias-variance Trade-off". arXiv:1812.11118 [cs, stat]. http://arxiv.org/abs/1812.11118.
- Stuart Geman, Elie Bienenstock, René Doursat. 1992. "Neural Networks and the Bias/variance Dilemma". Neural Comput. 4 (1): 1–58. DOI: 10.1162/neco.1992.4.1.1.