The goal is to learn an initialization of the neural network parameters, or of a recurrent neural network's initial state, such that subsequent training is faster or less prone to getting stuck in poor local minima (Kirsch and Schmidhuber 2021).
Example for implicit neural representations: Tancik et al. (2021) meta-learn initial weights for coordinate-based networks so that new signals can be fit in far fewer optimization steps.
Canonical algorithms: MAML (Finn, Abbeel, and Levine 2017), which backpropagates through the inner-loop adaptation steps, and its first-order variant Reptile (Nichol, Achiam, and Schulman 2018), which simply moves the initialization toward task-adapted weights; see the sketch below.
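To make the idea concrete, here is a minimal sketch of Reptile in JAX on toy sine-wave regression tasks (the task family used in the experiments of both papers). The network size, inner-step count, and learning rates are illustrative assumptions, not values from the cited papers.

```python
# Minimal Reptile sketch in JAX. Assumptions: toy sine-regression tasks,
# illustrative hyperparameters (not the papers' exact settings).
import jax
import jax.numpy as jnp

def init_params(key):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (1, 40)) * 0.1, "b1": jnp.zeros(40),
        "w2": jax.random.normal(k2, (40, 1)) * 0.1, "b2": jnp.zeros(1),
    }

def forward(params, x):
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

def loss(params, x, y):
    return jnp.mean((forward(params, x) - y) ** 2)

grad_fn = jax.jit(jax.grad(loss))

def inner_sgd(params, x, y, steps=5, lr=0.01):
    # Task adaptation: a few plain SGD steps starting from the shared init.
    for _ in range(steps):
        g = grad_fn(params, x, y)
        params = jax.tree_util.tree_map(lambda p, gi: p - lr * gi, params, g)
    return params

key = jax.random.PRNGKey(0)
params = init_params(key)  # the meta-learned initialization
meta_lr = 0.1
for it in range(1000):
    # Sample a sine-regression task with random amplitude and phase.
    key, k1, k2, k3 = jax.random.split(key, 4)
    amp = jax.random.uniform(k1, (), minval=0.1, maxval=5.0)
    phase = jax.random.uniform(k2, (), minval=0.0, maxval=jnp.pi)
    x = jax.random.uniform(k3, (20, 1), minval=-5.0, maxval=5.0)
    y = amp * jnp.sin(x + phase)
    adapted = inner_sgd(params, x, y)
    # Reptile outer step: move the initialization toward the adapted weights.
    params = jax.tree_util.tree_map(
        lambda p, a: p + meta_lr * (a - p), params, adapted)
```

MAML differs only in the outer step: instead of the interpolation above, it differentiates the post-adaptation loss with respect to the initialization, which requires second-order gradients (or a first-order approximation).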
Finn, Chelsea, Pieter Abbeel, and Sergey Levine. 2017. “Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks”. In Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1126–1135. Sydney, Australia.
Kirsch, Louis, and Jürgen Schmidhuber. 2021. “Meta Learning Backpropagation and Improving It”. arXiv:2012.14905 [cs, stat]. http://arxiv.org/abs/2012.14905.
Nichol, Alex, Joshua Achiam, and John Schulman. 2018. “On First-Order Meta-Learning Algorithms”. arXiv:1803.02999 [cs]. http://arxiv.org/abs/1803.02999.
Tancik, Matthew, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, and Ren Ng. 2021. “Learned Initializations for Optimizing Coordinate-Based Neural Representations”. arXiv:2012.02189 [cs]. http://arxiv.org/abs/2012.02189.