Implicit neural representations parameterize a continuous, differentiable signal with a neural network. The signal is encoded in the network's weights, potentially yielding a more compact representation. Learning such a representation is a regression problem.
Applications of these learned representations range from simple compression to 3D scene reconstruction from 2D images and semantic information inference.
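As a minimal sketch of the regression view above, the following hypothetical example fits a tiny one-hidden-layer MLP to a 1D signal from (coordinate, value) pairs using manual gradient descent; the signal, network size, and learning rate are all illustrative choices, not taken from the cited papers.

```python
import numpy as np

# Hypothetical sketch: fit a small MLP f(x) -> y to a 1D signal, treating the
# representation problem as plain regression on (coordinate, value) pairs.
rng = np.random.default_rng(0)

# Target signal sampled on a coordinate grid.
x = np.linspace(-1.0, 1.0, 256)[:, None]      # coordinates, shape (256, 1)
y = np.sin(3.0 * np.pi * x)                   # signal values to encode

# One hidden layer with tanh activation.
W1 = rng.normal(0.0, 1.0, (1, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.1, (64, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(3000):
    h = np.tanh(x @ W1 + b1)                  # forward pass
    pred = h @ W2 + b2
    err = pred - y
    loss = np.mean(err ** 2)
    if step == 0:
        loss0 = loss                          # remember the initial loss
    # Manual backpropagation of the mean-squared error.
    g_pred = 2.0 * err / len(x)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1.0 - h ** 2)
    gW1 = x.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# The weights now encode the signal; the network can be queried at any coordinate.
print(f"initial MSE: {loss0:.4f}, final MSE: {loss:.4f}")
```

Once trained, the network itself is the representation: storing it replaces storing the samples, and it can be evaluated at coordinates never seen during fitting.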
Compositional pattern-producing networks (CPPNs) are an early example of an implicit neural representation, mainly used for pattern generation. A CPPN uses a neural network to generate patterns parameterized by two (or more) coordinates.
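The idea can be sketched as follows: a randomly weighted network mapping each pixel's coordinates to an intensity already produces a structured pattern, with no training at all. The layer sizes, weight scales, and choice of activations here are illustrative assumptions, not a specific published architecture.

```python
import numpy as np

# Hypothetical CPPN-style sketch: a small random network maps each pixel's
# coordinates (x, y), plus its distance r from the center, to an intensity.
rng = np.random.default_rng(42)

size = 64
xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
r = np.sqrt(xs ** 2 + ys ** 2)
inputs = np.stack([xs, ys, r], axis=-1).reshape(-1, 3)   # (size*size, 3)

# Layers with mixed nonlinearities, since CPPNs compose varied functions.
h = np.tanh(inputs @ rng.normal(0.0, 1.5, (3, 16)))
h = np.sin(h @ rng.normal(0.0, 1.5, (16, 16)))
out = np.tanh(h @ rng.normal(0.0, 1.5, (16, 1)))

image = out.reshape(size, size)                          # pattern in [-1, 1]
print(image.shape)
```

Because the mapping is a smooth function of the coordinates, the pattern can be re-rendered at any resolution simply by evaluating it on a denser grid.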
Implicit neural representations for high frequency data
To encode potentially high-frequency data such as sound or images, it is much more efficient to start from periodic feature transformations. This can be achieved with periodic activation functions (Sitzmann et al. 2020) or with a Fourier feature mapping (Tancik et al. 2020).
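A minimal sketch of the Gaussian Fourier feature mapping described by Tancik et al. (2020): coordinates v are projected through a random matrix B with entries drawn from N(0, sigma^2) and passed through sine and cosine, giving gamma(v) = [cos(2*pi*B*v), sin(2*pi*B*v)]. The number of frequencies and the scale sigma below are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(v, B):
    """Map low-dimensional coordinates v of shape (N, d) to periodic
    features of shape (N, 2m), where B holds m random frequencies."""
    proj = 2.0 * np.pi * v @ B.T                        # (N, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

sigma = 10.0                                            # frequency bandwidth scale
B = rng.normal(0.0, sigma, (256, 2))                    # m=256 frequencies, 2D coords

coords = rng.uniform(0.0, 1.0, (1024, 2))               # e.g. normalized pixel coords
feats = fourier_features(coords, B)
print(feats.shape)                                      # (1024, 512)
```

The features are then fed to an ordinary MLP in place of the raw coordinates; larger sigma lets the network fit higher-frequency detail, at the risk of noisy interpolation.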
Neural radiance fields
Bibliography
- Mildenhall, Ben, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. August 3, 2020. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis”. arXiv:2003.08934 [Cs]. http://arxiv.org/abs/2003.08934.
- Sitzmann, Vincent, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. June 17, 2020. “Implicit Neural Representations with Periodic Activation Functions”. arXiv:2006.09661 [Cs, Eess]. http://arxiv.org/abs/2006.09661.
- Tancik, Matthew, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. June 18, 2020. “Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains”. arXiv:2006.10739 [Cs]. http://arxiv.org/abs/2006.10739.