Implicit neural representations parameterize a continuous, differentiable signal with a neural network. The signal is encoded in the network's weights, providing a potentially more compact representation.
Applications of these learned representations range from simple compression to 3D scene reconstruction from 2D images and semantic information inference.
CPPNs (Compositional Pattern Producing Networks) are an early example of an implicit neural representation, used mainly for pattern generation. A CPPN uses a neural network to generate patterns parameterized by two (or more) coordinates.
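A minimal sketch of the CPPN idea (illustrative only, not any particular published implementation): a small, randomly weighted MLP maps each pixel coordinate (x, y) to a grayscale intensity, so evaluating it over a grid renders a pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def cppn(coords, hidden=16, layers=3):
    """coords: (N, 2) array of (x, y) in [-1, 1]; returns (N,) intensities."""
    h = coords
    for _ in range(layers):
        w = rng.normal(size=(h.shape[1], hidden))
        h = np.tanh(h @ w)  # tanh keeps activations bounded
    w_out = rng.normal(size=(hidden, 1))
    return np.tanh(h @ w_out)[:, 0]

# Evaluate on a 64x64 coordinate grid to render a pattern.
xs = np.linspace(-1, 1, 64)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
image = cppn(grid).reshape(64, 64)
```

Because the network is a smooth function of the coordinates, the same weights can be evaluated at any resolution.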
Implicit neural representations for high-frequency data
To encode potentially high-frequency data such as sound or images, it is much more effective to feed the network periodic transformations of the input coordinates. This can be achieved with periodic activation functions (Sitzmann et al. 2020) or with a Fourier feature mapping (Tancik et al. 2020).
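Both approaches can be sketched in a few lines (a toy illustration, not the authors' reference code; `sigma`, `num_features`, and the layer sizes are arbitrary choices here):

```python
import numpy as np

rng = np.random.default_rng(0)

# (a) Fourier feature mapping (Tancik et al. 2020): project coordinates
# onto random frequencies B, then take sin and cos. The scale sigma of B
# controls how high-frequency the resulting features are.
def fourier_features(coords, num_features=64, sigma=10.0):
    """coords: (N, d) array; returns (N, 2*num_features) mapped features."""
    B = rng.normal(scale=sigma, size=(coords.shape[1], num_features))
    proj = 2 * np.pi * coords @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

# (b) A SIREN-style layer (Sitzmann et al. 2020): a linear map followed by
# a sine activation, scaled by a frequency factor omega_0 (30 in the paper).
def siren_layer(x, out_dim, omega_0=30.0):
    w = rng.uniform(-1 / x.shape[1], 1 / x.shape[1], size=(x.shape[1], out_dim))
    return np.sin(omega_0 * (x @ w))

coords = rng.uniform(-1, 1, size=(100, 2))
feats = fourier_features(coords)  # periodic features for a plain MLP
hidden = siren_layer(coords, 64)  # first layer of a sine-activated MLP
```

In both cases the downstream network sees periodic functions of the coordinates, which lets it fit high-frequency detail that a plain ReLU MLP on raw coordinates struggles to represent.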
- Sitzmann, Vincent, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. 2020. "Implicit Neural Representations with Periodic Activation Functions". arXiv:2006.09661 [cs, eess].
- Tancik, Matthew, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. 2020. "Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains". arXiv:2006.10739 [cs].