Implicit neural representations

Data representation, Neural networks

An implicit neural representation parameterizes a continuous, differentiable signal with a neural network: the network maps coordinates to signal values, so the signal is encoded in the network's weights, which can be more compact than a discrete grid of samples. Fitting such a network is a regression problem.
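As a minimal sketch of this regression view (NumPy only; the signal, network size and hyperparameters are illustrative, not taken from any of the cited papers), a small MLP can be fit to samples of a 1D signal and then queried at arbitrary coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete samples of a continuous signal y(x) = sin(pi * x) on [-1, 1].
x = np.linspace(-1.0, 1.0, 128)[:, None]
y = np.sin(np.pi * x)

# Tiny MLP f(x) -> y with one tanh hidden layer; after fitting, the
# signal lives in these weights rather than in a grid of samples.
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)

def forward(xq):
    h = np.tanh(xq @ W1 + b1)
    return h, h @ W2 + b2

# Full-batch gradient descent on the mean squared error.
lr = 0.2
for _ in range(2000):
    h, pred = forward(x)
    err = 2.0 * (pred - y) / len(x)          # d(MSE)/d(pred)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    W2 -= lr * (h.T @ err); b2 -= lr * err.sum(0)
    W1 -= lr * (x.T @ dh);  b1 -= lr * dh.sum(0)

_, pred = forward(x)
mse = float(np.mean((pred - y) ** 2))

# The representation is continuous: query between the training samples.
_, mid = forward(np.array([[0.5]]))
```

Because the fitted network is a continuous function, it can be evaluated at any coordinate and any resolution, which is what makes the representation "implicit".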

Applications of these learned representations range from compression to 3D scene reconstruction from 2D images and semantic information inference.

Compositional pattern-producing networks (CPPNs) are an early implementation of implicit neural representations, mainly used for pattern generation. A CPPN uses a neural network to generate patterns parameterized by two (or more) coordinates.
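A minimal CPPN-style sketch (NumPy; the architecture and inputs are illustrative, not tied to a specific CPPN implementation): a randomly initialized network maps each pixel's (x, y) coordinates, plus its distance from the center, to an intensity. No training is involved; the pattern comes from the composition of activation functions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pixel coordinates on a grid, normalized to [-1, 1], plus the radial
# distance r -- a classic CPPN input that encourages symmetric motifs.
n = 64
ys, xs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n),
                     indexing="ij")
r = np.sqrt(xs ** 2 + ys ** 2)
coords = np.stack([xs, ys, r], axis=-1).reshape(-1, 3)

# Random (untrained) weights; composing different activation functions
# (tanh, sin) is what gives CPPN outputs their structured look.
W1 = rng.normal(0.0, 2.0, (3, 16))
W2 = rng.normal(0.0, 2.0, (16, 16))
W3 = rng.normal(0.0, 2.0, (16, 1))

h = np.tanh(coords @ W1)
h = np.sin(h @ W2)
img = np.tanh(h @ W3).reshape(n, n)   # pattern with values in (-1, 1)
```

Re-evaluating the same network on a finer grid renders the same pattern at a higher resolution, since the pattern is a function of continuous coordinates rather than a pixel array.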

Implicit neural representations for high-frequency data

To encode potentially high-frequency data such as sound or images, it is much more efficient to start from periodic feature transformations, since a plain MLP with standard activations is biased toward low-frequency functions. This can be achieved with periodic activation functions (Sitzmann et al. 2020) or with a Fourier feature mapping of the input coordinates (Tancik et al. 2020).
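A small NumPy sketch of the Fourier feature idea (illustrative: Tancik et al. apply the mapping before an MLP, while here a linear least-squares fit stands in for the network, and the signal and σ are made up): a model that is linear in the raw coordinate cannot track a high-frequency signal, but the same model on the mapped features γ(v) = [cos(2πBv), sin(2πBv)], with B drawn from a Gaussian, can:

```python
import numpy as np

rng = np.random.default_rng(0)

# High-frequency 1D signal sampled on [0, 1].
x = np.linspace(0.0, 1.0, 256)[:, None]
y = np.sin(2.0 * np.pi * 8.0 * x)

# Baseline: least-squares fit that is linear in the raw coordinate.
A = np.concatenate([np.ones_like(x), x], axis=1)
w, *_ = np.linalg.lstsq(A, y, rcond=None)
raw_err = float(np.mean((A @ w - y) ** 2))

# Fourier feature mapping gamma(v) = [cos(2 pi B v), sin(2 pi B v)],
# with random frequencies B ~ N(0, sigma^2); sigma tunes the bandwidth.
B = rng.normal(0.0, 10.0, (1, 64))
G = np.concatenate([np.cos(2 * np.pi * x @ B),
                    np.sin(2 * np.pi * x @ B)], axis=1)
wf, *_ = np.linalg.lstsq(G, y, rcond=None)
ff_err = float(np.mean((G @ wf - y) ** 2))
```

The periodic-activation route (SIREN) has a similar effect by a different means: it replaces the MLP's nonlinearities with sin(ω₀ ·) so that high frequencies are reachable by the network itself.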

Neural radiance fields

A neural radiance field (NeRF) represents a 3D scene as a network mapping a 3D position and 2D viewing direction to an emitted color and volume density; rendering a view integrates these outputs along camera rays, which lets the scene be trained from 2D images alone (Mildenhall et al. 2020).
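The rendering step can be sketched with the paper's quadrature rule: for samples along a ray with densities σᵢ, colors cᵢ and spacings δᵢ, sample i gets weight Tᵢ(1 − exp(−σᵢδᵢ)), where the transmittance is Tᵢ = exp(−Σⱼ<ᵢ σⱼδⱼ). A NumPy sketch with made-up sample values (the network that would predict σ and c is omitted):

```python
import numpy as np

def render_ray(sigma, delta, color):
    """NeRF-style quadrature: alpha-composite samples along one ray."""
    alpha = 1.0 - np.exp(-sigma * delta)      # opacity of each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # T_i
    weights = trans * alpha
    return weights, (weights[:, None] * color).sum(axis=0)

# Three samples: empty space, a dense (opaque) green sample, empty space.
sigma = np.array([0.0, 50.0, 0.0])   # volume densities
delta = np.full(3, 0.1)              # spacing between samples
color = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

weights, C = render_ray(sigma, delta, color)
# Nearly all weight falls on the opaque sample, so C is close to green.
```

The weights sum to at most 1 (the remainder is light that passes through the whole ray), and samples behind an opaque one receive almost no weight, which is what makes the composite differentiable yet occlusion-aware.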


  1. Mildenhall et al. 2020. "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis". arXiv:2003.08934 [cs].

  2. Sitzmann et al. 2020. "Implicit Neural Representations with Periodic Activation Functions". arXiv:2006.09661 [cs, eess].

  3. Tancik et al. 2020. "Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains". arXiv:2006.10739 [cs].
