Learned Initializations for Optimizing Coordinate-Based Neural Representations by Tancik, M., Mildenhall, B., Wang, T., Schmidt, D., Srinivasan, P. P., Barron, J. T., & Ng, R. (2021)

tags
Neural radiance fields, Meta-learning
source
(Tancik et al. 2021)

Summary

This paper explores meta-learning techniques for improving both the convergence speed and the final quality of optimized coordinate-based (implicit) neural representations.

The authors use meta-learning to optimize the initial weights \(\theta_0\) of the network such that the loss \(L(\theta_m)\) is minimized after the network has been optimized for \(m\) steps on a new, unseen observation.
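
Written out, the meta-learning objective is (roughly) to find the initialization that minimizes the expected post-optimization loss over the distribution \(\mathcal{T}\) of observations:

\[
\theta_0^* = \arg\min_{\theta_0} \; \mathbb{E}_{T \sim \mathcal{T}} \Big[ L\big(\theta_m(\theta_0, T)\big) \Big]
\]

where \(\theta_m(\theta_0, T)\) denotes the weights obtained after \(m\) optimization steps on observation \(T\), starting from \(\theta_0\).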

As a meta-learning problem, there is an inner loop and an outer loop:

  • The inner loop is the standard optimization procedure that goes from \(\theta_0\) to optimized weights \(\theta_m\) for a specific observation \(T\). This can be done with classical optimization methods such as gradient descent (a minimal sketch is given after this list).
  • The outer loop shifts \(\theta_0\) so that the inner-loop results are better on average across observations. This cannot be done with the usual learning algorithms.
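
For concreteness, here is a minimal JAX sketch of the inner loop: plain gradient descent on a single observation, starting from the meta-learned initialization. The coordinate network `model(params, coords)` and the per-observation data `(coords, targets)` are hypothetical placeholders, not the paper's actual code.

    import jax
    import jax.numpy as jnp

    # Inner loop: ordinary gradient descent from theta_0 to theta_m on one observation T.
    # `model` is a hypothetical coordinate-based network: model(params, coords) -> values.
    def inner_loop(theta_0, coords, targets, alpha=1e-2, m=32):
        def loss_fn(params):
            preds = model(params, coords)
            return jnp.mean((preds - targets) ** 2)

        params = theta_0
        for _ in range(m):
            grads = jax.grad(loss_fn)(params)
            params = jax.tree_util.tree_map(lambda p, g: p - alpha * g, params, grads)
        return params  # = theta_m, the weights specialized to this observation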

Two algorithms are considered for that outer-loop:

  • MAML, where the update \(\theta_0^{(j+1)} = \theta_0^{(j)} - \beta\,\nabla_\theta L\big(\theta_m(\theta, T_j)\big)\big|_{\theta = \theta_0^{(j)}}\) is computed by backpropagating through the \(m\) inner-loop optimization steps
  • Reptile, where the update \(\theta_0^{(j+1)} = \theta_0^{(j)} + \beta\left( \theta_m(\theta_0^{(j)}, T_j) - \theta_0^{(j)} \right)\) simply takes a step towards the weights optimized on one observation (both updates are sketched in code below)
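
Below is a matching sketch of the two outer-loop updates, reusing `inner_loop` from the sketch above; `sample_observation()` is a hypothetical helper that draws one observation \(T_j\) from the dataset. Note how MAML needs gradients through the whole inner loop, whereas Reptile only needs the end point \(\theta_m\).

    # Reptile outer step: move theta_0 a fraction beta towards the specialized weights.
    def reptile_step(theta_0, beta=0.1):
        coords, targets = sample_observation()          # hypothetical data helper
        theta_m = inner_loop(theta_0, coords, targets)
        return jax.tree_util.tree_map(
            lambda t0, tm: t0 + beta * (tm - t0), theta_0, theta_m)

    # MAML outer step: differentiate the post-inner-loop loss w.r.t. theta_0
    # (JAX backpropagates through the m gradient steps inside inner_loop).
    def maml_step(theta_0, beta=0.1):
        coords, targets = sample_observation()
        def meta_loss(t0):
            theta_m = inner_loop(t0, coords, targets)
            return jnp.mean((model(theta_m, coords) - targets) ** 2)
        grads = jax.grad(meta_loss)(theta_0)
        return jax.tree_util.tree_map(lambda t0, g: t0 - beta * g, theta_0, grads)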

This setup gives very good results. Training of the neural representations is both faster and of higher final quality than with other types of initialization (even an initialization trained to match the output of the meta-learned one).

The results are encouraging both for image regression and view synthesis of 3D objects.

Bibliography

  1. . . "Learned Initializations for Optimizing Coordinate-based Neural Representations". Arxiv:2012.02189 [cs]. http://arxiv.org/abs/2012.02189.