Neural networks can be seen as dynamical systems in different contexts.
With recurrent neural networks, the analogy with a continuous dynamical system is particularly striking: the network evolves in time by repeatedly updating an internal state with a fixed rule. The state dynamics are usually not studied in their own right, because the recurrent network is designed to complete some fixed task.
The notion of attractor can be defined for such networks, relating them to attractor networks.
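As a minimal sketch of this idea (not from the source), consider a small tanh RNN with a constant input: if the recurrent weights are small enough that the update map is contractive, iterating the state update drives the state to a fixed-point attractor. All names and weight scales below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))   # recurrent weights, small -> contractive map
U = 0.5 * rng.standard_normal((4, 2))   # input weights
x = np.array([1.0, -1.0])               # constant input

def step(h):
    """One state update: h_{t+1} = tanh(W h_t + U x)."""
    return np.tanh(W @ h + U @ x)

h = np.zeros(4)
for _ in range(200):
    h = step(h)

# At an attractor, the state is (numerically) a fixed point of the update.
print(np.allclose(step(h), h))
```

With different inputs \(x\), the same network can settle into different fixed points, which is the sense in which such networks store patterns as attractors.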
Training as a dynamical system
For a neural network with parameters \(\theta\) trained on example pairs \((x_i, y_i)\), the parameter trajectory \(\theta_0, \theta_1, \ldots, \theta_n\) until convergence can be interpreted as the evolution of a dynamical system.
One may study the various attractors of that dynamical system and how the training examples affect them.
This can be thought of as a sort of learning in dynamical systems.
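A hedged sketch of this view (the setup is illustrative, not from the source): gradient descent on a loss \(L\) is the discrete map \(\theta_{t+1} = \theta_t - \eta \nabla L(\theta_t)\), and for a simple linear-regression loss the minimizer determined by the training pairs acts as an attractor of that map.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(50)
y = 3.0 * x + 0.5                       # training pairs generated by a known linear rule

def grad(theta):
    """Gradient of the mean-squared-error loss for the model w*x + b."""
    w, b = theta
    r = w * x + b - y                   # residuals on the training pairs
    return np.array([(r * x).mean(), r.mean()])

eta = 0.1
theta = np.zeros(2)
for _ in range(500):
    theta = theta - eta * grad(theta)   # one step of the parameter dynamics

print(np.round(theta, 3))               # the trajectory settles near (w, b) = (3.0, 0.5)
```

Changing the training pairs \((x_i, y_i)\) moves the attractor, which is the "learning" in this dynamical-systems picture.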
Residual networks as dynamical systems
Residual neural networks may be seen as discretized dynamical systems. In the limit of small step sizes, the residual network becomes a continuous-time system, which yields neural ordinary differential equations (Chen et al. 2019).
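A minimal sketch of the correspondence (illustrative, not the construction from the paper): a residual update \(h_{t+1} = h_t + \Delta t \, f(h_t)\) is the explicit Euler discretization of \(\dot h = f(h)\). With \(f(h) = -h\) the continuous solution is \(h(t) = h(0) e^{-t}\), so stacking more, smaller residual blocks should approach it.

```python
import numpy as np

def f(h):
    """A simple, analytically solvable dynamics: dh/dt = -h."""
    return -h

def resnet_forward(h0, depth, T=1.0):
    """Compose `depth` residual blocks covering total 'time' T."""
    dt = T / depth
    h = h0
    for _ in range(depth):
        h = h + dt * f(h)                # one residual block = one Euler step
    return h

h0 = 1.0
exact = h0 * np.exp(-1.0)                # continuous-limit value at T = 1

# A deeper stack of smaller residual steps lands closer to the ODE solution.
print(abs(resnet_forward(h0, 1000) - exact) < abs(resnet_forward(h0, 10) - exact))
```

Neural ODEs take this limit seriously: instead of a fixed stack of blocks, the hidden state is obtained by numerically integrating a learned vector field.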
- Chen, Ricky T. Q., Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. December 13, 2019. "Neural Ordinary Differential Equations". arXiv:1806.07366 [Cs, Stat]. http://arxiv.org/abs/1806.07366.