- tags
- Gradient descent, Optimization
The gradient flow for a model with parameters \(w\) and loss function \(L\) is written:
\[ \dot{w} = - \nabla L (w(t)) \]
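A forward-Euler discretization of this ODE with step size \(\eta\) gives \(w_{k+1} = w_k - \eta \nabla L(w_k)\), which is exactly the gradient descent update. Below is a minimal sketch of this discretization; the quadratic loss \(L(w) = \tfrac{1}{2}\|w\|^2\), the step size, and the function names are illustrative assumptions, not part of the note:

```python
import numpy as np

# Illustrative quadratic loss L(w) = 0.5 * ||w||^2, so grad L(w) = w.
def grad_L(w):
    return w

# Forward-Euler integration of the gradient flow dw/dt = -grad L(w(t)).
# With step size eta, each Euler step is one gradient-descent update.
def gradient_flow_euler(w0, eta=0.01, steps=1000):
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - eta * grad_L(w)
    return w

# For this quadratic loss the exact flow is w(t) = w(0) * exp(-t),
# so after t = eta * steps = 10 the iterate should be close to zero.
print(gradient_flow_euler(w0=[1.0, -2.0]))
```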