Optimization
Links to this note

Automatic differentiation

Cellular automata as convolutional neural networks

Genetic algorithms

Gradient descent

Gradient descent for wide two-layer neural networks – I : Global convergence

Gradient flow

Linear programming

Neural network training

Notes on: AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence by Clune, J. (2019)

Notes on: Learned Initializations for Optimizing Coordinate-Based Neural Representations by Tancik, M., Mildenhall, B., Wang, T., Schmidt, D., Srinivasan, P. P., Barron, J. T., & Ng, R. (2021)

Notes on: POET: open-ended coevolution of environments and their optimized solutions by Wang, R., Lehman, J., Clune, J., & Stanley, K. O. (2019)

Notes on: The geometry of integration in text classification RNNs by Aitken, K., Ramasesh, V. V., Garg, A., Cao, Y., Sussillo, D., & Maheswaranathan, N. (2020)

Ordinary least squares

Pattern-defeating quicksort

Program synthesis

Projection on convex sets

Supervised learning

Talk: Differentiation of black-box combinatorial solvers

Talk: The Importance of Open-Endedness in AI and Machine Learning