Optimization
Links to this note
- Automatic differentiation
- Cellular automata as convolutional neural networks
- Genetic algorithms
- Gradient descent
- Gradient descent for wide two-layer neural networks – I : Global convergence
- Gradient flow
- Linear programming
- Neural network training
- Notes on: AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence by Clune, J. (2019)
- Notes on: Learned Initializations for Optimizing Coordinate-Based Neural Representations by Tancik, M., Mildenhall, B., Wang, T., Schmidt, D., Srinivasan, P. P., Barron, J. T., & Ng, R. (2021)
- Notes on: POET: open-ended coevolution of environments and their optimized solutions by Wang, R., Lehman, J., Clune, J., & Stanley, K. O. (2019)
- Notes on: The geometry of integration in text classification RNNs by Aitken, K., Ramasesh, V. V., Garg, A., Cao, Y., Sussillo, D., & Maheswaranathan, N. (2020)
- Ordinary least squares
- Pattern-defeating quicksort
- Program synthesis
- Projection on convex sets
- Supervised learning
- Talk: Differentiation of black-box combinatorial solvers
- Talk: The Importance of Open-Endedness in AI and Machine Learning