Neural architecture search (NAS) is a method for automatically finding neural network architectures. It is usually based on three main components:
- Search space
  - The type of networks that can be built.
- Search strategy
  - The approach used to explore the space.
- Performance estimation strategy
  - The way the performance of a candidate network is evaluated, ideally without fully building and training it.
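The three components above can be sketched as a minimal loop, here using random search as the strategy and a stand-in scoring function as the estimator; the particular search space (layer count and width) and the scoring heuristic are illustrative assumptions, not part of any published method:

```python
import random

# Search space (an assumption for illustration): number of layers and layer width.
SEARCH_SPACE = {"num_layers": [1, 2, 3], "width": [16, 32, 64]}

def sample_architecture(rng):
    """Search strategy: here, plain random search over the space."""
    return {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}

def estimate_performance(arch):
    """Performance estimation: a toy proxy score instead of real training."""
    # Illustrative heuristic only: prefer wider, shallower networks.
    return arch["width"] / (10 * arch["num_layers"])

def search(num_trials=20, seed=0):
    """Sample candidates, score each one, return the best."""
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(num_trials)]
    return max(candidates, key=estimate_performance)

best = search()
print(best)
```

Real NAS systems differ mainly in how the second and third components are implemented; the overall propose-then-score loop stays the same.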
Reinforcement learning-based NAS
The original approach, introduced as Neural Architecture Search, uses an RNN as a controller that generates architectures decision by decision. The search space is pre-defined and explored in a rigid way (Zoph and Le 2017).
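The controller idea can be sketched as follows: the RNN emits the architecture one decision at a time, each choice conditioned on the choices made so far. Here a stub `rnn_step` stands in for the real recurrent network, and the specific decision list (filter sizes and counts) is an illustrative assumption:

```python
import random

# Illustrative per-layer decisions; a real controller would also learn the
# sampling distribution via reinforcement learning.
DECISIONS = [
    ("filter_height", [1, 3, 5, 7]),
    ("filter_width", [1, 3, 5, 7]),
    ("num_filters", [24, 36, 48, 64]),
]

def rnn_step(state, choices, rng):
    """Stand-in for one RNN step: make a choice and append it to the state."""
    pick = rng.choice(choices)
    return state + [pick], pick

def sample_layer(rng):
    """Generate one layer's description by sampling each decision in turn."""
    state, layer = [], {}
    for name, choices in DECISIONS:
        state, pick = rnn_step(state, choices, rng)
        layer[name] = pick
    return layer

layer = sample_layer(random.Random(0))
print(layer)
```

In the actual method, the sampled architecture is trained, its validation accuracy is used as a reward, and the controller's parameters are updated with policy gradients.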
The generation process in the original work was extremely compute-intensive and was later replaced by a search over a more constrained, cell-based space (Zoph et al. 2018).
More recent work shares parameters across architectures, since the main bottleneck of earlier techniques was training each child model from scratch. This yields a significant speedup of RL-based NAS (Pham et al. 2018).
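The parameter-sharing idea can be sketched like this: every candidate architecture draws its weights from one shared pool keyed by position and operation, so two children that make the same choice at the same layer literally reuse the same weight instead of each training their own. The operation names and scalar "weights" below are toy assumptions standing in for real tensors:

```python
import random

# Shared weight pool: (layer_index, op_name) -> weight value.
shared_weights = {}

def get_weight(layer, op, rng):
    """Fetch the shared weight for this (layer, op), creating it only once."""
    key = (layer, op)
    if key not in shared_weights:
        shared_weights[key] = rng.random()  # placeholder for a real tensor
    return shared_weights[key]

def sample_child(rng, num_layers=3, ops=("conv3x3", "conv5x5", "maxpool")):
    """A child model picks one op per layer; its weights come from the pool."""
    arch = [rng.choice(ops) for _ in range(num_layers)]
    weights = [get_weight(i, op, rng) for i, op in enumerate(arch)]
    return arch, weights

rng = random.Random(0)
arch_a, weights_a = sample_child(rng)
arch_b, weights_b = sample_child(rng)
# Wherever the two children chose the same op at the same layer,
# they hold the exact same weight object from the pool.
```

Because the pool is bounded by (layers x operations), training effort is spent on one supernet rather than on thousands of independent child models.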
Evolution-based NAS
This line of work instead evolves neural networks through evolutionary methods such as genetic algorithms. One of the main works that popularized the field is NEAT (Stanley and Miikkulainen 2002).
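A minimal sketch of the evolutionary approach, under toy assumptions: a genome is just a list of layer widths, mutation resizes one layer, and a stand-in fitness function replaces actual training. This is far simpler than NEAT, which also evolves the network topology:

```python
import random

def mutate(genome, rng):
    """Toy mutation operator: randomly resize one layer."""
    child = list(genome)
    child[rng.randrange(len(child))] = rng.choice([16, 32, 64, 128])
    return child

def fitness(genome):
    """Stand-in fitness: real NAS would train and validate the network."""
    return sum(genome) - 5 * len(genome)  # toy: reward capacity, penalize depth

def evolve(generations=30, pop_size=8, seed=0):
    """Selection plus mutation with elitism: keep the fittest, mutate them."""
    rng = random.Random(seed)
    population = [[32, 32, 32] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]   # selection: keep the top half
        children = [mutate(p, rng) for p in parents]
        population = parents + children         # next generation
    return max(population, key=fitness)

best = evolve()
print(best)
```

Because the parents survive unchanged (elitism), the best fitness in the population can never decrease from one generation to the next.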