Neural architecture search

tags
Search, Neural networks

Neural architecture search (NAS) is a method for automatically finding neural network architectures. It is usually based on three main components (a minimal sketch of how they fit together follows the list):

Search space
The set of architectures that can be built.
Search strategy
The approach for exploring the space.
Performance estimation strategy
The way the performance of a candidate network is estimated, ideally without fully building and training it.
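
A minimal sketch of how these three components could fit together, using plain random search as the search strategy and a placeholder scoring function as the performance estimator; the search space and all names below are purely illustrative:

```python
import random

# Illustrative search space: an architecture is one choice per knob.
SEARCH_SPACE = {
    "num_layers": [2, 4, 6],
    "layer_width": [64, 128, 256],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    """Search strategy (here: plain random search) -- pick one option per knob."""
    return {knob: rng.choice(options) for knob, options in SEARCH_SPACE.items()}

def estimate_performance(arch):
    """Performance estimation strategy -- stand-in for training and evaluating
    the candidate network (e.g. a few epochs on a validation set)."""
    # Placeholder heuristic so the sketch runs end to end.
    return arch["num_layers"] * arch["layer_width"] / 1000.0

def search(num_trials=20, seed=0):
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        score = estimate_performance(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

print(search())
```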

Reinforcement learning-based NAS

The original idea, simply called Neural Architecture Search, uses an RNN as a controller that generates candidate architectures; the search space is pre-defined and explored in a fairly rigid way (Zoph and Le 2017).
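
A heavily simplified sketch of the controller idea, assuming PyTorch: an LSTM emits one architecture decision per step and is updated with REINFORCE, using the child model's validation accuracy as reward (replaced here by a placeholder). The decision space, reward function, and hyperparameters are illustrative, not the original paper's setup.

```python
import torch
import torch.nn as nn

CHOICES_PER_STEP = [3, 3, 2]   # e.g. #layers, width, activation (illustrative)
HIDDEN = 32

class Controller(nn.Module):
    def __init__(self):
        super().__init__()
        self.cell = nn.LSTMCell(HIDDEN, HIDDEN)
        self.embed = nn.Embedding(max(CHOICES_PER_STEP), HIDDEN)
        self.heads = nn.ModuleList([nn.Linear(HIDDEN, n) for n in CHOICES_PER_STEP])

    def sample(self):
        h = torch.zeros(1, HIDDEN)
        c = torch.zeros(1, HIDDEN)
        inp = torch.zeros(1, HIDDEN)          # start token
        decisions, log_probs = [], []
        for head in self.heads:
            h, c = self.cell(inp, (h, c))
            dist = torch.distributions.Categorical(logits=head(h))
            action = dist.sample()
            decisions.append(action.item())
            log_probs.append(dist.log_prob(action))
            inp = self.embed(action)          # feed the choice back as next input
        return decisions, torch.stack(log_probs).sum()

def reward_for(decisions):
    """Stand-in for building and training the child network and measuring
    validation accuracy; replace with real training in practice."""
    return sum(decisions) / sum(n - 1 for n in CHOICES_PER_STEP)

controller = Controller()
opt = torch.optim.Adam(controller.parameters(), lr=1e-2)
baseline = 0.0
for step in range(100):
    decisions, log_prob = controller.sample()
    reward = reward_for(decisions)
    baseline = 0.9 * baseline + 0.1 * reward      # moving-average baseline
    loss = -(reward - baseline) * log_prob        # REINFORCE update
    opt.zero_grad()
    loss.backward()
    opt.step()
```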

The process of generating architectures in the first paper was extremely expensive; it was later replaced by a more constrained search (Zoph et al. 2018).

Recent ideas include sharing parameters across architectures, since the main bottleneck of earlier techniques was training each child model from scratch. This results in a significant speedup of RL-based NAS (Pham et al. 2018).
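
A toy illustration (not the actual ENAS algorithm) of the parameter-sharing idea, assuming PyTorch: every candidate architecture is a path through a fixed pool of shared modules, so evaluating a new candidate reuses the existing weights instead of training a child model from scratch. The module names and shapes are illustrative.

```python
import random
import torch
import torch.nn as nn

# One shared module per (layer position, operation choice); all candidates draw from this pool.
shared_ops = nn.ModuleDict({
    f"layer{i}_op{j}": nn.Linear(16, 16) for i in range(3) for j in range(2)
})

def forward_candidate(x, choices):
    """Run the input through the sub-network selected by `choices`, e.g. [0, 1, 0]."""
    for i, j in enumerate(choices):
        x = torch.relu(shared_ops[f"layer{i}_op{j}"](x))
    return x

x = torch.randn(4, 16)
candidate = [random.randrange(2) for _ in range(3)]
y = forward_candidate(x, candidate)   # reuses shared weights; no per-candidate training
print(candidate, y.shape)
```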

Neuroevolution

This field focuses more on evolving neural networks through evolutionary methods such as genetic algorithms. One of the main works that made the field popular is NEAT (Stanley and Miikkulainen 2002).
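
A minimal genetic-algorithm sketch, far simpler than NEAT (which also evolves the topology itself and uses speciation): here a genome is just a fixed-length list of layer widths, and the fitness function is a placeholder for training the decoded network and measuring validation accuracy.

```python
import random

rng = random.Random(0)

def random_genome():
    return [rng.choice([16, 32, 64, 128]) for _ in range(4)]

def fitness(genome):
    """Stand-in for training the decoded network and measuring validation accuracy."""
    return sum(genome) - 0.01 * sum(w * w for w in genome)   # toy capacity/cost trade-off

def mutate(genome):
    g = genome[:]
    g[rng.randrange(len(g))] = rng.choice([16, 32, 64, 128])
    return g

def crossover(a, b):
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [random_genome() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # selection
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(10)]
    population = parents + children                 # elitism + offspring

best = max(population, key=fitness)
print(best, fitness(best))
```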

Bibliography

Pham, Hieu, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. 2018. “Efficient Neural Architecture Search via Parameter Sharing.” arXiv:1802.03268 [Cs, Stat], February.

Stanley, Kenneth O., and Risto Miikkulainen. 2002. “Evolving Neural Networks Through Augmenting Topologies.” Evolutionary Computation 10 (2): 99–127.

Zoph, Barret, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. 2018. “Learning Transferable Architectures for Scalable Image Recognition.” arXiv:1707.07012 [Cs, Stat], April.

Zoph, Barret, and Quoc V. Le. 2017. “Neural Architecture Search with Reinforcement Learning.” arXiv:1611.01578 [Cs], February.

