Neural architecture search

tags
Search, Neural networks

Neural architecture search (NAS) is a family of methods for automatically finding neural network architectures. It is usually described in terms of three main components:

Search space
The set of architectures that can be built (which operations are available and how they can be connected).
Search strategy
The approach used to explore that space.
Performance estimation strategy
How the quality of a candidate architecture is estimated, ideally without fully building and training it, since that is the most expensive step.
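
As a toy illustration of how the three pieces fit together, here is a minimal random-search loop in Python. Everything in it (the search space, the layer names, the scoring function) is a hypothetical stand-in rather than any real NAS system:

    import random

    # Search space: each architecture is a list of (layer type, width) pairs.
    SEARCH_SPACE = {
        "num_layers": [2, 4, 6],
        "layer_type": ["conv3x3", "conv5x5", "maxpool"],
        "width": [16, 32, 64],
    }

    def sample_architecture(rng):
        """Search strategy: plain random search draws one point."""
        n = rng.choice(SEARCH_SPACE["num_layers"])
        return [(rng.choice(SEARCH_SPACE["layer_type"]),
                 rng.choice(SEARCH_SPACE["width"])) for _ in range(n)]

    def estimate_performance(arch):
        """Performance estimation strategy: a stand-in for (proxy) training.
        A real system would train briefly or use a learned predictor."""
        return -abs(sum(width for _, width in arch) - 100)

    rng = random.Random(0)
    best = max((sample_architecture(rng) for _ in range(20)),
               key=estimate_performance)
    print("best architecture found:", best)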

Reinforcement learning-based NAS

The original approach, simply called Neural Architecture Search, uses an RNN as a controller that generates architecture descriptions, trained with reinforcement learning. The search space is pre-defined and explored in a fairly rigid way (Zoph, Le 2017).
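
A heavily simplified sketch of the controller idea, for intuition only: the real method uses an RNN whose hidden state couples successive decisions, while this version assumes independent per-decision logits, trained with REINFORCE against a made-up reward:

    import numpy as np

    CHOICES = 4   # options per decision (e.g. filter sizes); hypothetical
    STEPS = 5     # architectural decisions per sampled network
    logits = np.zeros((STEPS, CHOICES))
    rng = np.random.default_rng(0)

    def sample():
        # Softmax over each decision's logits, then sample one option each.
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        return [rng.choice(CHOICES, p=p) for p in probs], probs

    def reward(arch):
        # Stand-in for "train the child network, return validation accuracy".
        return -abs(sum(arch) - 10) / 10.0

    baseline = 0.0
    for _ in range(200):
        arch, probs = sample()
        r = reward(arch)
        baseline = 0.9 * baseline + 0.1 * r        # moving-average baseline
        for t, a in enumerate(arch):               # REINFORCE update
            grad = -probs[t]                       # d log p(a) / d logits
            grad[a] += 1.0
            logits[t] += 0.1 * (r - baseline) * grad

    print("most likely architecture:", logits.argmax(axis=1))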

Generating whole architectures as in the first paper was extremely slow; it was later replaced by a more constrained search in which the controller designs only a small cell that is then stacked to build the full network (Zoph et al. 2018).
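
The constrained version can be pictured as follows. This is a sketch only, with hypothetical operation names, but it shows why searching for a cell is cheaper: the search space shrinks to one small block, and network depth becomes a free parameter:

    # The search chooses a small cell; the network is that cell repeated.
    def make_network(cell, num_repeats):
        return [op for _ in range(num_repeats) for op in cell]

    cell = ["conv3x3", "sep_conv5x5", "maxpool"]   # one searched cell
    print(make_network(cell, num_repeats=3))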

More recent work shares parameters across architectures, since the main bottleneck of the earlier techniques was training each child model from scratch. This results in a significant speedup of RL-based NAS (Pham et al. 2018).
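
A minimal sketch of the parameter-sharing idea, with hypothetical shapes and operations: every candidate operation at every position owns one entry in a shared weight bank, so a sampled child model reuses (and further trains) those weights instead of starting from scratch:

    import numpy as np

    rng = np.random.default_rng(0)
    POSITIONS, OPS, DIM = 3, 2, 8
    # One weight matrix per (position, candidate op), shared by all children.
    shared_w = rng.normal(size=(POSITIONS, OPS, DIM, DIM)) * 0.1

    def forward(child, x):
        """child[i] selects which candidate op position i uses."""
        for pos, op in enumerate(child):
            x = np.tanh(x @ shared_w[pos, op])
        return x

    x = rng.normal(size=(4, DIM))
    child_a = [0, 1, 0]
    child_b = [1, 1, 0]   # shares the weights of positions 1 and 2 with child_a
    print(forward(child_a, x).shape, forward(child_b, x).shape)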

Neuroevolution

This field focuses on evolving neural networks through evolutionary methods such as genetic algorithms. One of the main works that made the field popular is NEAT (Stanley, Miikkulainen 2002), which starts from minimal networks and grows both weights and topology over generations.
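
NEAT itself adds speciation and careful crossover of topologies; the sketch below is a far simpler genetic algorithm, just to show the evolutionary loop. The genome encoding and the fitness function are made up:

    import random

    rng = random.Random(0)
    WIDTHS = [8, 16, 32, 64]

    def random_genome():
        # A genome here is just a list of layer widths.
        return [rng.choice(WIDTHS) for _ in range(rng.randint(1, 4))]

    def fitness(genome):
        # Placeholder for "build, train, and evaluate the network".
        return -abs(sum(genome) - 64)

    def mutate(genome):
        g = list(genome)
        if rng.random() < 0.3:
            g.append(rng.choice(WIDTHS))        # grow the topology
        g[rng.randrange(len(g))] = rng.choice(WIDTHS)   # perturb one layer
        return g

    population = [random_genome() for _ in range(20)]
    for _ in range(30):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]                # selection
        population = parents + [mutate(rng.choice(parents)) for _ in range(15)]

    print("best genome:", max(population, key=fitness))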

Bibliography

  1. . . "Neural Architecture Search with Reinforcement Learning". Arxiv:1611.01578 [cs]. http://arxiv.org/abs/1611.01578. See notes
  2. . . "Learning Transferable Architectures for Scalable Image Recognition". Arxiv:1707.07012 [cs, Stat]. http://arxiv.org/abs/1707.07012. See notes
  3. . . "Efficient Neural Architecture Search via Parameter Sharing". Arxiv:1802.03268 [cs, Stat]. http://arxiv.org/abs/1802.03268. See notes
  4. . . "Evolving Neural Networks Through Augmenting Topologies". Evolutionary Computation 10 (2):99–127. DOI. See notes