Learning Transferable Architectures for Scalable Image Recognition by Zoph, B., Vasudevan, V., Shlens, J., & Le, Q. V. (2018)

tags
NAS
source
(Zoph et al. 2018)

Summary

This paper is more or less a follow-up to (Zoph, Le 2017), where the search space is simultaneously widened and given more constraints (a split between a normal cell for processing and a reduction cell for pooling/downsampling). Normal cells are stacked \(N\) times, resulting in very large architectures. NASNet is obtained by searching for those cells, while the actual number of stacked cells and the number of filters of the penultimate layer are chosen separately, outside the search. A rough sketch of this stacking scheme is given below.
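A minimal sketch of the stacking scheme, not the paper's actual cells: the real normal and reduction cells are discovered by the search and combine several operations (separable convolutions, pooling, identity) per cell, whereas `NormalCell`, `ReductionCell`, and `NASNetLikeStack` here are hypothetical placeholders used only to show how \(N\) and the filter count are hyperparameters on top of the searched cells.

```python
import torch
import torch.nn as nn


class NormalCell(nn.Module):
    """Placeholder normal cell: preserves resolution and channel count."""
    def __init__(self, channels):
        super().__init__()
        self.op = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Residual-style combination, for illustration only.
        return x + self.op(x)


class ReductionCell(nn.Module):
    """Placeholder reduction cell: halves resolution, doubles channels."""
    def __init__(self, in_channels):
        super().__init__()
        self.op = nn.Sequential(
            nn.Conv2d(in_channels, 2 * in_channels, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(2 * in_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.op(x)


class NASNetLikeStack(nn.Module):
    """Stacks N normal cells per stage, with reduction cells in between.
    N and the initial filter count are set by hand, not by the search."""
    def __init__(self, num_classes=10, n=4, filters=32):
        super().__init__()
        layers = [nn.Conv2d(3, filters, 3, padding=1, bias=False)]
        channels = filters
        for stage in range(3):                       # three resolution stages
            layers += [NormalCell(channels) for _ in range(n)]
            if stage < 2:                            # no reduction after the last stage
                layers.append(ReductionCell(channels))
                channels *= 2
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.mean(dim=(2, 3))                       # global average pooling
        return self.classifier(x)


model = NASNetLikeStack(n=4, filters=32)
print(model(torch.randn(2, 3, 32, 32)).shape)        # torch.Size([2, 10])
```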

Comments

These models seem quite impractical: they are very large and expensive to produce. They do achieve SOTA results at the time of writing, but the paper explicitly notes the size of the models and the training time required. The improvement over random search is only about 1%.

Bibliography

  1. . . "Learning Transferable Architectures for Scalable Image Recognition". Arxiv:1707.07012 [cs, Stat]. http://arxiv.org/abs/1707.07012.
  2. . . "Neural Architecture Search with Reinforcement Learning". Arxiv:1611.01578 [cs]. http://arxiv.org/abs/1611.01578. See notes