Learning Transferable Architectures for Scalable Image Recognition by Zoph, B., Vasudevan, V., Shlens, J., & Le, Q. V. (2018)

tags
NAS
source
(Zoph et al. 2018)

Summary

This paper is more or less a follow-up to (Zoph and Le 2017), where the search space is simultaneously widened and further constrained (a division between a normal cell for feature processing and a reduction cell for pooling/downsampling). Normal cells are stacked \(N\) times, resulting in very large architectures. NASNet is created by searching for those cells, while the actual number of stacked cells and the number of filters of the penultimate layer are chosen separately.
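To make the macro-structure concrete, here is a minimal sketch of the stacking scheme described above, assuming PyTorch. The NormalCell and ReductionCell below are simple convolutional placeholders standing in for the searched cell structures, and the names, the value of \(N\), and the channel counts are illustrative assumptions rather than the paper's actual searched cells.

    import torch
    import torch.nn as nn

    class NormalCell(nn.Module):
        """Placeholder for a searched normal cell: preserves spatial resolution."""
        def __init__(self, channels):
            super().__init__()
            self.op = nn.Sequential(
                nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
            )

        def forward(self, x):
            # A residual combination stands in for the searched cell wiring.
            return x + self.op(x)

    class ReductionCell(nn.Module):
        """Placeholder for a searched reduction cell: halves resolution, doubles channels."""
        def __init__(self, in_channels):
            super().__init__()
            self.op = nn.Sequential(
                nn.ReLU(),
                nn.Conv2d(in_channels, 2 * in_channels, 3, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(2 * in_channels),
            )

        def forward(self, x):
            return self.op(x)

    class NASNetSkeleton(nn.Module):
        """Macro-architecture: N normal cells, then a reduction cell, repeated."""
        def __init__(self, num_classes=10, init_channels=32, n=4, num_blocks=3):
            super().__init__()
            layers = [nn.Conv2d(3, init_channels, 3, padding=1, bias=False)]
            channels = init_channels
            for block in range(num_blocks):
                layers += [NormalCell(channels) for _ in range(n)]
                if block < num_blocks - 1:  # no reduction after the last block
                    layers.append(ReductionCell(channels))
                    channels *= 2
            self.features = nn.Sequential(*layers)
            self.classifier = nn.Linear(channels, num_classes)

        def forward(self, x):
            x = self.features(x)
            x = x.mean(dim=(2, 3))  # global average pooling
            return self.classifier(x)

    model = NASNetSkeleton(n=4)
    print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])

Only the cell internals are searched; the skeleton (how many cells to stack and how many filters to use) is fixed by hand, which is what makes the searched cells transferable from CIFAR-10 to ImageNet.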

Comments

These models seem quite impractical because they are very large and hard to produce. They do achieve SOTA results at the time of writing, but at the cost of explicitly large model sizes and long training times. The improvement over random search is only about 1%.

Bibliography

Zoph, Barret, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. 2018. “Learning Transferable Architectures for Scalable Image Recognition.” arXiv:1707.07012 [Cs, Stat], April.

Zoph, Barret, and Quoc V. Le. 2017. “Neural Architecture Search with Reinforcement Learning.” arXiv:1611.01578 [Cs], February.
