https://arxiv.org/abs/1810.05749v1 Graph HyperNetworks for Neural Architecture Search

GHNs model the topology of an architecture and can therefore predict network performance more accurately than regular hypernetworks or premature early stopping. To perform NAS, architectures are randomly sampled, and the validation accuracy of networks with GHN-generated weights serves as the surrogate search signal. GHNs are fast: they can search nearly 10 times faster than other random-search methods on CIFAR-10 and ImageNet.
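The surrogate-based random search described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the search space, the scoring function, and all names are hypothetical placeholders, and the surrogate here is a stand-in for evaluating validation accuracy with GHN-generated weights.

```python
import random

def sample_architecture(rng):
    # Hypothetical search space: pick a depth and width per architecture.
    return {
        "depth": rng.choice([4, 8, 12]),
        "width": rng.choice([32, 64, 128]),
    }

def surrogate_accuracy(arch):
    # Placeholder surrogate. In the GHN setting, this would be the
    # validation accuracy of the network using weights generated by a
    # trained graph hypernetwork (no per-architecture training).
    return 0.5 + 0.001 * arch["depth"] + 0.0005 * arch["width"]

def random_search(num_samples, seed=0):
    # Sample architectures at random and keep the one with the best
    # surrogate score; only the top candidates would then be trained
    # from scratch for a final ranking.
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_samples):
        arch = sample_architecture(rng)
        score = surrogate_accuracy(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search(50)
```

The speedup comes from replacing full training of each sampled architecture with a single cheap surrogate evaluation; only the search loop itself is shown here.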
https://ai.googleblog.com/2018/10/introducing-adanet-fast-and-flexible.html?m=1 Introducing AdaNet: Fast and Flexible AutoML with Learning Guarantees