Ranking Architectures by Feature Extraction Capabilities
Debadeepta Dey · Shital Shah · Sebastien Bubeck

The fundamental problem in Neural Architecture Search (NAS) is to efficiently find high-performing architectures in a search space. We propose FEAR, a simple but powerful method for ranking architectures in any search space. FEAR leverages the viewpoint that neural networks are powerful non-linear feature extractors. By training different architectures in the search space to the same training or validation error, then freezing most of each architecture and comparing how useful the extracted features are on the task dataset of interest, we obtain quick estimates of relative performance. We validate FEAR on the Natsbench topology search space on three different datasets against competing baselines and show strong ranking correlation, especially compared to recently proposed zero-cost methods. FEAR particularly excels at ranking high-performance architectures in the search space. When used in the inner loop of discrete search algorithms like random search, FEAR can cut search time by approximately 2.4x without losing accuracy. We additionally study recently proposed zero-cost ranking measures empirically and find that their ranking performance breaks down as training proceeds, and that data-agnostic ranking scores, which ignore the dataset, do not generalize across dissimilar datasets.
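The two-stage idea described above can be illustrated with a toy sketch. This is not the paper's implementation: the "architectures" below are just one-hidden-layer NumPy MLPs distinguished by hidden width, the dataset is synthetic, and the error target, step counts, and learning rate are all illustrative choices. The sketch only shows the structure of the procedure: train each candidate to a common training-error target, freeze the feature-extracting body, briefly retrain only the head, and rank candidates by the resulting held-out accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification dataset (stand-in for the task dataset of interest).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(W1, w2, X, y, steps, lr, freeze_body=False):
    """Full-batch gradient descent on a one-hidden-layer tanh MLP."""
    for _ in range(steps):
        h = np.tanh(X @ W1)              # hidden features (the "body")
        p = sigmoid(h @ w2)              # predicted probability (the "head")
        g = (p - y) / len(y)             # logistic-loss gradient w.r.t. logits
        w2 -= lr * h.T @ g
        if not freeze_body:              # body update is skipped when frozen
            gh = np.outer(g, w2) * (1.0 - h ** 2)
            W1 -= lr * X.T @ gh
    return W1, w2

def error(W1, w2, X, y):
    p = sigmoid(np.tanh(X @ W1) @ w2)
    return float(np.mean((p > 0.5) != y))

def fear_score(hidden, err_target=0.15, max_rounds=40):
    """Hypothetical FEAR-style score for one candidate architecture."""
    W1 = rng.normal(scale=0.5, size=(8, hidden))
    w2 = rng.normal(scale=0.5, size=hidden)
    # Stage 1: train the whole network until a common training-error target.
    for _ in range(max_rounds):
        W1, w2 = train(W1, w2, Xtr, ytr, steps=50, lr=0.5)
        if error(W1, w2, Xtr, ytr) <= err_target:
            break
    # Stage 2: freeze the body, briefly retrain only the head, then score
    # how useful the frozen features are on held-out data.
    W1, w2 = train(W1, w2, Xtr, ytr, steps=100, lr=0.5, freeze_body=True)
    return 1.0 - error(W1, w2, Xva, yva)

scores = {h: fear_score(h) for h in (4, 16)}
ranking = sorted(scores, key=scores.get, reverse=True)
print(scores, ranking)
```

The relative ordering of the scores, rather than their absolute values, is what a discrete search algorithm would consume in its inner loop.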

Author Information

Debadeepta Dey (Microsoft)
Shital Shah (Microsoft Research)
Sebastien Bubeck (Microsoft Research)