TPAMI 2024
DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions
Guangrun Wang†, Changlin Li†, Liuchun Yuan, Jiefeng Peng, Xiaoyu Xian, Xiaodan Liang, Xiaojun Chang, and Liang Lin

Abstract


Neural Architecture Search (NAS), which aims to have machines design neural architectures automatically, is considered a key step toward automated machine learning. One notable branch is weight-sharing NAS, which significantly improves search efficiency and allows NAS algorithms to run on ordinary computers. Despite high expectations, this category of methods suffers from low search effectiveness. Using a generalization boundedness tool, we demonstrate that the culprit behind this drawback is untrustworthy architecture rating caused by the oversized search space of possible architectures. To address this problem, we modularize a large search space into blocks with small search spaces and develop a family of models with distilling neural architecture (DNA) techniques. The proposed models, collectively termed the DNA family, resolve multiple dilemmas of weight-sharing NAS, such as scalability, efficiency, and multi-modal compatibility. Our DNA models can rate all architecture candidates, whereas previous works can only access a sub-search space via heuristic algorithms. Moreover, under a given computational complexity constraint, our method can search for architectures with different depths and widths. Extensive experimental evaluations show that our models achieve state-of-the-art top-1 accuracies of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively. Additionally, we provide in-depth empirical analysis of, and insights into, neural architecture rating.
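
To make the block-wise supervision idea concrete, below is a minimal PyTorch-style sketch (not the authors' released code) in which a frozen teacher network is split into sequential blocks and each student block is trained to reproduce the corresponding teacher block's output features. The function name `train_block_wise`, the MSE objective, and the data-loader interface are illustrative assumptions.

```python
# Minimal sketch of block-wise distillation supervision for weight-sharing NAS.
# Assumption: teacher_blocks and student_blocks are aligned lists of nn.Module;
# each student block learns to map the teacher's input features of block i to
# the teacher's output features of block i, so every block is trained locally.
import torch
import torch.nn as nn


def train_block_wise(teacher_blocks, student_blocks, loader, epochs=1, lr=1e-3, device="cpu"):
    mse = nn.MSELoss()
    for t in teacher_blocks:
        t.eval().to(device)                          # teacher is frozen throughout
    for i, student in enumerate(student_blocks):
        student.train().to(device)
        opt = torch.optim.Adam(student.parameters(), lr=lr)
        for _ in range(epochs):
            for x, _ in loader:                      # labels unused: supervision comes from the teacher
                x = x.to(device)
                with torch.no_grad():
                    feat_in = x
                    for t in teacher_blocks[:i]:     # teacher features entering block i
                        feat_in = t(feat_in)
                    feat_out = teacher_blocks[i](feat_in)   # target features leaving block i
                loss = mse(student(feat_in), feat_out)
                opt.zero_grad()
                loss.backward()
                opt.step()
```

Because each block only has to represent its own small search space, the shared weights within a block cover far fewer candidate paths, which is the mechanism the paper credits for more trustworthy architecture ratings.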

 

 

Framework


Experiment


 

Conclusion


In this paper, we employ a generalization boundedness tool to link the inefficiency of weight-sharing NAS to unreliable architecture ratings caused by a vast search space. To address this issue, we modularize the search space into blocks and apply distilling neural architecture techniques. We explore three block-wise learning methods: supervised learning (DNA), progressive learning (DNA+), and self-supervised learning (DNA++). Our DNA family evaluates all candidate architectures, a significant advance over prior methods restricted to smaller sub-search spaces explored via heuristic algorithms. Additionally, our approach enables the search for architectures with varying depths and widths under specified computational constraints. Recognizing the pivotal role of architecture rating in NAS, we provide extensive empirical results to scrutinize this aspect. Lastly, our method attains state-of-the-art results across various tasks. Future work will extend the DNA method to NLP, 3D architectures [98], [99], [100], and generative models.
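
The claim that the DNA family can rate all candidates, rather than heuristically sampling a sub-space, can be illustrated with a small sketch: once every block records a distillation loss for each of its candidate choices, a full architecture's rating is the sum of its per-block losses, so all combinations can be ranked under a complexity budget. The dictionaries, the FLOPs accounting, and the brute-force enumeration below are illustrative assumptions, not the paper's exact search procedure.

```python
# Illustrative ranking of candidate architectures from per-block ratings.
# Assumptions: block_losses[i][op] is the block-wise distillation loss of
# candidate op in block i, and block_flops[i][op] is its computational cost.
from itertools import product


def rank_architectures(block_losses, block_flops, flops_budget):
    """Return feasible architectures sorted by total block-wise loss (lower is better)."""
    ranked = []
    choices_per_block = [list(b.keys()) for b in block_losses]
    for arch in product(*choices_per_block):                 # every combination of block choices
        flops = sum(block_flops[i][op] for i, op in enumerate(arch))
        if flops > flops_budget:                             # enforce the complexity constraint
            continue
        loss = sum(block_losses[i][op] for i, op in enumerate(arch))
        ranked.append((loss, flops, arch))
    return sorted(ranked)


# Toy usage: two blocks, two candidate operations each.
losses = [{"conv3": 0.12, "conv5": 0.10}, {"conv3": 0.20, "conv5": 0.15}]
flops = [{"conv3": 1.0, "conv5": 2.0}, {"conv3": 1.5, "conv5": 3.0}]
print(rank_architectures(losses, flops, flops_budget=4.0)[0])  # best feasible architecture
```

In practice a full search space is far too large for naive enumeration; the additive per-block rating is what makes visiting every candidate tractable, and the paper's actual search traverses the candidates more efficiently than this brute-force loop.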

 

 

Acknowledgement


This work is supported by the following grants: the National Key R&D Program of China under Grant No. 2021ZD0111601; the National Natural Science Foundation of China (NSFC) under Grant Nos. 61836012, 62006255, and 62325605; the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2023A1515011374; and the Guangdong Province Key Laboratory of Information Security Technology.