Pi-NAS: Improving Neural Architecture Search by Reducing Supernet Training Consistency Shift
Jiefeng Peng, Jiqi Zhang, Changlin Li, Guangrun Wang, Xiaodan Liang, and Liang Lin
ICCV 2021

Abstract


Recently proposed neural architecture search (NAS) methods co-train billions of architectures in a supernet and estimate their potential accuracy using the network weights detached from the supernet. However, the ranking correlation between the architectures' predicted accuracy and their actual capability is incorrect, which causes the existing NAS methods' dilemma. We attribute this ranking correlation problem to the supernet training consistency shift, including feature shift and parameter shift. Feature shift is identified as the dynamic input distribution of a hidden layer caused by random path sampling; this changing distribution affects the loss descent and ultimately the architecture ranking. Parameter shift is identified as contradictory parameter updates for a shared layer lying in different paths at different training steps; such rapidly changing parameters cannot preserve the architecture ranking. We address these two shifts simultaneously using a nontrivial supernet-Π model, called Π-NAS. Specifically, we employ a supernet-Π model that contains cross-path learning to reduce the feature consistency shift between different paths. Meanwhile, we adopt a novel nontrivial mean teacher containing negative samples to overcome parameter shift and model collision. Furthermore, our Π-NAS runs in an unsupervised manner, which can search for more transferable architectures. Extensive experiments on ImageNet and a wide range of downstream tasks (e.g., COCO 2017, ADE20K, and Cityscapes) demonstrate the effectiveness and universality of our Π-NAS compared to supervised NAS.
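
To make the two shifts concrete, below is a minimal, self-contained PyTorch sketch of a toy single-path supernet with random path sampling (the source of feature shift) and a plain exponential-moving-average mean teacher (a simplified stand-in for the nontrivial mean teacher described above). It is reconstructed from the abstract only, not from the authors' released code; all names (ToySupernetBlock, ToySupernet, ema_update, tau) are hypothetical, and cross-path learning and negative samples are intentionally omitted.

# Illustrative toy sketch only; hypothetical names, not the authors' implementation.
import copy
import random

import torch
import torch.nn as nn


class ToySupernetBlock(nn.Module):
    """One searchable layer holding several candidate operations (paths)."""

    def __init__(self, channels):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])

    def forward(self, x, op_idx=None):
        # Random path sampling: a different candidate is picked each step,
        # so the next (shared) layer sees a changing input distribution --
        # the "feature shift" described in the abstract.
        if op_idx is None:
            op_idx = random.randrange(len(self.candidates))
        return self.candidates[op_idx](x)


class ToySupernet(nn.Module):
    def __init__(self, channels=16, depth=3, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList([ToySupernetBlock(channels) for _ in range(depth)])
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x, path=None):
        # `path` is a tuple of op indices (one per block); None means a
        # freshly sampled random path, as in single-path supernet training.
        x = self.stem(x)
        for i, block in enumerate(self.blocks):
            x = block(x, None if path is None else path[i])
        return self.head(x.mean(dim=(2, 3)))


@torch.no_grad()
def ema_update(teacher, student, tau=0.99):
    # Plain mean-teacher update: the teacher is an exponential moving average
    # of the student, so shared parameters change slowly between steps,
    # which damps "parameter shift". The paper's nontrivial mean teacher
    # additionally uses negative samples, which this sketch omits.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(tau).add_(s, alpha=1.0 - tau)


student = ToySupernet()
teacher = copy.deepcopy(student)
x = torch.randn(4, 3, 32, 32)
logits = student(x)           # a new random path is sampled on every call
ema_update(teacher, student)  # slowly moving teacher weights

Calling student(x, path=(0, 1, 2)), for example, evaluates one fixed architecture with weights inherited from the supernet; the sketch only shows where the two shifts enter supernet training, not how Π-NAS resolves them.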


Framework



Experiment



Conclusion


This paper recognizes the importance of architecture ranking in NAS and attributes the ranking correlation problem to the supernet training consistency shift, including feature shift and parameter shift. To address these two shifts, we propose a nontrivial supernet-Π model, i.e., Π-NAS. Specifically, we propose a supernet-Π model with cross-path learning to reduce feature shift and a nontrivial mean teacher to cope with parameter shift. Notably, our Π-NAS can search for more transferable and universal architectures than supervised NAS. Extensive experiments on many tasks demonstrate the search effectiveness and universality of our Π-NAS compared to the NAS counterparts.