A new pipeline with ultimate search efficiency for neural architecture search
Wenbo Liu, Xiaoyun Qiao, Chunyu Zhao, Tao Deng, Fei Yan
Neural Networks, Volume 185 (2025), 107163. Published 2025-01-17. DOI: 10.1016/j.neunet.2025.107163
Abstract
We present a novel neural architecture search pipeline designed to enhance search efficiency through optimized data and algorithms. Leveraging dataset distillation techniques, our pipeline condenses large-scale target datasets into more streamlined proxy datasets, effectively reducing the computational overhead of identifying optimal neural architectures. To accommodate diverse approaches to synthetic dataset utilization, the pipeline comprises two distinct schemes. Scheme 1 constructs rich data from various Bases |B|, while Scheme 2 focuses on establishing high-quality relationship mappings within the data. Models generated through Scheme 1 exhibit outstanding scalability, demonstrating superior performance when transferred to larger, more complex tasks. Despite using less data, Scheme 2 maintains performance on the source dataset without degradation. Our research further addresses the inherent challenges of DARTS-derived algorithms, particularly in selecting candidate operations based on architectural parameters. We identify disparities in architectural parameters across different edges, which give rise to "Selection Errors" during model generation, and we propose an enhanced search algorithm in response. The proposed algorithm comprises three components (attention, regularization, and normalization) that together enable the rapid identification of high-quality models from data generated by proxy datasets. Experimental results demonstrate a significant reduction in search time, with high-quality models generated in as little as two minutes using the proposed pipeline. Through comprehensive experimentation, we validate the efficacy of both schemes and of the proposed algorithm.
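To make the "Selection Error" issue concrete, the sketch below (our illustration, not the authors' implementation) shows how DARTS-style methods rank candidate operations by their architectural parameters, and how a per-edge softmax normalization, in the spirit of the normalization component named above, puts edges on a comparable scale. The operation list and alpha values are made-up assumptions for demonstration only.

```python
import numpy as np

# Hypothetical candidate operations per edge (illustrative; DARTS uses a
# similar but larger set).
OPS = ["skip_connect", "sep_conv_3x3", "max_pool_3x3", "zero"]

# Illustrative architectural parameters (alphas) for two edges of a cell.
# Note the scale disparity: every raw alpha on edge 0 exceeds every raw
# alpha on edge 1, even though edge 1 has the sharper preference.
alphas = np.array([
    [1.9, 2.1, 1.8, 1.7],   # edge 0: weak preference for sep_conv_3x3
    [0.1, 1.2, 0.0, -0.5],  # edge 1: strong preference for sep_conv_3x3
])

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Ranking raw alphas globally (a source of "Selection Errors"): all four
# top slots go to edge 0, purely because its parameters live on a
# different scale.
print("top-4 raw alphas (flat indices):",
      np.argsort(alphas.ravel())[::-1][:4])

# Per-edge softmax normalization maps every edge onto a common
# probability scale, making operation strengths comparable across edges.
probs = softmax(alphas)
for e, p in enumerate(probs):
    best = OPS[int(p.argmax())]
    print(f"edge {e}: best op = {best}, confidence = {p.max():.2f}")
```

Running the sketch, the global raw-alpha ranking favors every operation on edge 0, while the per-edge probabilities reveal that edge 1 actually holds the more confident choice (about 0.55 versus 0.31), which is the kind of cross-edge disparity the abstract describes.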
About the Journal
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering all aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussion between biology and technology, it aims to encourage the development of biologically inspired artificial intelligence.