Exploring the Design Space of Efficient Deep Neural Networks
Fuxun Yu, Dimitrios Stamoulis, Di Wang, Dimitrios Lymberopoulos, Xiang Chen
2020 IEEE/ACM Symposium on Edge Computing (SEC), November 2020. DOI: 10.1109/SEC50012.2020.00043
This paper gives an overview of our ongoing work on design space exploration for efficient deep neural networks (DNNs), focusing on novel optimization perspectives that past work has largely overlooked. We cover two complementary aspects of efficient DNN design: (1) static architecture design efficiency and (2) dynamic model execution efficiency. In static architecture design, one of the major challenges of neural architecture search (NAS) is low search efficiency. Unlike the current mainstream focus on optimizing the search algorithm itself, we identify a new perspective: efficient search space design. In dynamic model execution, current optimization methods still target model structure redundancy, e.g., weight/filter pruning and connection pruning. We instead identify a new dimension: DNN feature map redundancy. By showcasing these new perspectives, we suggest that further gains can be attained by integrating them with current optimizations.
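To make the structure-redundancy vs. feature-map-redundancy distinction concrete, here is a minimal PyTorch sketch (not from the paper; all module and function names are illustrative) contrasting static L1-norm filter pruning, which fixes the pruned structure once, with input-dependent feature-map gating, which suppresses redundant feature maps at run time:

```python
# Illustrative sketch only: contrasts static filter pruning (structure
# redundancy) with dynamic feature-map gating (feature map redundancy).
# Not the paper's method; names like l1_filter_prune/GatedConv are hypothetical.
import torch
import torch.nn as nn


def l1_filter_prune(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Static optimization: keep only the filters with the largest L1 norms.

    The pruned structure is decided once, independent of any input.
    """
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # L1 norm per filter
    keep = scores.topk(n_keep).indices
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep]
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep]
    return pruned


class GatedConv(nn.Module):
    """Dynamic optimization: a tiny gate predicts, per input, which output
    feature maps are worth computing; the rest are zeroed out here as a
    stand-in for actually skipping their computation."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.gate = nn.Sequential(            # per-channel gating head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, out_ch))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = (self.gate(x) > 0).float()[:, :, None, None]  # hard 0/1 gate
        return self.conv(x) * mask  # redundant feature maps are suppressed


if __name__ == "__main__":
    x = torch.randn(2, 16, 32, 32)
    print(l1_filter_prune(nn.Conv2d(16, 32, 3, padding=1))(x).shape)  # [2, 16, 32, 32]
    print(GatedConv(16, 32)(x).shape)                                 # [2, 32, 32, 32]
```

Note the design difference: pruning shrinks the model for every input, while the gate's decisions vary per input, so compute savings track the actual feature-map redundancy of each example. In a real system the hard gate would be trained with a differentiable relaxation (e.g., Gumbel-softmax) and the masked channels would be skipped rather than zeroed.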