Joint Optimization of Device Placement and Model Partitioning for Cooperative DNN Inference in Heterogeneous Edge Computing

Penglin Dai; Biao Han; Ke Li; Xincao Xu; Huanlai Xing; Kai Liu

IEEE Transactions on Mobile Computing, vol. 24, no. 1, pp. 210-226
DOI: 10.1109/TMC.2024.3457793
Publication date: 2024-09-10
URL: https://ieeexplore.ieee.org/document/10675335/
Impact factor: 9.2; JCR Q1 (Computer Science, Information Systems)
Citations: 0
Abstract
EdgeAI is a compelling approach for deploying DNN models at the network edge through model partitioning. However, most existing partitioning strategies concentrate on homogeneous environments, neglecting the effect of device placement and rendering them inapplicable to heterogeneous settings. Moreover, these strategies often rely on either data parallelism or model parallelism, each with its own limitations, including data-synchronization and communication overhead. This paper aims to enhance inference performance via a pipelined system of devices by leveraging both the parallel and sequential relationships among them. Accordingly, the problem of Multi-Device Cooperative DNN Inference is formulated as the joint optimization of device placement and model partitioning, taking into account the unique characteristics of heterogeneous edge resources and DNN models, with the goal of maximizing throughput. To this end, we propose an evolutionary device-placement technique that determines the pipeline stage of each device by enhancing a variant of particle swarm optimization. Subsequently, an adaptive model-partitioning strategy is developed that combines intra-layer and inter-layer model partitioning, based on dynamic programming and the input-output mapping of DNN layers respectively, to accommodate edge resource limitations. Finally, we construct a simulation model and a prototype, and extensive results demonstrate that our proposed algorithm outperforms current state-of-the-art algorithms.
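The abstract does not give the partitioning algorithm's details, but the core idea it names, dynamic-programming model partitioning of a layer chain into pipeline stages to maximize throughput, can be illustrated with a minimal sketch. The sketch below assumes hypothetical per-layer compute costs and per-device speeds (all names and numbers are illustrative, not from the paper): it finds the contiguous layer-to-stage assignment that minimizes the bottleneck stage time, which is equivalent to maximizing pipeline throughput.

```python
def partition_chain(layer_cost, device_speed):
    """Split a chain of DNN layers into len(device_speed) contiguous pipeline
    stages so that the slowest (bottleneck) stage time is minimized.

    layer_cost[i]  : compute cost of layer i (arbitrary work units)
    device_speed[d]: speed of the device running pipeline stage d
    Returns (bottleneck_time, stages), where stages is a list of
    (start, end) half-open layer ranges, one per device in pipeline order.
    """
    n, k = len(layer_cost), len(device_speed)
    # Prefix sums let us compute the cost of any contiguous layer block in O(1).
    prefix = [0.0]
    for c in layer_cost:
        prefix.append(prefix[-1] + c)

    INF = float("inf")
    # dp[j][d] = minimal bottleneck time covering the first j layers
    #            with the first d devices; cut[j][d] remembers the split point.
    dp = [[INF] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0

    for d in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(d - 1, j):  # device d runs layers i .. j-1
                stage_time = (prefix[j] - prefix[i]) / device_speed[d - 1]
                cand = max(dp[i][d - 1], stage_time)
                if cand < dp[j][d]:
                    dp[j][d] = cand
                    cut[j][d] = i

    # Backtrack the stage boundaries from the stored split points.
    stages, j = [], n
    for d in range(k, 0, -1):
        i = cut[j][d]
        stages.append((i, j))
        j = i
    stages.reverse()
    return dp[n][k], stages


if __name__ == "__main__":
    # Four layers, two equally fast devices: the balanced split [0,2)|[2,4)
    # gives a bottleneck of 6 units, i.e. throughput 1/6 inferences per unit.
    bottleneck, stages = partition_chain([4, 2, 2, 4], [1.0, 1.0])
    print(bottleneck, stages)
```

Note this sketch covers only the inter-layer (sequential) dimension of the paper's design; the abstract's intra-layer partitioning and the PSO-based device placement, which decides which physical device occupies each stage, would sit on top of a subroutine like this.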
About the Journal
IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing. This includes (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering aspects like mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.