Yukio Toyoshima;Tomoya Hatano;Tatsuya Shimada;Tomoaki Yoshida
Title: Dynamic Hardware Accelerator Selection Achieving Optimal Utilization of Resources
Journal: IEICE Communications Express, vol. 13, no. 12, pp. 504-508
DOI: 10.23919/comex.2024XBL0152
Published: 2024-10-10
URL: https://ieeexplore.ieee.org/document/10713510/
Citations: 0
Abstract
Edge computing with offloading over a network is being considered as a way to let users run advanced applications such as autonomous control. When the network transmission time and/or task-processing time changes, the total offloading time also varies. To guarantee low latency of the offloading service in this case, an over-performing hardware accelerator (HWA) must be assigned at the edge, which leads to inefficient HWA resource allocation. To address this issue, we propose a method of selecting an HWA in response to changes in network transmission time and/or task-processing time. The proposed method refers to past transmission times and the input data size of the offload, enabling prediction of changes in the network transmission time and/or task-processing time. We evaluated the offloading time through simulation and found that our method completes a high percentage of the offloads requested by users while using HWAs efficiently.
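The selection idea the abstract describes can be sketched roughly as follows: predict the upcoming transmission time from past samples and the input data size, then pick the least-capable HWA whose predicted total offload time still meets the latency target. All names, the moving-average predictor, and the per-byte processing model are illustrative assumptions, not the authors' actual algorithm.

```python
# Minimal sketch: HWA selection from predicted offload time.
# The predictor and the per-byte cost model are assumptions for illustration.
from statistics import mean

def predict_transmission_time(past_times_per_byte, data_size_bytes):
    """Estimate transmission time as (mean of past per-byte times) * data size."""
    return mean(past_times_per_byte) * data_size_bytes

def select_hwa(hwas, predicted_tx_time, data_size_bytes, deadline):
    """hwas: list of (name, processing_time_per_byte).
    Return the least-performant HWA whose predicted total offload time
    (transmission + processing) still meets the deadline, or None."""
    for name, per_byte in sorted(hwas, key=lambda h: -h[1]):  # slowest first
        total = predicted_tx_time + per_byte * data_size_bytes
        if total <= deadline:
            return name
    return None  # no HWA can meet the deadline

# Hypothetical pool of HWAs with different per-byte processing times.
hwas = [("fpga-small", 4e-6), ("fpga-large", 2e-6), ("gpu", 1e-6)]
tx = predict_transmission_time([3e-6, 5e-6, 4e-6], data_size_bytes=100_000)
choice = select_hwa(hwas, tx, data_size_bytes=100_000, deadline=1.0)
```

Under these toy numbers the slowest accelerator already meets the 1 s deadline, so it is chosen and the faster HWAs remain free for other offloads, which is the resource-efficiency goal the abstract states.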