Title: AoI-Aware Inference Services in Edge Computing via Digital Twin Network Slicing
Authors: Yuncan Zhang; Weifa Liang; Zichuan Xu; Wenzheng Xu; Min Chen
Journal: IEEE Transactions on Services Computing, vol. 17, no. 6, pp. 3154-3170 (JCR Q1, Computer Science, Information Systems)
Publication date: 2024-08-02
DOI: 10.1109/TSC.2024.3436705
URL: https://ieeexplore.ieee.org/document/10620407/
Citations: 0
Abstract
The advance of Digital Twin (DT) technology sheds light on seamless cyber-physical integration under the Industry 4.0 initiative. Through continuous synchronization with their physical objects, DTs can power inference service models for the analysis, emulation, optimization, and prediction of those objects. With the proliferation of DTs, Digital Twin Network (DTN) slicing is emerging as a new paradigm through which service providers deliver differentiated quality of service, where each DTN is a virtual network consisting of a set of inference service models whose source data come from a group of DTs, and these inference service models provide users with differentiated qualities of service. Mobile Edge Computing (MEC), a new computing paradigm, shifts computing power towards the edge of core networks, making it well suited to delay-sensitive inference services. In this paper, we consider Age of Information (AoI)-aware inference service provisioning in an MEC network through DTN slicing requests, where the accuracy of the inference services provided by each DTN slice is determined by the Expected Age of Information (EAoI) of its inference model. Specifically, we first introduce a novel AoI-aware inference service framework for DTN slicing requests. We then formulate the expected cost minimization problem of jointly placing DT and inference service model instances, and develop efficient algorithms for the problem based on the proposed framework. We also consider dynamic DTN slicing request admissions, where requests arrive one by one without knowledge of future arrivals; for this setting we devise an online algorithm with a provable competitive ratio, assuming that the DTs of all objects have already been placed. Finally, we evaluate the performance of the proposed algorithms through simulations.
Simulation results demonstrate that the proposed algorithms are promising, and that the proposed online algorithm admits more than 6% more requests than its counterpart.
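The abstract does not define EAoI formally. As a hedged illustration of the quantity involved: under a common periodic-update model, where a DT is synchronized every T time units and each update takes d time units to arrive, the age of information follows a sawtooth pattern, so its time average is roughly T/2 + d. A minimal sketch under that standard model (which may differ from the paper's exact EAoI definition):

```python
def expected_aoi(sync_period: float, update_delay: float) -> float:
    """Time-averaged Age of Information for a DT synchronized every
    `sync_period` time units, where each update arrives `update_delay`
    units after it is generated. Assumes the standard periodic-update
    (sawtooth) AoI model, not necessarily the paper's EAoI definition."""
    # Age resets to `update_delay` at each arrival and grows linearly
    # until the next one, so the time average is the sawtooth midpoint
    # plus the delivery delay.
    return sync_period / 2.0 + update_delay

print(expected_aoi(10.0, 1.0))  # 6.0
```

Under this model, the accuracy of a slice's inference model degrades as its EAoI grows, which is why placement decisions that shorten synchronization paths matter.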
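The online admission algorithm itself is not described in the abstract. Purely as an illustrative sketch of one common technique behind provable competitive ratios, threshold-based admission with an exponentially growing resource price, one might write the following (all names and parameters are hypothetical, and this is not the paper's algorithm):

```python
def admit_requests(requests, capacity, price_base=1.0, price_growth=2.0):
    """Threshold-based online admission: requests arrive one by one as
    (value, demand) pairs, and each is admitted only if its value covers
    a resource price that grows exponentially with current utilization.
    Hypothetical sketch of a standard technique, not the paper's method."""
    used = 0.0
    admitted = []
    for value, demand in requests:
        utilization = used / capacity
        # Price per unit of resource rises from 0 (idle) as capacity fills,
        # so low-value requests are rejected once the system is loaded.
        price = price_base * (price_growth ** utilization - 1.0)
        if used + demand <= capacity and value >= price * demand:
            admitted.append((value, demand))
            used += demand
    return admitted
```

With capacity 10, the request (1.0, 5.0) is admitted at zero utilization, while a subsequent low-value request such as (0.01, 5.0) is rejected because the per-unit price has risen to about 0.41.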
Journal introduction:
IEEE Transactions on Services Computing encompasses the computing and software aspects of the science and technology of services innovation research and development. It places emphasis on algorithmic, mathematical, statistical, and computational methods central to services computing. Topics covered include Service Oriented Architecture, Web Services, Business Process Integration, Solution Performance Management, and Services Operations and Management. The transactions address mathematical foundations, security, privacy, agreement, contract, discovery, negotiation, collaboration, and quality of service for web services. It also covers areas like composite web service creation, business and scientific applications, standards, utility models, business process modeling, integration, collaboration, and more in the realm of Services Computing.