Optimizing energy efficiency in MEC networks: a deep learning approach with Cybertwin-driven resource allocation

Umesh Kumar Lilhore, Sarita Simaiya, Surjeet Dalal, Neetu Faujdar, Roobaea Alroobaea, Majed Alsafyani, Abdullah M. Baqasah, Sultan Algarni

Journal of Cloud Computing, published 2024-08-03. DOI: https://doi.org/10.1186/s13677-024-00688-8
Abstract
Cybertwin (CT) is an innovative network architecture that digitally represents people and things in a virtual environment, with Cybertwin instances carrying significantly more influence than regular virtual machines (VMs). Cybertwin-driven networks, combined with Mobile Edge Computing (MEC), provide practical options for transmitting IoT-enabled data. This research introduces a hybrid methodology that integrates deep learning with Cybertwin-driven resource allocation to enhance energy-efficient workload offloading and resource management in MEC networks. Workload offloading is essential in MEC networks because many applications demand substantial resources. The Cybertwin-driven approach treats user mobility, virtualization, processing power, load migrations, and resource demand as crucial elements in the offloading decision. The model optimizes job allocation between local and remote execution using a task-offloading strategy to reduce the operating burden on the MEC network. It combines a hybrid partitioning approach with a cost function to allocate resources efficiently. The cost function accounts for the energy consumption and service delays associated with job assignment, execution, and fulfilment. The model evaluates the cost of several partitioning and offloading options and selects the lowest-cost one to improve energy efficiency and performance. The approach employs a deep learning architecture called "CNN-LSTM-TL" to accomplish energy-efficient task offloading, utilizing pre-trained transfer learning models; batch normalization is used to speed up model training and improve its robustness. The model is trained and evaluated on an extensive public mobile edge computing dataset. The experimental findings confirm the efficacy of the proposed methodology, showing a 20% decrease in energy usage compared to conventional methods while achieving comparable or superior performance. Simulation studies highlight the advantages of incorporating Cybertwin-driven insights into resource allocation and workload-offloading techniques. This research advances energy-efficient and resource-aware MEC networks by incorporating Cybertwin-driven techniques.
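The abstract describes a cost-driven offloading decision without giving the exact cost function. Below is a minimal sketch, assuming the cost of each candidate partitioning/offloading option is a weighted sum of its estimated energy consumption and service delay; the `Candidate` fields, the weights `alpha` and `beta`, and the linear cost form are illustrative assumptions rather than the paper's actual formulation.

```python
# Minimal sketch of a cost-based offloading decision. The cost of a candidate
# placement is assumed to be a weighted sum of its energy consumption and
# service delay; all names, weights, and the cost form are illustrative.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str          # e.g. "local", "full-offload", "split-50/50"
    energy_j: float    # estimated energy consumption (joules)
    delay_s: float     # estimated service delay (seconds)


def cost(c: Candidate, alpha: float = 0.5, beta: float = 0.5) -> float:
    """Weighted energy/delay cost for one partitioning-and-offloading option."""
    return alpha * c.energy_j + beta * c.delay_s


def choose_offloading(candidates: list[Candidate]) -> Candidate:
    """Evaluate every segmentation/offloading option and pick the cheapest one."""
    return min(candidates, key=cost)


if __name__ == "__main__":
    options = [
        Candidate("local", energy_j=4.2, delay_s=0.9),
        Candidate("full-offload", energy_j=2.8, delay_s=1.4),
        Candidate("split-50/50", energy_j=3.1, delay_s=1.0),
    ]
    best = choose_offloading(options)
    print(f"Selected: {best.name} (cost={cost(best):.2f})")
```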
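The abstract also names a "CNN-LSTM-TL" architecture with batch normalization but does not give its layer configuration. The following Keras sketch shows a generic CNN-LSTM classifier with batch normalization producing an offloading decision; the layer sizes, input shape, number of output classes, and the note on transfer-learning fine-tuning are assumptions, not the authors' published configuration.

```python
# Sketch of a CNN-LSTM classifier with batch normalization, in the spirit of
# the "CNN-LSTM-TL" architecture described in the abstract. Layer sizes, the
# input shape (time steps x features), and the number of offloading classes
# are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_cnn_lstm(time_steps: int = 32, n_features: int = 8,
                   n_classes: int = 3) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(time_steps, n_features)),
        # Convolutional feature extractor; batch normalization speeds up and
        # stabilizes training, as noted in the abstract.
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling1D(pool_size=2),
        # Recurrent part capturing temporal load/mobility patterns.
        layers.LSTM(64),
        layers.Dense(32, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),  # offloading decision
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# For transfer learning, one would load pre-trained weights into the lower
# layers and fine-tune only the top Dense layers; the abstract does not
# specify which pre-trained models the authors use.
if __name__ == "__main__":
    build_cnn_lstm().summary()
```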