Computing Task Offloading in Vehicular Edge Network via Deep Reinforcement Learning
Beibei He, Shengchao Su, Yiwang Wang
Recent Patents on Engineering, published 2024-01-16
DOI: 10.2174/0118722121283037231231064521
Citations: 0
Abstract
In recent years, with the development of the Internet of Vehicles, a variety of novel in-vehicle application devices have emerged, imposing increasingly stringent latency requirements. Vehicular edge networks (VEN) can fully exploit network edge devices, such as roadside units (RSUs), for collaborative processing, which can effectively reduce latency.
Most extant studies, including patents, assume that RSUs have sufficient computing resources to provide unlimited services. In practice, however, their computing resources become constrained as the number of processing tasks grows, which restricts delay-sensitive vehicular applications. To address this problem, this paper proposes a vehicle-to-vehicle computing task offloading method based on deep reinforcement learning, which fully considers the remaining available computational resources of neighboring vehicles to minimize total task processing latency and improve the offloading success rate.
In a multi-service-vehicle scenario, the analytic hierarchy process (AHP) was first used to prioritize the computing tasks of user vehicles. Next, an improved sequence-to-sequence (Seq2Seq) task scheduling model combined with an attention mechanism was designed, and the model was trained with an actor-critic (AC) reinforcement learning algorithm whose optimization goal was to reduce the processing delay of computing tasks and improve the offloading success rate. On this basis, a task offloading strategy optimization model based on AHP-AC was obtained.
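The abstract does not give the AHP criteria or comparison values used in the paper. As a minimal illustrative sketch with a hypothetical comparison matrix, AHP priority weights can be approximated by normalizing each column of the pairwise-comparison matrix and averaging the rows:

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Approximate AHP priority weights via the normalized
    column-sum method (an eigenvector approximation)."""
    normalized = pairwise / pairwise.sum(axis=0)  # normalize each column
    return normalized.mean(axis=1)                # row averages = weights

# Hypothetical example: three task criteria (e.g. deadline, data size,
# compute demand). Entry (i, j) says how much more important
# criterion i is than criterion j.
comparison = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
weights = ahp_weights(comparison)
print(weights)  # weights sum to 1; the first criterion dominates
```

Task priorities would then follow from scoring each task's attributes against these weights; the paper's actual criteria and scoring are not specified here.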
The average latency and execution success rate are used as performance metrics to compare the proposed method with three other task offloading methods: only-local processing, a greedy-strategy-based algorithm, and a random algorithm. In addition, experimental validation with varying CPU frequency and numbers of service vehicles (SVs) is carried out to demonstrate the strong generalization ability of the proposed method.
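The greedy baseline is not specified in detail in the abstract. One common formulation, sketched here purely as an assumption, assigns each task to the neighboring vehicle with the most remaining CPU capacity:

```python
def greedy_offload(tasks: dict, vehicles: dict) -> dict:
    """Illustrative greedy baseline: assign each task (CPU cycles needed),
    largest first, to the vehicle with the most remaining capacity.
    Returns task id -> vehicle id, or None when offloading fails."""
    assignments = {}
    remaining = dict(vehicles)  # vehicle id -> available CPU cycles
    for task_id, cycles in sorted(tasks.items(), key=lambda t: -t[1]):
        best = max(remaining, key=remaining.get)
        if remaining[best] >= cycles:
            remaining[best] -= cycles
            assignments[task_id] = best
        else:
            assignments[task_id] = None  # no capacity; process locally
    return assignments

# Hypothetical usage: two tasks, two service vehicles.
assignments = greedy_offload({"t1": 5, "t2": 3}, {"v1": 6, "v2": 4})
```

Unlike the learned AHP-AC policy, such a heuristic ignores task priorities and long-term resource dynamics, which is what the comparison in the paper is designed to expose.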
The simulation results show that the proposed method outperforms the other methods in reducing task processing delay and improving the task offloading success rate, thereby addressing the limited execution of delay-sensitive tasks caused by insufficient computational resources.
Journal Introduction:
Recent Patents on Engineering publishes review articles by experts on recent patents in the major fields of engineering. A selection of important and recent patents on engineering is also included in the journal. The journal is essential reading for all researchers involved in engineering sciences.