Title: Delay-aware Intelligent Task Offloading Strategy in Vehicular Fog Computing
Authors: Indranil Sarkar, Sanjay Kumar
DOI: 10.1109/CSI54720.2022.9924066
Venue: 2022 International Conference on Connected Systems & Intelligence (CSI)
Published: 2022-08-31
Citations: 1
Abstract
Delay-aware Intelligent Task Offloading Strategy in Vehicular Fog Computing
In the era of the Internet of Things (IoT), data offloading has become a promising and crucial strategy for improving overall system performance and providing quality of service (QoS). In this context, fog computing has recently attracted considerable interest from both industry and academia. In this paper, we propose a delay-aware task offloading strategy for mobile fog-based networks. We consider several vehicles moving along a one-way road, some of which act as client vehicles while others act as mobile fog nodes. Each fog node allocates its available resources to the requesting client vehicles in its proximity. However, because of the dynamic nature of the vehicular environment, it is difficult to design a scheme that decides whether computing tasks should be allocated to the local on-board CPU or to neighbouring fog nodes. To this end, the paper proposes a deep reinforcement learning based intelligent task offloading for vehicles in motion (ITOVM) policy that minimizes the overall latency of the network subject to vehicle mobility and communication bandwidth constraints. The proposed ITOVM policy is formulated as a Markov decision process (MDP), which is solved using a deep Q-network (DQN). Finally, extensive simulation results demonstrate the efficacy and performance gains of the proposed approach compared to several baseline algorithms.
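The decision problem the abstract describes, choosing between the local on-board CPU and a nearby fog node so as to minimize task latency, can be sketched as a small MDP. The toy sketch below uses tabular Q-learning as a stand-in for the paper's DQN, with reward defined as negative latency; every number in it (CPU speeds, upload bandwidth, task-size buckets) is an illustrative assumption, not a value from the paper.

```python
import random

# Toy version of the local-vs-offload decision: the state is a task-size
# bucket, the action is where to run the task, and the reward is negative
# latency. Tabular Q-learning stands in for the paper's DQN; all constants
# below are hypothetical.

LOCAL, OFFLOAD = 0, 1
TASK_SIZES = [0, 1]          # 0 = small task, 1 = large task (assumed buckets)
CPU_LOCAL = 1.0              # on-board CPU speed, cycles per ms (assumed)
CPU_FOG = 4.0                # fog-node CPU speed, cycles per ms (assumed)
BANDWIDTH = 2.0              # upload bandwidth, bits per ms (assumed)

def latency(state, action):
    """Latency (ms) of running a task of the given size bucket."""
    cycles = 2.0 if state == 0 else 10.0   # compute demand per bucket
    bits = 4.0 if state == 0 else 6.0      # upload size per bucket
    if action == LOCAL:
        return cycles / CPU_LOCAL
    # Offloading pays an upload delay, then runs on the faster fog CPU.
    return bits / BANDWIDTH + cycles / CPU_FOG

def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    """One-step Q-learning with epsilon-greedy exploration. Each episode
    is a single offloading decision, so no bootstrapping term is needed."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in TASK_SIZES]
    for _ in range(episodes):
        s = rng.choice(TASK_SIZES)
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = max((LOCAL, OFFLOAD), key=lambda x: q[s][x])
        r = -latency(s, a)                 # reward = negative delay
        q[s][a] += alpha * (r - q[s][a])
    return q

q = train()
policy = [max((LOCAL, OFFLOAD), key=lambda a: q[s][a]) for s in TASK_SIZES]
```

Under these assumed constants, the learned greedy policy keeps the small task local (2.0 ms locally vs 2.5 ms offloaded) and offloads the large one (5.5 ms offloaded vs 10.0 ms locally), which mirrors the trade-off the abstract describes between on-board computation and fog-node assistance.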