{"title":"Distributed Computation Offloading with Low Latency for Artificial Intelligence in Vehicular Networking","authors":"Dengzhi Liu, Fan Sun, Weizheng Wang, K. Dev","doi":"10.1109/MCOMSTD.0003.2100100","DOIUrl":null,"url":null,"abstract":"Vehicular networking is a communication platform that integrates the computing power of vehicles, roadside units, and infrastructures, which is capable of offering services to terminals characterized by low latency, high bandwidth, and reliability. Artificial intelligence (AI) has been developed rapidly over the past few years, and numerous AI applications requiring high computing power in vehicular networking have emerged (e.g., automatic driving, collision avoidance, and trajectory prediction). However, the computation of the AI model requires high computing power, and the vehicles on the road have low computation capability, which significantly hinder the development of intelligent transportation based on AI in vehicular networking. In this article, a distributed computatin offloading scheme is developed, which can be used to outsource the tasks of the AI model computation to nearby vehicles and roadside units in vehicular networking. To reduce the computational burden and decrease the latency of the computation on the vehicle side, the optimized genetic algorithm is adopted to divide the computation of the sigmoid function into multiple sub-tasks. Moreover, secure multi-party computation and homomorphic encryption are applied in the sub-task computation to enhance the security of the AI model computation in vehicular networking. As indicated by the security analysis, the proposed scheme can be proved to support privacy preservation in the multi-party computation of the AI model. As revealed by the simulation results, the proposed scheme can be performed with low computational time with different lengths of keys and transmitted parameters in practice.","PeriodicalId":36719,"journal":{"name":"IEEE Communications Standards Magazine","volume":"7 1","pages":"74-80"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Communications Standards Magazine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MCOMSTD.0003.2100100","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
Citations: 1
Abstract
Vehicular networking is a communication platform that integrates the computing power of vehicles, roadside units, and infrastructure, and is capable of offering terminals services characterized by low latency, high bandwidth, and high reliability. Artificial intelligence (AI) has developed rapidly over the past few years, and numerous AI applications requiring high computing power have emerged in vehicular networking (e.g., autonomous driving, collision avoidance, and trajectory prediction). However, AI model computation demands high computing power, while vehicles on the road have low computation capability, which significantly hinders the development of AI-based intelligent transportation in vehicular networking. In this article, a distributed computation offloading scheme is developed that outsources AI model computation tasks to nearby vehicles and roadside units in vehicular networking. To reduce the computational burden and the latency of computation on the vehicle side, an optimized genetic algorithm is adopted to divide the computation of the sigmoid function into multiple sub-tasks. Moreover, secure multi-party computation and homomorphic encryption are applied in the sub-task computation to enhance the security of AI model computation in vehicular networking. The security analysis shows that the proposed scheme supports privacy preservation in the multi-party computation of the AI model. The simulation results reveal that the proposed scheme runs with low computational time for different key lengths and transmitted parameters in practice.
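For illustration only, the Python sketch below shows the general offloading pattern the abstract describes: a vehicle partitions a batch of sigmoid evaluations into sub-tasks and dispatches them to nearby nodes, then reassembles the results. The equal-sized partitioning, the `partition` and `offload` helpers, and the omission of the genetic-algorithm optimization and of the secure multi-party computation / homomorphic encryption layer are simplifications introduced here, not details taken from the paper.

```python
import math
import random


def sigmoid(x: float) -> float:
    """Standard logistic sigmoid, 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))


def partition(values, n_workers):
    """Split a batch of pre-activation values into roughly equal sub-tasks.

    The paper optimizes this split with a genetic algorithm; equal-sized
    chunks are used here purely for illustration.
    """
    size = math.ceil(len(values) / n_workers)
    return [values[i:i + size] for i in range(0, len(values), size)]


def offload(sub_task):
    """Stand-in for a nearby vehicle or roadside unit computing its sub-task.

    The secure multi-party computation and homomorphic encryption applied in
    the paper are omitted in this sketch; each worker simply evaluates the
    sigmoid on its chunk.
    """
    return [sigmoid(x) for x in sub_task]


if __name__ == "__main__":
    # A batch of hypothetical neuron pre-activations produced on the vehicle.
    pre_activations = [random.uniform(-5.0, 5.0) for _ in range(12)]

    # Divide the work into sub-tasks and "offload" each to a nearby node.
    sub_tasks = partition(pre_activations, n_workers=3)
    results = [y for task in sub_tasks for y in offload(task)]

    # Reassembled results match a purely local computation.
    assert all(abs(y - sigmoid(x)) < 1e-12
               for x, y in zip(pre_activations, results))
    print(results)
```

In a real deployment the sub-tasks would be transmitted over the vehicular network and protected by the cryptographic mechanisms the abstract mentions, so the local function call used here merely marks where that exchange would occur.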