{"title":"Mobility-Aware Online Content Caching for Vehicular Networks based on Deep Reinforcement Learning","authors":"Ke Li, Shunrui Xiong, Qiang Yang","doi":"10.1109/ISPCE-ASIA57917.2022.9970809","DOIUrl":null,"url":null,"abstract":"The proper design of mobility-aware content caching scheme in vehicular networks is the critical expeditor for an efficient Intelligent Transportation System, which enables diverse applications such as content dissemination and the entertainment for commuting passengers. Due to the dynamics characteristic caused by the mobility of vehicles, it is relatively hard to implement accurate caching prediction and collect useful data samples with the traditional method. Using the recent advances in training deep neural networks, we present a deep reinforcement learning framework, namely RL-ResNet-v1, that learns content chunk allocation and makes online chunk compensation policy from high-dimensional inputs corresponding to the characteristics and requirements of users passing by multiple Road Side Units (RSUs) in a Vehicle-to-Infrastructure scenario. The realized online content caching scheme serves to reduce data redundancy in each RSU with finite-capacity while promoting cache hit ratio that should meet chunk sequentially downloaded requirement. Simulation results show our content caching scheme not only achieves more than 20% improvement of the cache hit ratio, and effective cache ratio compared to baseline schemes, but also adapt to the temporal variation of vehicle speed and network bandwidth.","PeriodicalId":197173,"journal":{"name":"2022 IEEE International Symposium on Product Compliance Engineering - Asia (ISPCE-ASIA)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Symposium on Product Compliance Engineering - Asia (ISPCE-ASIA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISPCE-ASIA57917.2022.9970809","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The proper design of a mobility-aware content caching scheme in vehicular networks is a critical enabler of an efficient Intelligent Transportation System, supporting diverse applications such as content dissemination and entertainment for commuting passengers. Due to the dynamic network conditions caused by vehicle mobility, traditional methods struggle to make accurate caching predictions and to collect useful data samples. Leveraging recent advances in training deep neural networks, we present a deep reinforcement learning framework, named RL-ResNet-v1, that learns content chunk allocation and forms an online chunk compensation policy from high-dimensional inputs describing the characteristics and requirements of users passing by multiple Road Side Units (RSUs) in a Vehicle-to-Infrastructure scenario. The resulting online content caching scheme reduces data redundancy in each finite-capacity RSU while improving the cache hit ratio under the requirement that chunks be downloaded sequentially. Simulation results show that our content caching scheme not only achieves more than a 20% improvement in cache hit ratio and effective cache ratio over baseline schemes, but also adapts to temporal variations in vehicle speed and network bandwidth.
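To make the caching decision loop concrete, the sketch below shows a minimal DQN-style agent for a single finite-capacity RSU: a state vector summarizing chunk popularity, vehicle dwell times, and cache occupancy is mapped to Q-values over candidate chunk placements. This is an illustrative assumption only; the paper's RL-ResNet-v1 architecture, state encoding, and reward shaping are not specified in the abstract, and all names here are hypothetical.

```python
# Hypothetical sketch of a DQN-style caching agent for one finite-capacity RSU.
# The network layout, state features, and reward are assumptions for illustration;
# they are not taken from the RL-ResNet-v1 design described in the paper.
import random
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Maps an RSU state vector (e.g. per-chunk request rates, estimated vehicle
    dwell times, current cache occupancy) to Q-values over caching actions."""

    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class CachingAgent:
    """Epsilon-greedy agent: each action selects which content chunk to (re)place
    in the RSU cache for the next decision epoch."""

    def __init__(self, state_dim: int, num_actions: int,
                 gamma: float = 0.99, lr: float = 1e-3, eps: float = 0.1):
        self.q = QNetwork(state_dim, num_actions)
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.gamma, self.eps, self.num_actions = gamma, eps, num_actions

    def act(self, state) -> int:
        if random.random() < self.eps:
            return random.randrange(self.num_actions)
        with torch.no_grad():
            q_values = self.q(torch.as_tensor(state, dtype=torch.float32))
        return int(q_values.argmax())

    def update(self, state, action: int, reward: float, next_state) -> float:
        # One-step TD update; the reward could combine a cache-hit bonus with a
        # redundancy penalty across RSUs, as the abstract suggests (exact shaping assumed).
        s = torch.as_tensor(state, dtype=torch.float32)
        s_next = torch.as_tensor(next_state, dtype=torch.float32)
        with torch.no_grad():
            target = reward + self.gamma * self.q(s_next).max()
        pred = self.q(s)[action]
        loss = nn.functional.mse_loss(pred, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return float(loss)
```

In use, each RSU would observe its state at every decision epoch, call `act` to pick a chunk placement, and call `update` with the observed hit/miss reward; an experience replay buffer and target network, standard in DQN training, are omitted here for brevity.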