{"title":"Energy Allocation for Vehicle-to-Grid Settings: A Low-Cost Proposal Combining DRL and VNE","authors":"Peiying Zhang;Ning Chen;Neeraj Kumar;Laith Abualigah;Mohsen Guizani;Youxiang Duan;Jian Wang;Sheng Wu","doi":"10.1109/TSUSC.2023.3307551","DOIUrl":null,"url":null,"abstract":"As electric vehicle (EV) ownership becomes more commonplace, partly due to government incentives, there is a need also to design solutions such as energy allocation strategies to more effectively support sustainable vehicle-to-grid (V2G) applications. Therefore, this work proposes an energy allocation strategy, designed to minimize the electricity cost while improving the operating revenue. Specifically, V2G is abstracted as a three-domain network architecture to facilitate flexible, intelligent, and scalable energy allocation decision-making. Furthermore, this work combines virtual network embedding (VNE) and deep reinforcement learning (DRL) algorithms, where a DRL-based agent model is proposed, to adaptively perceives environmental features and extracts the feature matrix as input. In particular, the agent consists of a four-layer architecture for node and link embedding, and jointly optimizes the decision-making through a reward mechanism and gradient back-propagation. Finally, the effectiveness of the proposed strategy is demonstrated through simulation case studies. Specifically, compared to the used benchmarks, it improves the VNR acceptance ratio, Long-term average revenue, and Long-term average revenue-cost ratio indicators by an average of 3.17%, 191.36, and 2.04%, respectively. To the best of our knowledge, this is one of the first attempts combining VNE and DRL to provide an energy allocation strategy for V2G.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 1","pages":"75-87"},"PeriodicalIF":3.0000,"publicationDate":"2023-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Sustainable Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10226295/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Abstract
As electric vehicle (EV) ownership becomes more commonplace, partly due to government incentives, there is also a need to design solutions, such as energy allocation strategies, that more effectively support sustainable vehicle-to-grid (V2G) applications. Therefore, this work proposes an energy allocation strategy designed to minimize electricity cost while improving operating revenue. Specifically, V2G is abstracted as a three-domain network architecture to facilitate flexible, intelligent, and scalable energy allocation decision-making. Furthermore, this work combines virtual network embedding (VNE) and deep reinforcement learning (DRL): a DRL-based agent model is proposed that adaptively perceives environmental features and extracts a feature matrix as input. In particular, the agent adopts a four-layer architecture for node and link embedding and jointly optimizes decision-making through a reward mechanism and gradient back-propagation. Finally, the effectiveness of the proposed strategy is demonstrated through simulation case studies. Specifically, compared to the benchmark methods, it improves the virtual network request (VNR) acceptance ratio, long-term average revenue, and long-term average revenue-cost ratio by an average of 3.17%, 191.36, and 2.04%, respectively. To the best of our knowledge, this is one of the first attempts to combine VNE and DRL to provide an energy allocation strategy for V2G.
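To make the abstract's description concrete, the sketch below shows the general shape of a DRL agent for node embedding: a small policy network that reads a per-node feature matrix of the substrate (physical) network and outputs a probability distribution over candidate nodes, trained with a reward signal and gradient back-propagation. This is a minimal illustrative sketch, not the paper's implementation; the layer sizes, feature dimensions, and REINFORCE-style reward update are assumptions.

```python
# Hypothetical sketch of a DRL node-embedding agent (assumed design, not the authors' model).
import torch
import torch.nn as nn

class NodeEmbeddingAgent(nn.Module):
    """Four-layer policy network: feature matrix in, node-selection probabilities out."""
    def __init__(self, num_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),  # layer 1: feature extraction
            nn.ReLU(),
            nn.Linear(hidden, hidden),        # layer 2: hidden representation
            nn.ReLU(),
            nn.Linear(hidden, 1),             # layer 3: per-node score
        )

    def forward(self, node_features: torch.Tensor) -> torch.Tensor:
        # node_features: (num_nodes, num_features) feature matrix of the substrate network.
        scores = self.net(node_features).squeeze(-1)   # (num_nodes,)
        return torch.softmax(scores, dim=-1)           # layer 4: selection probabilities

def update(agent, optimizer, node_features, chosen_node, reward):
    # REINFORCE-style policy-gradient step; the reward (e.g., revenue-cost ratio of an
    # accepted VNR) is an assumption made here for illustration.
    probs = agent(node_features)
    loss = -torch.log(probs[chosen_node]) * reward
    optimizer.zero_grad()
    loss.backward()      # gradient back-propagation through the policy network
    optimizer.step()

if __name__ == "__main__":
    features = torch.rand(10, 4)                       # 10 substrate nodes, 4 features each
    agent = NodeEmbeddingAgent(num_features=4)
    opt = torch.optim.Adam(agent.parameters(), lr=1e-3)
    node = torch.distributions.Categorical(agent(features)).sample()
    update(agent, opt, features, node, reward=1.0)     # dummy reward for illustration
```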