{"title":"基于增强拉格朗日的深度强化学习电动汽车充电调度方法","authors":"Lun Yang , Guibin Chen , Xiaoyu Cao","doi":"10.1016/j.apenergy.2024.124706","DOIUrl":null,"url":null,"abstract":"<div><div>The adoption of electric vehicles (EVs) is increasingly recognized as a promising solution to decarbonization, thereby large scales of EVs are integrated into transportation and power systems in recent years. The transportation and power systems' operation states largely influence EVs' patterns, introducing uncertainties into EVs' driving patterns and energy demand. Such uncertainties make it a challenge to optimize the operations of charging stations, which provide both charging and electric grid services such as demand responses. To handle this dilemma, this paper models the chargers' operation decisions as a constrained Markov decision process (CMDP). By synergistically combining the augmented Lagrangian method and soft actor-critic algorithm, a novel safe off-policy reinforcement learning (RL) approach is proposed in this paper to solve the CMDP. The actor-network is updated in a policy gradient manner with the Lagrangian value function. A double-critics network is adopted to estimate the action-value function to avoid overestimation bias synchronously. The proposed algorithm does not require a strong convexity guarantee of examined problems and is sample efficient. Comprehensive numerical experiments with real-world electricity prices demonstrate that our proposed algorithm can achieve high solution optimality and constraint compliance.</div></div>","PeriodicalId":246,"journal":{"name":"Applied Energy","volume":"378 ","pages":"Article 124706"},"PeriodicalIF":10.1000,"publicationDate":"2024-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A deep reinforcement learning-based charging scheduling approach with augmented Lagrangian for electric vehicles\",\"authors\":\"Lun Yang , Guibin Chen , Xiaoyu Cao\",\"doi\":\"10.1016/j.apenergy.2024.124706\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The adoption of electric vehicles (EVs) is increasingly recognized as a promising solution to decarbonization, thereby large scales of EVs are integrated into transportation and power systems in recent years. The transportation and power systems' operation states largely influence EVs' patterns, introducing uncertainties into EVs' driving patterns and energy demand. Such uncertainties make it a challenge to optimize the operations of charging stations, which provide both charging and electric grid services such as demand responses. To handle this dilemma, this paper models the chargers' operation decisions as a constrained Markov decision process (CMDP). By synergistically combining the augmented Lagrangian method and soft actor-critic algorithm, a novel safe off-policy reinforcement learning (RL) approach is proposed in this paper to solve the CMDP. The actor-network is updated in a policy gradient manner with the Lagrangian value function. A double-critics network is adopted to estimate the action-value function to avoid overestimation bias synchronously. The proposed algorithm does not require a strong convexity guarantee of examined problems and is sample efficient. 
Comprehensive numerical experiments with real-world electricity prices demonstrate that our proposed algorithm can achieve high solution optimality and constraint compliance.</div></div>\",\"PeriodicalId\":246,\"journal\":{\"name\":\"Applied Energy\",\"volume\":\"378 \",\"pages\":\"Article 124706\"},\"PeriodicalIF\":10.1000,\"publicationDate\":\"2024-11-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Energy\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0306261924020890\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENERGY & FUELS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Energy","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306261924020890","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENERGY & FUELS","Score":null,"Total":0}
The adoption of electric vehicles (EVs) is increasingly recognized as a promising path to decarbonization, and large numbers of EVs have been integrated into transportation and power systems in recent years. The operating states of the transportation and power systems strongly influence EV behavior, introducing uncertainties into EVs' driving patterns and energy demand. These uncertainties make it challenging to optimize the operation of charging stations, which provide both charging and grid services such as demand response. To address this challenge, this paper models the chargers' operating decisions as a constrained Markov decision process (CMDP). By combining the augmented Lagrangian method with the soft actor-critic algorithm, a novel safe off-policy reinforcement learning (RL) approach is proposed to solve the CMDP. The actor network is updated via policy gradients on the Lagrangian value function, and a double-critic network is adopted to estimate the action-value function while avoiding overestimation bias. The proposed algorithm does not require a strong convexity guarantee for the problems examined and is sample-efficient. Comprehensive numerical experiments with real-world electricity prices demonstrate that the proposed algorithm achieves high solution optimality and constraint compliance.
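The abstract names the key algorithmic ingredients, a CMDP formulation, an augmented-Lagrangian penalty folded into the soft actor-critic (SAC) actor loss, and pessimistic double critics, but gives no implementation detail. The PyTorch sketch below illustrates one plausible form of the actor update under those ingredients; the network sizes, cost limit `d`, penalty weight `rho`, and entropy temperature `alpha` are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of an augmented-Lagrangian safe-SAC actor step.
# Hyperparameters and architectures are assumed for illustration.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )
    def forward(self, x):
        return self.net(x)

obs_dim, act_dim = 8, 2
actor = MLP(obs_dim, 2 * act_dim)   # outputs action mean and log-std
q1 = MLP(obs_dim + act_dim, 1)      # double reward critics: taking
q2 = MLP(obs_dim + act_dim, 1)      # min(q1, q2) curbs overestimation
qc = MLP(obs_dim + act_dim, 1)      # cost critic for the constraint

lam = torch.tensor(1.0)             # Lagrange multiplier (held fixed here)
rho, d, alpha = 10.0, 0.0, 0.2      # penalty weight, cost limit, entropy temp
opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

def actor_loss(obs):
    mean, log_std = actor(obs).chunk(2, dim=-1)
    dist = torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())
    u = dist.rsample()                      # reparameterized sample
    a = torch.tanh(u)                       # squash to bounded actions
    logp = (dist.log_prob(u) - torch.log(1 - a.pow(2) + 1e-6)).sum(-1)
    sa = torch.cat([obs, a], dim=-1)
    q = torch.min(q1(sa), q2(sa)).squeeze(-1)   # pessimistic reward value
    c = qc(sa).squeeze(-1)                      # estimated constraint cost
    viol = (c - d).clamp(min=0.0)
    # Lagrangian value: entropy-regularized return, minus the multiplier
    # term, plus a quadratic augmentation on constraint violations.
    return (alpha * logp - q + lam * (c - d) + 0.5 * rho * viol.pow(2)).mean()

obs = torch.randn(64, obs_dim)              # dummy batch of states
loss = actor_loss(obs)
opt.zero_grad(); loss.backward(); opt.step()   # updates actor params only
```

In a complete algorithm the critics would be trained on Bellman targets from a replay buffer, and the multiplier `lam` would be raised by dual ascent whenever the estimated cost exceeds the limit, e.g. `lam = max(0.0, lam + rho * (mean_cost - d))`.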
About the journal:
Applied Energy serves as a platform for sharing innovations, research, development, and demonstrations in energy conversion, conservation, and sustainable energy systems. The journal covers topics such as optimal energy resource use, environmental pollutant mitigation, and energy process analysis. It welcomes original papers, review articles, technical notes, and letters to the editor. Authors are encouraged to submit manuscripts that bridge the gap between research, development, and implementation. The journal addresses a wide spectrum of topics, including fossil and renewable energy technologies, energy economics, and environmental impacts. Applied Energy also explores modeling and forecasting, conservation strategies, and the social and economic implications of energy policies, including climate change mitigation. It is complemented by the open-access journal Advances in Applied Energy.