Efficient Deep Reinforcement Learning-Based Coordination for Unlocking Electric Vehicle Decarbonization Potential in Coupled Transportation and Power Networks

IEEE Transactions on Power Systems · Vol. 40, No. 4, pp. 2943-2954 · Pub Date: 2025-01-10 · DOI: 10.1109/TPWRS.2025.3528000 · IF 7.2 · CAS Tier 1 (Engineering & Technology) · JCR Q1 (Engineering, Electrical & Electronic)
Quan Yuan;Ximu Liu;Mingxuan Mao
Citations: 0

Abstract

As part of the global decarbonization agenda, the electrification of the transport sector through the large-scale integration of electric vehicles (EVs) is a key initiative. However, the decarbonization potential of EVs cannot be exploited without appropriate incentives and coordination. Deep reinforcement learning (DRL) provides a well-suited model-free, data-driven framework for coordinating EV charging decisions, but its real-world application under multiple uncertainties remains challenging because existing approaches suffer from limited interaction efficiency between the agent and the environment. This paper therefore proposes a novel DRL-based coordination method that employs a pre-trained edge-conditioned convolutional network and a deep belief network as a surrogate training environment to speed up the interaction, combined with a learning acceleration mechanism that enhances exploration. The method is implemented in a coupled transportation and power network (CTPN). The agent learns the optimal charging price, composed of an energy price and a carbon obligation price, to incentivize low-carbon EV coordination. Case studies on a real-world-scale CTPN demonstrate the effectiveness of the proposed method in reducing operational cost and global carbon emissions, and show that it outperforms state-of-the-art DRL methods in computational efficiency and generalization ability.
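The two ideas in the abstract — a cheap pre-trained surrogate standing in for the expensive CTPN simulation during training, and an agent searching over a two-part charging price (energy plus carbon obligation) — can be illustrated with a minimal toy sketch. Everything below is an illustrative assumption: the quadratic surrogate response, the price grids, and the ε-greedy tabular search are stand-ins, not the paper's actual ECC/DBN surrogate or its DRL algorithm.

```python
import random

class SurrogateEnv:
    """Stand-in for a pre-trained surrogate of the CTPN environment.

    In the paper the surrogate is a learned network approximating the
    coupled network's response; here a fixed quadratic cost around a
    hidden 'best coordinating' price plays that role.
    """
    def __init__(self, ideal_energy_price=0.30, ideal_carbon_price=0.05):
        self.ideal = (ideal_energy_price, ideal_carbon_price)

    def step(self, energy_price, carbon_price):
        # Reward peaks (at 0) when the posted two-part price matches the
        # hidden ideal; otherwise it is the negative system cost.
        e0, c0 = self.ideal
        cost = (energy_price - e0) ** 2 + (carbon_price - c0) ** 2
        return -cost

def epsilon_greedy_search(env, episodes=2000, epsilon=0.2, seed=0):
    """Toy tabular ε-greedy agent over a discretized price grid."""
    rng = random.Random(seed)
    energy_grid = [round(0.10 + 0.05 * i, 2) for i in range(9)]  # $/kWh
    carbon_grid = [round(0.01 + 0.02 * i, 2) for i in range(6)]  # $/kg CO2
    q, counts = {}, {}  # (energy, carbon) -> running-average reward
    for _ in range(episodes):
        if rng.random() < epsilon or not q:
            a = (rng.choice(energy_grid), rng.choice(carbon_grid))
        else:
            a = max(q, key=q.get)  # exploit best price found so far
        r = env.step(*a)
        counts[a] = counts.get(a, 0) + 1
        q[a] = q.get(a, 0.0) + (r - q.get(a, 0.0)) / counts[a]
    return max(q, key=q.get)

best = epsilon_greedy_search(SurrogateEnv())
print(best)  # close to the hidden ideal price (0.30, 0.05)
```

Because the surrogate is cheap to query, the agent can afford thousands of interactions — the same efficiency argument the paper makes for training against a learned surrogate instead of the full CTPN simulation.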
Source journal: IEEE Transactions on Power Systems (Engineering: Electrical & Electronic)
CiteScore: 15.80
Self-citation rate: 7.60%
Articles per year: 696
Review time: 3 months
Journal scope: IEEE Transactions on Power Systems covers the education, analysis, operation, planning, and economics of electric generation, transmission, and distribution systems for general industrial, commercial, public, and domestic consumption, including the interaction with multi-energy carriers. Its focus is the power system from a systems viewpoint rather than individual components. The scope comprises five key areas, each with several technical topics: (1) Power Engineering Education, (2) Power System Analysis, Computing, and Economics, (3) Power System Dynamic Performance, (4) Power System Operations, and (5) Power System Planning and Implementation.