Distributed and Multi-Agent Reinforcement Learning Framework for Optimal Electric Vehicle Charging Scheduling

IF 3.0 · CAS Region 4 (Engineering & Technology) · JCR Q3 (ENERGY & FUELS) · Energies · Pub Date: 2024-07-26 · DOI: 10.3390/en17153694
Christos D. Korkas, Christos Tsaknakis, Athanasios Ch. Kapoutsis, Elias B. Kosmatopoulos
Citations: 0

Abstract

The increasing number of electric vehicles (EVs) necessitates the installation of more charging stations. Managing these grid-connected charging stations leads to a multi-objective optimal control problem in which station profitability, user preferences, grid requirements, and stability should all be optimized. However, determining the optimal EV charging/discharging schedule is challenging, since the controller must exploit fluctuations in electricity prices, available renewable resources, and the stored energy of other vehicles, while coping with the uncertainty of EV arrival/departure times. In addition, the growing number of connected vehicles results in complex state and action vectors, making the problem difficult for centralized, single-agent controllers to handle. In this paper, we propose a novel Multi-Agent and distributed Reinforcement Learning (MARL) framework that tackles these challenges, producing controllers that achieve high performance under diverse conditions. In the proposed distributed framework, each charging spot makes its own charging/discharging decisions toward a cumulative cost reduction without sharing any private information, such as a vehicle's arrival/departure time or its state of charge, addressing both cost minimization and user satisfaction. The framework significantly improves the scalability and sample efficiency of the underlying Deep Deterministic Policy Gradient (DDPG) algorithm. Extensive numerical studies and simulations demonstrate the efficacy of the proposed approach compared with Rule-Based Controllers (RBCs) and well-established, state-of-the-art centralized Reinforcement Learning (RL) algorithms, offering improvements of up to 25% in reducing energy cost and 20% in increasing user satisfaction, respectively.
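To make the decentralized decision structure concrete, the following is a minimal, hypothetical sketch (not code from the paper): each charging spot runs its own agent that maps a purely local observation — the current electricity price, the plugged-in vehicle's state of charge (SoC), and its time until departure — to a continuous charge/discharge rate, as a trained DDPG-style actor would. The hand-written policy rule inside `act` merely stands in for a learned actor network; all class, function, and parameter names here are illustrative assumptions, and no spot ever reads another spot's private data.

```python
class ChargingSpotAgent:
    """One agent per charging spot; observations and decisions are strictly local."""

    def __init__(self, max_rate_kw=7.0):
        self.max_rate_kw = max_rate_kw

    def act(self, price, soc, hours_to_departure):
        """Toy deterministic policy standing in for a trained DDPG actor:
        charge harder when the price is low or departure is near; discharge
        (vehicle-to-grid) when the price peaks and the battery is nearly full."""
        urgency = (1.0 - soc) / max(hours_to_departure, 0.25)
        if price > 0.30 and soc > 0.8 and urgency < 0.2:
            action = -0.5  # sell stored energy back to the grid at peak price
        else:
            action = max(0.0, min(1.0, urgency - price))
        return action * self.max_rate_kw  # kW; negative means discharging


def step_station(agents, observations, price, dt_hours=0.25):
    """Advance every spot one control interval; return the station's energy cost
    for that interval (discharging at a high price yields negative cost)."""
    total_cost = 0.0
    for agent, obs in zip(agents, observations):
        rate_kw = agent.act(price, obs["soc"], obs["hours_to_departure"])
        energy_kwh = rate_kw * dt_hours
        obs["soc"] = max(0.0, min(1.0, obs["soc"] + energy_kwh / obs["capacity_kwh"]))
        total_cost += energy_kwh * price
    return total_cost


agents = [ChargingSpotAgent() for _ in range(3)]
obs = [
    {"soc": 0.2, "hours_to_departure": 1.0, "capacity_kwh": 40.0},  # urgent, low SoC
    {"soc": 0.9, "hours_to_departure": 8.0, "capacity_kwh": 40.0},  # full, long dwell
    {"soc": 0.5, "hours_to_departure": 4.0, "capacity_kwh": 40.0},  # no urgency
]
cost = step_station(agents, obs, price=0.35)  # peak-price interval
```

In this peak-price interval the urgent, low-SoC vehicle charges, the nearly full long-dwell vehicle discharges to the grid, and the third idles, so the station's net interval cost can even go negative — the kind of price-fluctuation exploitation the abstract describes, achieved without any spot sharing its vehicle's arrival/departure time or SoC.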
Source journal: Energies (ENERGY & FUELS)
CiteScore: 6.20
Self-citation rate: 21.90%
Articles per year: 8045
Time to review: 1.9 months

About the journal: Energies (ISSN 1996-1073) is an open access journal of related scientific research, technology development and policy and management studies. It publishes reviews, regular research papers, and communications. Our aim is to encourage scientists to publish their experimental and theoretical results in as much detail as possible. There is no restriction on the length of the papers. The full experimental details must be provided so that the results can be reproduced.