Electrical vehicle grid integration for demand response in distribution networks using reinforcement learning

IF 1.9 · CAS Tier 4 (Engineering & Technology) · JCR Q3 (Engineering, Electrical & Electronic) · IET Electrical Systems in Transportation · Pub Date: 2021-06-11 · DOI: 10.1049/els2.12030
Fayiz Alfaverh, Mouloud Denaï, Yichuang Sun
Open access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/els2.12030
Citations: 5

Abstract

Most utilities across the world already have demand response (DR) programs in place to incentivise consumers to reduce or shift their electricity consumption from peak periods to off-peak hours, usually in response to financial incentives. With the increasing electrification of vehicles, emerging technologies such as vehicle-to-grid (V2G) and vehicle-to-home (V2H) have the potential to offer a broad range of benefits and services to achieve more effective management of electricity demand. In this way, electric vehicles (EVs) become distributed energy storage resources and can conceivably, in conjunction with other electricity storage solutions, contribute to DR and provide additional capacity to the grid when needed. Here, an effective DR approach for V2G and V2H energy management using Reinforcement Learning (RL) is proposed. Q-learning, an RL strategy based on a reward mechanism, is used to make optimal decisions to charge or delay the charging of the EV battery pack and/or dispatch the stored electricity back to the grid without compromising driving needs. Simulations are presented to demonstrate how the proposed DR strategy can effectively manage the charging/discharging schedule of the EV battery and how V2H and V2G can help smooth the household load profile, minimise electricity bills and maximise revenue.
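As a rough illustration of the kind of Q-learning controller the abstract describes, the sketch below trains a toy tabular agent that decides each hour whether to charge, idle, or discharge (V2G/V2H) an EV battery against a daily price profile. All names, the state/action definitions, the price curve, and the reward shape are assumptions made for illustration only; the paper's actual formulation may differ substantially.

```python
# Illustrative sketch only: a toy tabular Q-learning agent for EV
# charge/idle/discharge scheduling against a daily price profile.
# States, actions, prices and rewards are assumptions, not the
# paper's actual formulation.
import random

random.seed(0)

HOURS = 24
SOC_LEVELS = 11                # battery state of charge, 0..10 (0% .. 100%)
ACTIONS = [-1, 0, 1]           # discharge (V2G/V2H), idle, charge

# Toy electricity price profile (higher during the evening peak).
price = [0.10] * 7 + [0.20] * 10 + [0.35] * 5 + [0.15] * 2

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
Q = {}  # Q[(hour, soc)] -> list of action values

def q(s):
    return Q.setdefault(s, [0.0, 0.0, 0.0])

def step(hour, soc, a):
    """Apply an action; return next state and reward (revenue minus cost)."""
    new_soc = min(SOC_LEVELS - 1, max(0, soc + ACTIONS[a]))
    # Pay the price to charge; earn it back when discharging to grid/home.
    reward = -ACTIONS[a] * price[hour]
    # Penalise an empty battery at an assumed morning departure hour,
    # so the schedule does not compromise driving needs.
    if hour == 6 and new_soc < 8:
        reward -= 5.0
    return (hour + 1) % HOURS, new_soc, reward

for episode in range(3000):
    hour, soc = 0, 5
    for _ in range(HOURS):
        s = (hour, soc)
        # Epsilon-greedy action selection.
        if random.random() < EPS:
            a = random.randrange(3)
        else:
            a = max(range(3), key=lambda i: q(s)[i])
        nh, nsoc, r = step(hour, soc, a)
        # Standard Q-learning update.
        q(s)[a] += ALPHA * (r + GAMMA * max(q((nh, nsoc))) - q(s)[a])
        hour, soc = nh, nsoc

# Greedy hourly policy at mid SOC: charge when cheap, discharge at peak.
policy = {h: ACTIONS[max(range(3), key=lambda i: q((h, 5))[i])]
          for h in range(HOURS)}
print(policy)
```

With the toy price curve and penalty above, the learned greedy policy tends to charge during cheap overnight hours (to survive the departure-hour penalty) and discharge during the expensive evening block, which is the qualitative behaviour the abstract attributes to its DR strategy.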


© 2021 The Authors. IET Electrical Systems in Transportation published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology. This is an open access article under the terms of the Creative Commons Attribution License, https://creativecommons.org/licenses/by/4.0/
Source journal metrics:
- CiteScore: 5.80
- Self-citation rate: 4.30%
- Articles published: 18
- Review time: 29 weeks
Latest articles from this journal:
- E-Gear Functionality Based on Mechanical Relays in Permanent Magnet Synchronous Machines
- Dynamic Distribution of Rail Potential with Regional Insulation Alteration in Multi-Train Urban Rail Transit
- Autonomous Energy Management Strategy in the Intermediate Circuit of an Electric Hybrid Drive with a Supercapacitor
- Parameter Optimization of Balise Circuit Based on Fusion of BNN and Genetic Algorithm
- Energy Management Strategy Based on Model Predictive Control-Differential Evolution for Hybrid Energy Storage System in Electric Vehicles