Comparison of Deep Reinforcement Learning Techniques with Gradient based approach in Cooperative Control of Wind Farm

K. N. Pujari, Vivek Srivastava, S. Miriyala, K. Mitra
{"title":"Comparison of Deep Reinforcement Learning Techniques with Gradient based approach in Cooperative Control of Wind Farm","authors":"K. N. Pujari, Vivek Srivastava, S. Miriyala, K. Mitra","doi":"10.1109/ICC54714.2021.9703186","DOIUrl":null,"url":null,"abstract":"The control settings of a turbines play a major role in increasing the energy production from a wind farm. The nonlinear interactions of wake between the turbines make optimal control of wind farm a challenging task. Therefore, it's hard to find the proper model based method to optimize the control settings. In the recent years, Reinforcement Learning (RL) has been emerging as a promising method for wind farm control. However, its efficacy is not evaluated when compared with nonlinear control strategies. In this study, yaw misalignment is used as control parameter to deflect the wakes and increase the power production from a 4×4 wind farm. A model-free Deep Deterministic Policy Gradient (DDPG) method and model-based iterative Linear Quadratic Regulator (iLQR) based Reinforcement Learning Techniques are utilized to optimize the yaw misalignments. To prove the efficiency of RL techniques, the results of DDPG and iLQR are compared with a nonlinear cooperative control strategy, Maximum Power Point Tracking solved through gradient based optimization approach.","PeriodicalId":382373,"journal":{"name":"2021 Seventh Indian Control Conference (ICC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Seventh Indian Control Conference (ICC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICC54714.2021.9703186","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The control settings of turbines play a major role in increasing the energy production of a wind farm. The nonlinear wake interactions between turbines make optimal control of a wind farm a challenging task, and it is therefore difficult to find a suitable model-based method to optimize the control settings. In recent years, Reinforcement Learning (RL) has emerged as a promising method for wind farm control; however, its efficacy has not been evaluated against nonlinear control strategies. In this study, yaw misalignment is used as the control parameter to deflect the wakes and increase the power production of a 4×4 wind farm. A model-free Deep Deterministic Policy Gradient (DDPG) method and a model-based iterative Linear Quadratic Regulator (iLQR) reinforcement learning technique are used to optimize the yaw misalignments. To demonstrate the efficiency of the RL techniques, the results of DDPG and iLQR are compared with a nonlinear cooperative control strategy, Maximum Power Point Tracking, solved through a gradient-based optimization approach.
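As a rough illustration of the gradient-based baseline, the sketch below maximizes total farm power over the turbines' yaw angles using finite-difference gradient ascent on a simplified Jensen-type wake model. The layout (a single column of four turbines rather than the paper's 4×4 grid), the wake-deflection and power-loss coefficients, and the helper names `farm_power` and `optimize_yaw` are illustrative assumptions; the abstract does not specify the paper's actual wake model or optimizer settings.

```python
import numpy as np

# A minimal sketch of the gradient-based yaw-optimization baseline.
# Layout, wake model, and coefficients are illustrative assumptions,
# not the configuration used in the paper.

D = 126.0                       # rotor diameter [m] (assumed)
K_WAKE = 0.05                   # wake expansion coefficient (assumed)
SPACING = 5.0 * D               # streamwise turbine spacing [m] (assumed)
U_INF = 8.0                     # free-stream wind speed [m/s] (assumed)
X_POS = np.arange(4) * SPACING  # a single column of 4 aligned turbines
Y_POS = np.zeros(4)


def farm_power(yaw_deg):
    """Total farm power (arbitrary units) for a vector of yaw angles [deg].

    Jensen-type wake deficit, a crude linear wake-deflection term, and a
    cos^1.88 power-loss factor for the yawed turbine itself.
    """
    yaw = np.radians(yaw_deg)
    u = np.full(4, U_INF)
    for i in range(4):                 # upstream turbine
        for j in range(i + 1, 4):      # downstream turbine
            dx = X_POS[j] - X_POS[i]
            deflection = 0.3 * np.sin(yaw[i]) * dx      # wake-centre offset
            r_wake = D / 2 + K_WAKE * dx                # wake radius at dx
            dy = abs(Y_POS[j] - (Y_POS[i] + deflection))
            if dy < r_wake:                             # rotor inside the wake
                deficit = (2.0 / 3.0) * (D / (D + 2.0 * K_WAKE * dx)) ** 2
                overlap = 1.0 - dy / r_wake             # crude overlap factor
                u[j] *= 1.0 - deficit * overlap
    return float(np.sum(u ** 3 * np.cos(yaw) ** 1.88))


def optimize_yaw(n_iter=300, lr=0.1, eps=1e-3):
    """Projected gradient ascent on total power via central differences."""
    yaw = np.full(4, 5.0)  # small initial misalignment breaks the symmetry at zero yaw
    for _ in range(n_iter):
        grad = np.zeros(4)
        for i in range(4):
            step = np.zeros(4)
            step[i] = eps
            grad[i] = (farm_power(yaw + step) - farm_power(yaw - step)) / (2 * eps)
        yaw = np.clip(yaw + lr * grad, -30.0, 30.0)  # keep within +/- 30 deg
    return yaw


if __name__ == "__main__":
    baseline = farm_power(np.zeros(4))
    yaw_opt = optimize_yaw()
    print("optimized yaw [deg]:", np.round(yaw_opt, 1))
    print("power gain: %.1f%%" % (100.0 * (farm_power(yaw_opt) / baseline - 1.0)))
```

In the study itself, the DDPG agent optimizes the yaw settings without an explicit wake model, while iLQR works with a model that is iteratively linearized around the current trajectory; the sketch above stands in only for the gradient-based Maximum Power Point Tracking baseline.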