Longyan Wang , Qiang Dong , Yanxia Fu , Bowen Zhang , Meng Chen , Junhang Xie , Jian Xu , Zhaohui Luo
Journal: Control Engineering Practice (Q1, Automation & Control Systems)
DOI: 10.1016/j.conengprac.2024.106124
Publication date: 2024-10-15
Effectiveness of cooperative yaw control based on reinforcement learning for in-line multiple wind turbines
Wind farm wake interactions are critical determinants of overall power generation efficiency. To address these challenges, coordinated yaw control of turbines has emerged as a highly effective strategy. While conventional approaches have been widely adopted, the application of contemporary machine learning techniques, specifically reinforcement learning (RL), holds great promise for optimizing wind farm control performance. Considering the scarcity of comparative analyses of yaw control approaches, this study implements and evaluates classical greedy, optimization-based, and RL policies for in-line multiple wind turbines under various wind scenarios using an experimentally validated analytical wake model. The results unambiguously establish the superiority of RL over greedy control, particularly below rated wind speeds, as RL optimizes yaw trajectories to maximize total power capture. Furthermore, the RL policy operates without being hampered by iterative modeling errors, leading to higher cumulative power generation than the optimization-based control scheme during the control process. At lower wind speeds (5 m/s), it achieves a remarkable 32.63% improvement over the optimized strategy. As the wind speed increases, the advantages of RL control gradually diminish. Consequently, the model-free adaptation offered by RL control substantially bolsters robustness across a spectrum of changing wind scenarios, facilitating seamless transitions between wake steering and alignment in response to evolving wake physics. This analysis underscores the significant advantages of data-driven RL for wind farm yaw control compared to traditional methods. Its adaptive nature enables the optimization of total power production across a range of diverse operating regimes, all without the need for an explicit model representation.
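The core trade-off the abstract describes — yawing an upstream turbine sacrifices some of its own power to deflect its wake away from a downstream machine — can be illustrated with a minimal sketch. This is not the authors' validated wake model; it assumes a simplified Jensen-type velocity deficit, a crude linear wake-deflection term, and a cosine-cubed yaw power loss, with all parameter values (induction factor, wake expansion rate, spacing) chosen purely for illustration:

```python
import numpy as np

def wake_deficit(yaw_deg, x=500.0, D=126.0, k=0.05):
    """Velocity deficit seen by the downstream rotor (Jensen-type, assumed).

    Yawing the upstream turbine laterally deflects its wake; here a crude
    linear deflection model reduces the wake/rotor overlap fraction.
    """
    a = 0.3  # axial induction factor (assumed)
    deficit = 2 * a * (D / (D + 2 * k * x)) ** 2
    deflection = 0.3 * np.radians(yaw_deg) * x   # lateral wake-centre shift (assumed)
    overlap = max(0.0, 1.0 - abs(deflection) / D)  # fraction of rotor still waked
    return deficit * overlap

def farm_power(yaw_deg, U=8.0):
    """Normalized total power proxy (P ∝ U^3) for two in-line turbines."""
    p_up = np.cos(np.radians(yaw_deg)) ** 3       # upstream loss from yaw misalignment
    u_down = U * (1.0 - wake_deficit(yaw_deg))    # waked inflow at downstream rotor
    p_down = (u_down / U) ** 3
    return p_up + p_down

# Greedy control: each turbine faces the wind (zero yaw).
# Coordinated control: sweep the upstream yaw to maximize farm total.
yaws = np.arange(0, 31)
best = int(max(yaws, key=farm_power))
print(f"greedy (0 deg):  {farm_power(0):.3f}")
print(f"best yaw {best} deg: {farm_power(best):.3f}")
```

Under these assumed parameters, a nonzero upstream yaw raises the two-turbine total above the greedy baseline, which is the effect the greedy/optimization/RL policies in the study each exploit to a different degree.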
Journal introduction:
Control Engineering Practice strives to meet the needs of industrial practitioners and industrially related academics and researchers. It publishes papers which illustrate the direct application of control theory and its supporting tools in all possible areas of automation. As a result, the journal only contains papers which can be considered to have made significant contributions to the application of advanced control techniques. It is normally expected that practical results should be included, but where simulation only studies are available, it is necessary to demonstrate that the simulation model is representative of a genuine application. Strictly theoretical papers will find a more appropriate home in Control Engineering Practice's sister publication, Automatica. It is also expected that papers are innovative with respect to the state of the art and are sufficiently detailed for a reader to be able to duplicate the main results of the paper (supplementary material, including datasets, tables, code and any relevant interactive material can be made available and downloaded from the website). The benefits of the presented methods must be made very clear and the new techniques must be compared and contrasted with results obtained using existing methods. Moreover, a thorough analysis of failures that may happen in the design process and implementation can also be part of the paper.
The scope of Control Engineering Practice matches the activities of IFAC.
Papers demonstrating the contribution of automation and control to improving the performance, quality, productivity, sustainability, resource and energy efficiency, and manageability of systems and processes for the benefit of mankind, and that are relevant to industrial practitioners, are most welcome.