Reinforcement Learning Control Algorithm for a PV-Battery-System Providing Frequency Containment Reserve Power
Niklas Ebell, F. Heinrich, Jonas Schlund, M. Pruckner
2018 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), October 2018. DOI: 10.1109/SmartGridComm.2018.8587480
The number of rooftop photovoltaic systems with battery energy storage installed on residential buildings is increasing. Controlling the power flows of volatile and unpredictable renewable energy sources in such a system is challenging. Therefore, in this paper we present an algorithm based on Reinforcement Learning that controls the power flows of a residential household with a battery energy storage system and a photovoltaic system, using a neural network as function approximator. In a nondeterministic environment, choosing the optimal sequence of actions is complex. By training a Reinforcement Learning algorithm, these complex patterns can be learned. The task of the energy storage is to reduce the energy fed into the electric grid as well as to improve power system stability by providing frequency containment reserve power to the transmission system operator. Our model includes one year of profiles of the grid frequency, photovoltaic power generation, and the electric load of two different households. The first household is used to train the algorithm and to adjust the weights of the neural network that estimates the state-action values. The second household is used to test the algorithm on unseen data. To evaluate the behavior of the Reinforcement Learning algorithm, the results are compared to a simulation of a rule-based control. After 300 episodes of training, the algorithm is able to reduce the energy consumption from the grid by up to 7.8% compared to the rule-based control system managing the system's power flows.
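To make the described approach concrete, the following is a minimal sketch of Q-learning with a small neural network as the state-action value approximator, in the spirit of the abstract. The toy environment, the state/action/reward design, the network size, and all hyperparameters (except the 300 training episodes mentioned above) are illustrative assumptions and not the authors' implementation; the frequency containment reserve obligation is omitted for brevity.

```python
# Minimal sketch: NN-based Q-learning for a PV-battery household controller.
# All environment details below are assumptions made for illustration only.
import numpy as np
import torch
import torch.nn as nn

class ToyPVBatteryEnv:
    """Toy household: state = (PV power, load, battery SoC, frequency deviation)."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.soc = 0.5  # battery state of charge in [0, 1]

    def reset(self):
        self.soc = 0.5
        return self._obs()

    def _obs(self):
        self._pv = self.rng.uniform(0.0, 1.0)    # kW, synthetic PV generation
        self._load = self.rng.uniform(0.1, 0.8)  # kW, synthetic household load
        self._df = self.rng.normal(0.0, 0.02)    # Hz, synthetic frequency deviation
        return np.array([self._pv, self._load, self.soc, self._df], dtype=np.float32)

    def step(self, action):
        # Discrete battery power setpoints in kW: discharge / idle / charge (assumption)
        p_batt = [-0.5, 0.0, 0.5][action]
        self.soc = float(np.clip(self.soc + 0.1 * p_batt, 0.0, 1.0))
        # Power exchanged with the grid: load minus PV plus battery charging power
        p_grid = self._load - self._pv + p_batt
        # Penalise both grid consumption and feed-in; keep some SoC headroom
        reward = -abs(p_grid) - 0.1 * abs(self.soc - 0.5)
        return self._obs(), reward, False

class QNet(nn.Module):
    """Small feed-forward network estimating state-action values."""
    def __init__(self, n_state=4, n_action=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_state, 32), nn.ReLU(),
                                 nn.Linear(32, n_action))
    def forward(self, x):
        return self.net(x)

env, qnet = ToyPVBatteryEnv(), QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.1                 # discount factor and exploration rate (assumptions)

for episode in range(300):             # 300 training episodes, as reported in the abstract
    state = env.reset()
    for t in range(96):                # e.g. one day at 15-minute resolution (assumption)
        s = torch.tensor(state).unsqueeze(0)
        # Epsilon-greedy action selection
        if np.random.rand() < eps:
            action = np.random.randint(3)
        else:
            action = int(qnet(s).argmax(dim=1).item())
        next_state, reward, _ = env.step(action)
        s_next = torch.tensor(next_state).unsqueeze(0)
        # One-step temporal-difference target and squared-error update
        with torch.no_grad():
            target = reward + gamma * qnet(s_next).max(dim=1).values
        pred = qnet(s)[0, action]
        loss = (pred - target.squeeze()) ** 2
        opt.zero_grad(); loss.backward(); opt.step()
        state = next_state
```

In this sketch, training on data from one household and evaluating the greedy policy on a second, unseen household would mirror the train/test split described above; a rule-based baseline (e.g. charge on PV surplus, discharge on deficit) would serve as the comparison.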