The integration of large-scale distributed renewable energy resources has heightened source–load uncertainty. As energy prosumers, microgrids urgently need stronger real-time regulation of controllable resources in uncertain environments, making rapid real-time decision-making a critical issue. This paper proposes a tailored twin delayed deep deterministic policy gradient (TD3) reinforcement learning algorithm that explicitly accounts for source–load uncertainty. First, guided by expert experience, Gaussian process regression with a radial basis function (RBF) covariance kernel was applied to historical source and load data; the kernel hyperparameters were adaptively tuned by maximum likelihood estimation to generate the expected curves of demand and wind–solar power generation, along with their 95% confidence regions, which were treated as representative uncertainty scenarios. Second, the traditional scheduling model was formulated as a Markov decision process (MDP), transforming it into a deep reinforcement learning (DRL) environment; to minimize the total operating cost of the microgrid, the tailored TD3 algorithm was applied to make rapid intraday scheduling decisions. Finally, simulations were conducted on real historical data from a region in Zhejiang Province, China, to verify the efficacy of the proposed method. The results demonstrate the algorithm's potential for economical microgrid scheduling.
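As a rough illustration of the first stage, the sketch below fits a Gaussian process with an RBF covariance to a synthetic stand-in for historical data using scikit-learn, whose default fitting maximizes the log marginal likelihood of the kernel hyperparameters; the data, kernel settings, and variable names are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of the GPR scenario-generation step, assuming
# scikit-learn; the synthetic "historical" series is a placeholder.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical stand-in for historical data: 7 days of hourly wind output.
hours = np.tile(np.arange(24), 7)
X = hours.reshape(-1, 1).astype(float)
y = np.sin(2 * np.pi * hours / 24) + 0.1 * np.random.default_rng(0).standard_normal(hours.size)

# RBF covariance plus a noise term; hyperparameters are tuned by
# maximizing the log marginal likelihood (scikit-learn's default).
kernel = 1.0 * RBF(length_scale=3.0) + WhiteKernel(noise_level=0.05)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Expected curve and 95% confidence region over one day (mean +/- 1.96 sigma).
X_day = np.arange(24, dtype=float).reshape(-1, 1)
mean, std = gpr.predict(X_day, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std
```

For the second stage, the skeleton below shows one plausible way to cast an intraday dispatch model as an MDP-style environment; the state, action, and cost definitions are simplified placeholders rather than the paper's formulation, which is not detailed in the abstract.

```python
# A hypothetical MDP wrapper for a scheduling model: one episode is one
# day of intraday dispatch at hourly steps. All quantities are illustrative.
import numpy as np

class MicrogridEnv:
    def __init__(self, load, renewables, price, soc_max=1.0):
        self.load, self.renewables, self.price = load, renewables, price
        self.soc_max = soc_max

    def reset(self):
        self.t, self.soc = 0, 0.5 * self.soc_max
        return self._state()

    def _state(self):
        # State: time step, storage level, current demand, current renewable output.
        return np.array([self.t, self.soc, self.load[self.t], self.renewables[self.t]])

    def step(self, action):
        # Continuous action in [-1, 1]: battery charge (+) / discharge (-) rate.
        charge = float(np.clip(action, -1.0, 1.0)) * 0.2 * self.soc_max
        new_soc = float(np.clip(self.soc + charge, 0.0, self.soc_max))
        applied = new_soc - self.soc  # energy actually charged (+) or discharged (-)
        self.soc = new_soc
        # Grid import covers residual demand; reward is the negative operating cost.
        grid = max(self.load[self.t] - self.renewables[self.t] + applied, 0.0)
        reward = -self.price[self.t] * grid
        self.t += 1
        done = self.t >= len(self.load)
        return (self._state() if not done else None), reward, done
```

Given a Gymnasium-compatible wrapper of such an environment, an off-the-shelf TD3 implementation (e.g., stable-baselines3's TD3) could serve as a baseline agent, though the paper's tailored TD3 variant presumably differs from the stock algorithm.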
