The integration of renewable power sources and smart-grid technologies into modern power systems has made grid stability prediction increasingly challenging. Traditional stability prediction systems are limited by dynamic power usage, unavoidable fluctuations in renewable power supply, and an inability to track real-time changes. They also suffer from transparency issues that prevent grid operators from understanding how predictions are formed. Transparent models play a crucial role in building trust and enabling informed decisions, whereas non-interpretable models obscure the reasoning behind critical decisions. This research proposes a transparent and smart Explainable Artificial Intelligence (XAI) model to address these issues. The Local Interpretable Model-agnostic Explanations (LIME) framework is integrated to improve the interpretability of model predictions, thereby increasing the transparency of the decision-making process. In this study, grid stability is represented by the dataset label 'stabf', which classifies each energy-load instance as stable or unstable, rather than by simulating the physical grid or modeling its dynamics. Combining Machine Learning (ML) with XAI techniques enables the proposed model to operate more efficiently and transparently, resulting in improved predictive performance and accurate real-time predictions. Simulation results demonstrate the outstanding performance of the proposed model, which achieves an accuracy of 99.92% and a miss rate of 0.08%, outperforming previously published approaches.
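As a minimal sketch (not the authors' implementation), the core LIME idea referenced above — explaining one prediction of a black-box classifier by fitting a locally weighted linear surrogate around that instance — can be illustrated with synthetic data standing in for the 'stabf' features and labels; all feature definitions, sample sizes, and kernel settings here are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for grid-stability features; the binary label
# mimics the 'stabf' stable/unstable classification.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)

# Black-box classifier whose individual predictions we want to explain.
model = RandomForestClassifier(random_state=0).fit(X, y)

def lime_style_weights(model, x, n_samples=500, kernel_width=0.75):
    """Explain the model's prediction at x via a locally weighted
    linear surrogate, in the spirit of LIME."""
    # Perturb around the instance of interest.
    Z = x + rng.normal(scale=0.3, size=(n_samples, x.size))
    # Query the black box on the perturbed samples.
    probs = model.predict_proba(Z)[:, 1]
    # Weight samples by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Fit a weighted linear surrogate; its coefficients are the
    # per-feature local effects on the predicted stability.
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=w)
    return surrogate.coef_

coefs = lime_style_weights(model, X[0])
print(coefs)  # one local importance weight per feature
```

In practice the `lime` package's `LimeTabularExplainer` performs this perturb-predict-weight-fit loop (plus feature discretization) for the trained model, yielding the per-feature contributions that make each stable/unstable prediction transparent to grid operators.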
