
Latest publications in Energy and AI

Interpretable transformer based intra-day solar forecasting with spatiotemporal satellite and numerical weather prediction inputs
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-15 DOI: 10.1016/j.egyai.2025.100667
Shanlin Chen , Tao Jing , Mengying Li , Hiu Hung Lee , Ming Chun Lam , Siqi Bu
With the increasing installed capacity of solar energy systems, solar forecasting is a vital and cost-effective means of mitigating solar variability and supporting system operation. The temporal fusion transformer (TFT) has shown great potential in forecasting both solar irradiance and power output from multiple one-dimensional time series. Since spatiotemporal information is more beneficial for solar forecasting, this work applies a simple yet effective way to incorporate two-dimensional spatiotemporal satellite- and numerical weather prediction (NWP)-based inputs into the TFT for more skillful irradiance forecasts. Results show that spatiotemporal inputs with simple spatial averaging generally lead to better irradiance forecasts than single-location data, with 4-h-ahead skill scores of up to 12.24%. The benefit of spatiotemporal information is more pronounced under cloudy conditions, whereas it may introduce some misrepresentation when the sky is clear or less cloudy. NWP data can generally improve intra-day solar forecasting performance with the TFT, and the interpretability analysis shows that NWP irradiance products have a larger impact (up to 22.07%) on the overall results. Although NWP products are beneficial for intra-day solar forecasting when integrated with satellite-based data, their influence may differ across sky conditions and forecast horizons. These impacts should be properly analyzed and interpreted in practical applications to ensure the reliability of energy systems. This work on improved irradiance forecasts with the TFT, together with the interpretability analysis, is crucial for the operation of solar energy systems.
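Two ingredients of the reported gains can be sketched numerically: collapsing each two-dimensional satellite or NWP field into a single series by simple spatial averaging, and scoring forecasts with the standard skill score SS = 1 - RMSE_model/RMSE_ref. The grid sizes, irradiance values, and RMSE numbers below are illustrative stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for satellite-derived irradiance fields:
# T time steps, each an H x W grid centred on the target site.
T, H, W = 48, 8, 8
fields = rng.uniform(200.0, 800.0, size=(T, H, W))

# Single-location input: the centre pixel only.
single_location = fields[:, H // 2, W // 2]

# Spatiotemporal input via simple spatial averaging:
# collapse each 2-D field to one value per time step.
spatial_mean = fields.mean(axis=(1, 2))

def skill_score(rmse_model: float, rmse_ref: float) -> float:
    """Forecast skill relative to a reference: positive = improvement."""
    return 1.0 - rmse_model / rmse_ref

# Illustrative RMSEs chosen to give roughly the reported ~12% skill.
ss = skill_score(rmse_model=87.8, rmse_ref=100.0)
```

With these made-up RMSEs the skill score evaluates to 0.122, i.e. about a 12% improvement over the reference model.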
Citations: 0
Cooperative multi-agent reinforcement learning for grid-aware EV charging management with cross-site redirection
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-13 DOI: 10.1016/j.egyai.2025.100664
Yuchen Yang , Rui Tang
As electric vehicles (EVs) become more widespread, managing charging demand is critical for grid stability and efficient resource allocation. However, prior work often optimises single sites or treats demand as exogenous, without explicitly modelling grid capacity constraints or the aggregate effects of redirection, and it rarely offers an integrated capacity-aware view that links domestic and public charging under system-level coordination. This study, therefore, develops a multi-agent reinforcement learning framework with cross-site redirection to optimise EV charging operations. The study focuses on the Perth and Kinross region of Scotland in the UK, using real public charging records from the council database as well as domestic charging statistics from the UK Department for Transport. Charging sites are modelled as cooperative agents using the Multi-Agent Deep Deterministic Policy Gradient algorithm, trained in a spatiotemporal environment that integrates public charging data and simulated domestic demand. We contribute by unifying public and home charging demand in one learning environment, introducing a cross-site actor trained under centralised training and decentralised execution, and incorporating a delayed replay buffer with short-horizon forecasts so redirection aligns with future congestion. The learned policy reduces peak-hour load standard deviation by up to 40 % and lowers cumulative threshold violations by 37 % compared to the baseline. Distinct weekday and weekend strategies emerge, enabling adaptive coordination under varying demand patterns. The study provides interpretable control for EV networks, balancing peak demand and service quality across sites while addressing system-level coordination.
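The cross-site redirection idea can be illustrated with a toy greedy heuristic (not the authors' MADDPG policy): load above a site's capacity is shifted, within the same hour, to sites that still have headroom, so capacity violations can only decrease. The demand values and the 90 kW per-site limit are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hourly demand (kW) at three charging sites over one day.
demand = rng.uniform(20.0, 120.0, size=(3, 24))
capacity = np.array([90.0, 90.0, 90.0])  # per-site grid limit (kW)

def redirect(demand, capacity):
    """Greedy cross-site redirection: in each hour, move load above a
    site's capacity to other sites, filling each only up to its own
    capacity so redirection never creates a new violation."""
    d = demand.copy()
    for t in range(d.shape[1]):
        for s in range(d.shape[0]):
            excess = d[s, t] - capacity[s]
            if excess <= 0:
                continue
            # Fill the least-loaded sites first.
            for target in np.argsort(d[:, t]):
                if target == s:
                    continue
                room = max(capacity[target] - d[target, t], 0.0)
                move = min(excess, room)
                d[s, t] -= move
                d[target, t] += move
                excess -= move
                if excess <= 0:
                    break
    return d

balanced = redirect(demand, capacity)
violations_before = int((demand > capacity[:, None]).sum())
violations_after = int((balanced > capacity[:, None] + 1e-9).sum())
```

Redirection conserves the total load in each hour; it only reshapes where that load is served, which is what allows the system-level threshold-violation count to drop.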
Citations: 0
Edge-cloud artificial intelligence digital twin thermal modeling for rotating sintered core heat pipes
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-13 DOI: 10.1016/j.egyai.2025.100666
Jialan Liu , Chi Ma , Wenhui Zhou , Mingming Li , Jialong He , Giovanni Totis , Chunlei Hua , Liang Wang , Gangwei Cui , Ruijuan Xue , Zhi Tan , Jun Yang , Kuo Liu , Yuansheng Zhou , Jianqiang Zhou , Shengbin Weng
The sintered core heat pipe is widely used in precision thermal control and aerospace systems due to its high heat transfer performance. However, conventional computational fluid dynamics approaches for predicting its thermal behavior are computationally expensive and inflexible under varying operating conditions, while experimental methods are time-consuming and costly. To address these challenges, this study presents a digital twin-based predictive thermal modeling framework for sintered core heat pipes under rotational conditions, implemented within an edge-cloud artificial intelligence architecture. A fully parameterized physical model is developed on the Simulink platform using the SIMSCAPE Fluids module, enabling dynamic simulations of phase transitions and temperature responses. Validation against experimental data shows prediction errors within ±5 %. Simulation and experimental datasets are integrated to train three models (a physics-informed neural network, a Transformer, and a light gradient boosting machine), evaluated under steady and transient thermal conditions. The physics-informed neural network achieves the lowest mean absolute error of 0.85 °C in high thermal inertia cases, while the Transformer attains the best steady-state accuracy with a root mean square error of 0.58 °C and inference latency of 150 ms after Turing Tensor R-Engine deployment. Docker-based deployment enables real-time edge inference, with the Transformer achieving an optimal balance of accuracy, memory footprint (36 MB), and response speed. The proposed framework offers a practical and scalable approach for accurate thermal prediction in advanced thermal management applications.
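As rough intuition for what the parameterized physical model predicts, a heat-pipe section's temperature response can be sketched as a single lumped thermal RC node integrated by forward Euler. The resistance, capacitance, and heat load below are illustrative values, not parameters from the study:

```python
import numpy as np

# Lumped-parameter sketch: C * dT/dt = Q_in - (T - T_amb) / R.
# All parameter values are illustrative assumptions.
C = 50.0       # thermal capacitance, J/K
R = 0.5        # thermal resistance to ambient, K/W
T_amb = 25.0   # ambient temperature, degC
Q_in = 40.0    # applied heat load, W

dt, steps = 0.1, 20000  # 2000 s total, far beyond the R*C = 25 s time constant
T = np.empty(steps)
T[0] = T_amb
for k in range(1, steps):
    # Forward Euler update of the energy balance.
    T[k] = T[k - 1] + dt / C * (Q_in - (T[k - 1] - T_amb) / R)

# Analytic steady state for comparison: T_amb + Q_in * R = 45 degC.
T_steady = T_amb + Q_in * R
```

A digital twin of this kind trades the spatial detail of CFD for millisecond-scale evaluation, which is what makes edge deployment and real-time monitoring feasible.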
Citations: 0
Thermal conductivity prediction of BN composites based on Enhanced Co-ANN combined with physical attention mechanisms
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-10 DOI: 10.1016/j.egyai.2025.100663
Chen Liu , Tong Li , Yuandong Guo , Guiping Lin
This study proposes an enhanced co-training artificial neural network (Enhanced Co-ANN) guided by physical attention mechanisms for predicting the effective thermal conductivity of high filler volume fraction polymer/BN composites. The thermal conductivity of BN composites is influenced by multiple factors, including filler morphology, interface thermal resistance, and experimental noise. The model tackles these complex physical processes by integrating a customized multi-head physical attention layer to emphasize key features, along with a physics-constrained loss function to ensure prediction consistency. A collaborative training strategy based on curriculum learning and consistency discrimination is adopted. The model is optimized using 3174 labeled experimental samples and 50,000 unlabeled samples generated from physical models. Weight distribution is systematically designed across three core levels: model architecture, loss function, and training strategy. This approach differs from traditional parameter weight adjustments in that it emphasizes key features, especially the volume fraction (vf), and balances different learning objectives through a physically guided mechanism and dynamic training strategies. Attention visualization indicates that the model adaptively focuses on the filler volume fraction and the interface effect, verifying the effectiveness of the physically guided design. Six groups of samples with different filler volume fractions were fabricated for testing and validation. The model achieves high accuracy (R² = 0.982; MAE = 0.045 W/m K) and is highly consistent with physical laws. This network framework provides a method with broad application prospects for the rapid calculation, screening, and efficient design of high-performance polymer/BN thermal conductive materials.
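A physics-constrained loss of the kind described can be sketched by penalizing predictions that fall outside the classical series/parallel (Wiener) bounds on composite conductivity. The constituent conductivities and penalty weight below are illustrative assumptions, not the paper's actual loss function:

```python
import numpy as np

def wiener_bounds(vf, k_m, k_f):
    """Series (lower) and parallel (upper) bounds on the effective
    conductivity of a two-phase composite with filler fraction vf."""
    lower = 1.0 / ((1.0 - vf) / k_m + vf / k_f)
    upper = (1.0 - vf) * k_m + vf * k_f
    return lower, upper

def physics_constrained_loss(pred, target, vf, k_m=0.2, k_f=30.0, lam=10.0):
    """Data MSE plus a quadratic penalty for out-of-bounds predictions.
    k_m, k_f (W/m K) and lam are illustrative, not fitted values."""
    lower, upper = wiener_bounds(vf, k_m, k_f)
    mse = np.mean((pred - target) ** 2)
    violation = np.maximum(lower - pred, 0.0) + np.maximum(pred - upper, 0.0)
    return mse + lam * np.mean(violation ** 2)

vf = np.array([0.3, 0.5, 0.7])
target = np.array([1.5, 3.0, 6.0])       # toy conductivities within bounds
loss_ok = physics_constrained_loss(target, target, vf)        # feasible
loss_bad = physics_constrained_loss(target * 50, target, vf)  # out of bounds
```

A perfect, physically feasible prediction incurs zero loss, while predictions violating the bounds are penalized even if they happen to fit noisy data, which is the consistency role such a term plays during training.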
Citations: 0
MI-VMD-BSCNet: A lightweight spatiotemporal modeling framework for tube temperature prediction in coal-fired boiler water-walls
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-02 DOI: 10.1016/j.egyai.2025.100656
Shiming Xu , Zhiqian He , Xianyong Peng , Zhi Wang , Yuhan Wang , Youxiang Zhang , Guangmin Yang , Mingcheng Zhang , Jinsha Luo , Yunxi Guo , Huan Liu , Meixi Zhao , Junqin Yan , Fan Geng , Huaichun Zhou
Over-temperature of boiler water-walls causes tube leakage in ultra-supercritical coal-fired power units. This is a critical issue intensified by frequent load fluctuations from flexible peak shaving, essential for carbon peaking and neutrality goals. Existing computational fluid dynamics methods impose a high computational load, limiting their suitability for real-time monitoring, while data-driven approaches cannot accurately capture dynamic temperature changes under rapid load ramps. This study proposes a lightweight spatiotemporal modeling framework, referred to as mutual information-variational mode decomposition-broad skip connection network (MI-VMD-BSCNet), for high-accuracy and low-cost water-wall temperature prediction, advancing artificial intelligence applications in energy systems. A feature selection method reduces the input complexity, advanced signal processing enhances the temporal feature representation, and a sliding window approach captures the underlying local and global patterns. BSCNet leverages a parallel feature extraction architecture and skip connections to optimize feature fusion and gradient flow, improving the modeling of dynamic temperature variations. The model is trained and evaluated using historical data from a 1000 MW ultra-supercritical coal-fired boiler. The obtained results demonstrate that it outperforms baseline convolutional neural network and broad learning system models, achieving a mean absolute error, mean absolute percentage error, and root mean square error of 1.493 °C, 0.395%, and 1.964 °C, respectively. This framework enables early warning of over-temperature failures, which supports sustainable boiler operation and provides a high potential for theoretical and engineering advancements.
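The sliding-window step can be sketched directly: each training input is a fixed-length slice of the temperature history, and the target is the value a chosen horizon ahead of the window. The series, window length, and horizon below are illustrative, not the plant's data:

```python
import numpy as np

def sliding_windows(series, window, horizon):
    """Build supervised (X, y) pairs from a 1-D series: each row of X is
    `window` consecutive samples; the matching y is the value `horizon`
    steps after the window ends."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

# Toy tube-temperature trend (degC) standing in for plant history.
temps = np.linspace(380.0, 420.0, 100)
X, y = sliding_windows(temps, window=10, horizon=1)
```

Each window exposes local dynamics (ramps, oscillations) to the network, while stacking many overlapping windows lets it also learn the global trend, which is the role the abstract assigns to this step.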
Citations: 0
An explainable artificial intelligence feature selection framework for transparent, trustworthy, and cost-efficient energy forecasting
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-01 DOI: 10.1016/j.egyai.2025.100648
Leonard Kost, Sarah K. Lier, Michael H. Breitner
Accurate forecasting of renewable power generation is crucial for grid stability and cost efficiency. Feature selection in AI-based forecasting remains challenging due to high data acquisition cost, lack of transparency, and limited user control. We introduce a transparent and cost-sensitive feature selection framework for renewable power forecasting that leverages Explainable Artificial Intelligence (XAI). We integrate SHapley Additive exPlanations (SHAP) and Explain Like I’m 5 (ELI5) to identify dominant and redundant features. This approach enables systematic dataset reduction without compromising model performance. Our case study, based on Photovoltaic (PV) generation data, evaluates the approach across four experimental setups. Experimental results indicate that our XAI-based feature selection reduces the dominance index from 0.37 to 0.17, maintains high predictive accuracy (R2 = 0.94, drop < 0.04), and lowers data acquisition costs. Furthermore, eliminating dominant features improves robustness to noise and reduces performance variance by a factor of three compared to the baseline scenario. The developed framework enhances interpretability, supports human-in-the-loop decision-making, and introduces a cost-sensitive objective function for feature selection. By combining transparency, robustness, and efficiency, we contribute to the development and implementation of Trustworthy AI (TAI) applications in energy forecasting, providing a scalable solution for industrial deployment.
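One plausible reading of the dominance index (an assumption here, since the abstract does not define it) is the share of total feature importance held by the single strongest feature. The importance vectors below are constructed to mirror the reported drop from 0.37 to 0.17 and are not the study's data:

```python
import numpy as np

def dominance_index(importances):
    """Fraction of total absolute importance attributed to the single
    strongest feature. This definition is an assumption for illustration."""
    imp = np.abs(np.asarray(importances, dtype=float))
    return float(imp.max() / imp.sum())

# Hypothetical SHAP-style mean |value| per feature, before and after
# pruning a dominant input; chosen to reproduce the reported indices.
before = [0.37, 0.22, 0.18, 0.13, 0.10]
after = [0.17, 0.17, 0.16, 0.15, 0.13, 0.12, 0.10]

di_before = dominance_index(before)
di_after = dominance_index(after)
```

A lower dominance index means the forecast no longer hinges on one expensive or noisy input, which is how removing dominant features can improve robustness while cutting data acquisition cost.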
Citations: 0
Scenario generation via moments-informed normalizing flows for stochastic optimization of local energy markets
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-01 DOI: 10.1016/j.egyai.2025.100649
Xu Zhou, Vassilis M. Charitopoulos
Scenario generation is a critical step in stochastic programming for energy systems applications, where accurate representation of uncertainty directly impacts the decision quality. Normalizing flows (NFs), a class of invertible deep generative models, offer flexibility in learning complex distributions by maximizing the likelihood, but often suffer from limited accuracy in reproducing key statistical properties of real-world data. In this work we propose a moments-informed Normalizing Flows (MI-NF) framework, in which moment constraints are incorporated into the NF training process to improve the accuracy of scenario-based probabilistic forecasts. Furthermore, Gaussian Processes (GPs) are employed to adaptively determine the moment regularization weight. Case studies on the open-access dataset of the Global Energy Forecasting Competition 2014 demonstrate that scenarios generated by the MI-NF model achieve over 40% lower mean absolute error on the testing set. When applied within a stochastic programming framework for a local electricity–hydrogen market, the improved scenario accuracy leads to more cost-effective and robust operational decisions under uncertainty.
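The moments-informed idea can be sketched as a penalty on the mismatch of the first two moments between generated scenarios and historical data, added to the flow's likelihood objective during training. The distributions and weights below are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(7)

def moment_penalty(scenarios, target_mean, target_std, w=(1.0, 1.0)):
    """Squared mismatch of mean and standard deviation between generated
    scenarios and their empirical targets; in MI-NF-style training this
    term would be added to the negative log-likelihood. The weights w
    are illustrative (the paper tunes them adaptively via GPs)."""
    m_err = (scenarios.mean() - target_mean) ** 2
    s_err = (scenarios.std() - target_std) ** 2
    return w[0] * m_err + w[1] * s_err

# Toy stand-ins: historical wind power (MW) vs. imperfect NF samples.
historical = rng.normal(loc=3.2, scale=0.8, size=5000)
generated = rng.normal(loc=3.0, scale=1.1, size=5000)

penalty = moment_penalty(generated, historical.mean(), historical.std())
matched = moment_penalty(historical, historical.mean(), historical.std())
```

Scenarios with biased mean or inflated spread incur a positive penalty, steering training toward samples whose key statistics match the data, which is what makes the downstream stochastic program's decisions more reliable.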
Citations: 0
Large format battery SoC estimation: An ultrasonic sensing and deep transfer learning predictions for heterogeneity
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-01 DOI: 10.1016/j.egyai.2025.100662
Hamidreza Farhadi Tolie , Benjamin Reichmann , James Marco , Zahra Sharif Khodaei , Mona Faraji Niri
Accurate state of charge (SoC) estimation is vital for safe and efficient operation of lithium-ion batteries. Methods such as Coulomb counting and open-circuit voltage measurement face challenges related to drift and accuracy, especially in large-format cells with spatial gradients used in electric vehicles and grid storage. This study investigates ultrasonic sensing as a non-invasive, real-time technique for SoC estimation. It explores optimal sensor placement, using machine learning models to identify the best actuator–receiver paths based on signal quality, and pinpoints the maximum accuracy achievable for SoC estimation. Based on experimentally collected ultrasound signals transmitted between four sensors installed on a large-format pouch cell, a novel, customised deep learning framework built on convolutional neural networks is developed to process ultrasonic signals by transforming them into waveform images and leveraging transfer learning from strong pre-trained models. The results demonstrate that combining bidirectional signal transmission with a dynamic deep learning-based strategy for actuator and receiver selection significantly enhances the effectiveness of ultrasonic sensing compared to traditional data analysis and paves the way for robust and scalable SoC monitoring in large-format battery cells. Furthermore, preliminary pathways towards self-supervision are explored by examining the differentiability of ultrasonic signals with respect to SoC, offering a promising route to reduce reliance on conventional ground truths and enhance the scalability of ultrasound-based SoC estimation. The data and source code will be made available at https://github.com/hfarhaditolie/Ultrasonic-SoC.
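The "transformation to waveform images" step that feeds the CNN can be illustrated with a short-time FFT magnitude map — a common 1-D-signal-to-2-D-image transform. This is an assumed stand-in: the abstract does not specify the actual transform used, and the window and hop sizes below are arbitrary.

```python
import numpy as np

def waveform_to_image(signal, win=64, hop=32):
    """Convert a 1-D ultrasonic waveform into a 2-D magnitude
    'image' via a short-time FFT. A stand-in for the paper's
    (unspecified) waveform-to-image transformation."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * np.hanning(win)  # windowed slice
        frames.append(np.abs(np.fft.rfft(seg)))            # magnitude spectrum
    return np.stack(frames, axis=1)  # shape: (freq bins, time frames)

rng = np.random.default_rng(1)
ping = np.sin(2 * np.pi * 0.1 * np.arange(1024)) + 0.1 * rng.standard_normal(1024)
img = waveform_to_image(ping)
assert img.shape == (64 // 2 + 1, (1024 - 64) // 32 + 1)  # (33, 31)
```

The resulting 2-D array can then be treated like an image channel and passed to a pre-trained convolutional backbone for transfer learning, as the abstract describes.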
Citations: 0
MVLFLM: A parameter-efficient large language model framework for cross-domain multi-voltage load forecasting in smart grids
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-01 DOI: 10.1016/j.egyai.2025.100651
Guolong Liu , Yan Bai , Huan Zhao , Keen Wen , Xinlei Wang , Jinjin Gu , Yanli Liu , Gaoqi Liang , Junhua Zhao , Zhao Yang Dong
Modern smart grids face significant challenges in short-term load forecasting (STLF) due to increasing complexity across transmission, distribution, and consumer levels. While recent studies have explored large language models (LLMs) for load forecasting, existing methods are limited by computational overhead, voltage-level specificity, and inadequate cross-domain generalization. This paper introduces the Multi-Voltage Load Forecasting Large Model (MVLFLM), a unified Transformer-based framework that addresses multi-voltage STLF through parameter-efficient fine-tuning of a Llama 2-7B foundation model. Unlike previous LLM-based forecasting methods that focus on single voltage levels or require extensive retraining, MVLFLM employs selective layer freezing to preserve pre-trained knowledge while adapting only the parameters essential for load pattern recognition. Comprehensive evaluation across four real-world datasets spanning high (transmission), medium (distribution), and low (consumer) voltage levels demonstrates MVLFLM's superior performance, outperforming benchmark models. Most significantly, MVLFLM exhibits exceptional zero-shot generalization, with only 9.07% average performance degradation when applied to unseen grid entities, substantially outperforming existing methods. These results establish MVLFLM as a unified, computationally efficient solution for multi-voltage load forecasting that maintains forecasting accuracy while enabling seamless deployment across heterogeneous smart grid infrastructures.
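The selective layer freezing that makes the fine-tuning parameter-efficient can be sketched with a toy model: mark a small subset of layers as trainable and leave the rest frozen so their pre-trained weights are untouched. The layer names and parameter counts below are invented for illustration and do not reflect the actual Llama 2-7B configuration used by MVLFLM.

```python
def freeze_layers(layers, trainable):
    """Mark only the named layers as trainable; every other layer is
    frozen, i.e. its pre-trained weights will not receive gradients."""
    for name in layers:
        layers[name]["requires_grad"] = name in trainable
    return layers

# Hypothetical 8-block backbone plus a task head (counts are made up).
model = {f"block_{i}": {"params": 1_000_000, "requires_grad": True}
         for i in range(8)}
model["head"] = {"params": 50_000, "requires_grad": True}

# Adapt only the last block and the task head, as in selective freezing.
freeze_layers(model, trainable={"block_7", "head"})

n_trainable = sum(l["params"] for l in model.values() if l["requires_grad"])
n_total = sum(l["params"] for l in model.values())
assert n_trainable == 1_050_000 and n_total == 8_050_000
```

Only about 13% of the toy model's parameters are updated here; the same principle — a small trainable fraction of a large frozen backbone — is what keeps LLM fine-tuning computationally tractable.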
Citations: 0
Machine learning-guided optimization of high-performance porous composite membranes for alkaline water electrolysis
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-01 DOI: 10.1016/j.egyai.2025.100657
Xinyang Zhao , Zhen Geng , Sheng Guo , Hao Cai , Qihan Xia , Min Liu , Xuesong Zhang , Liming Jin , Cunman Zhang
The production of green hydrogen via alkaline water electrolysis necessitates porous composite membranes with high ionic conductivity and high bubble-point pressure. However, the mainstream preparation process for porous composite membranes involves many parameters, making this a complex high-dimensional optimization problem. Traditional trial-and-error experimentation is inefficient and often fails to explore the performance boundaries. In this study, an XGBoost-based machine learning model is developed and trained on laboratory-collected datasets, achieving satisfactory predictive performance. The model provides critical insights into the relationships between six manufacturing parameters and the two core performance parameters of the membrane. Subsequently, prediction over a coarse-grained grid and a reverse search are performed with the model to identify optimal parameter regions, followed by manual refinement through feature analysis. This integrated approach ultimately identifies three high-performance composite membrane candidates, which are experimentally validated. This work demonstrates a highly efficient and accurate machine learning-driven paradigm for the development of advanced porous composite membranes for alkaline water electrolysis.
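The coarse-grid prediction and reverse-search step can be sketched as follows: evaluate a trained surrogate over a coarse parameter grid, then keep only the points where both predicted targets clear a threshold. The `surrogate` function below is a toy stand-in for the trained XGBoost model (its functional form, the two-parameter slice, and the 0.9 threshold are all invented for illustration).

```python
import numpy as np

def surrogate(x, y):
    """Toy stand-in for the trained XGBoost surrogate: maps two (of
    six) normalized process parameters to the two membrane targets.
    The form is invented purely to illustrate the screening step."""
    conductivity = np.exp(-((x - 0.3) ** 2 + (y - 0.7) ** 2))
    bubble_point = np.exp(-((x - 0.4) ** 2 + (y - 0.6) ** 2))
    return conductivity, bubble_point

# 1. Predict both targets on a coarse grid over the parameter space.
grid = np.linspace(0.0, 1.0, 21)
X, Y = np.meshgrid(grid, grid)
cond, bp = surrogate(X, Y)

# 2. Reverse search: keep grid points where both predicted targets
#    exceed a performance threshold -- these candidate regions would
#    then go to manual refinement and experimental validation.
mask = (cond > 0.9) & (bp > 0.9)
candidates = np.column_stack([X[mask], Y[mask]])
assert len(candidates) > 0
```

Screening predictions rather than running experiments is what makes the grid affordable; only the handful of surviving candidate regions need wet-lab validation.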
Citations: 0