
Latest publications in Energy and AI

Digital twin-centered hybrid data-driven multi-stage deep learning framework for enhanced nuclear reactor power prediction
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-01 DOI: 10.1016/j.egyai.2024.100450
James Daniell, Kazuma Kobayashi, Ayodeji Alajo, Syed Bahauddin Alam
The accurate and efficient modeling of nuclear reactor transients is crucial for ensuring safe and optimal reactor operation. Traditional physics-based models, while valuable, can be computationally intensive and may not fully capture the complexities of real-world reactor behavior. This paper introduces a novel hybrid digital twin-focused multi-stage deep learning framework that addresses these limitations, offering a faster and more robust solution for predicting the final steady-state power of reactor transients. By leveraging a combination of feed-forward neural networks with both classification and regression stages, and training on a unique dataset that integrates real-world measurements of reactor power and controls state from the Missouri University of Science and Technology Reactor (MSTR) with noise-enhanced simulated data, our approach achieves remarkable accuracy (96% classification, 2.3% MAPE). The incorporation of simulated data with noise significantly improves the model’s generalization capabilities, mitigating the risk of overfitting. Designed as a digital twin supporting system, this framework integrates real-time, synchronized predictions of reactor state transitions, enabling dynamic operational monitoring and optimization. This innovative solution not only enables rapid and precise prediction of reactor behavior but also has the potential to revolutionize nuclear reactor operations, facilitating enhanced safety protocols, optimized performance, and streamlined decision-making processes. By aligning data-driven insights with the principles of digital twins, this work lays the groundwork for adaptable and scalable solutions for advanced reactors.
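As a concrete illustration of the classify-then-regress idea in this abstract, the sketch below chains a neural-network classifier (coarse power band) with a regressor (final steady-state power). It is a minimal, hypothetical example on synthetic data, not the authors' framework or the MSTR dataset; all feature names, bands, and values are assumptions.

```python
# Minimal sketch of a two-stage classify-then-regress pipeline (not the authors' code).
# Stage 1 classifies the transient into a coarse power band; stage 2 regresses the
# final steady-state power within that band. Data below is synthetic and illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))            # hypothetical transient features (power, control state, ...)
y_power = 200 * (X[:, 0] + X[:, 1] ** 2)  # hypothetical final steady-state power in kW
y_band = np.digitize(y_power, bins=[0.0, 200.0])   # coarse power-band label

X_tr, X_te, yb_tr, yb_te, yp_tr, yp_te = train_test_split(X, y_band, y_power, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X_tr, yb_tr)
band_tr = clf.predict(X_tr).reshape(-1, 1)
band_te = clf.predict(X_te).reshape(-1, 1)

# Stage 2 sees the original features plus the predicted band.
reg = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
reg.fit(np.hstack([X_tr, band_tr]), yp_tr)
pred = reg.predict(np.hstack([X_te, band_te]))

mape = np.mean(np.abs((pred - yp_te) / np.maximum(np.abs(yp_te), 1e-6))) * 100
print(f"classification accuracy: {clf.score(X_te, yb_te):.3f}, MAPE: {mape:.2f}%")
```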
Citations: 0
Ignition delay prediction for fuels with diverse molecular structures using transfer learning-based neural networks
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-01 DOI: 10.1016/j.egyai.2024.100467
Mo Yang, Dezhi Zhou
In this study, a transfer learning-based neural network approach to predict ignition delays for a variety of fuels is proposed to meet the demand for accurate combustion analysis. A comprehensive dataset of ignition delays was generated using a random sampling technique across different temperatures and pressures, focusing on hydrocarbon fuels with 1–4 carbon atoms. Two machine learning models, an artificial neural network and a graph convolutional network, are trained on this dataset, and their prediction performance was evaluated. A transfer learning framework was subsequently developed, enabling the models trained on smaller molecules (1–3 carbon atoms) to predict ignition delays for larger molecules (4 carbon atoms) with minimal additional data. The proposed framework demonstrated reliable and high prediction accuracy, achieving a high level of reliability for fuels with limited experimental measurements. This approach offers significant potential to streamline the prediction of ignition delays for novel fuels, reducing the dependence on resource-intensive experiments and complex simulations while contributing to the advancement of clean and efficient energy technologies.
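The following minimal sketch shows the general transfer-learning pattern the abstract describes: pretrain on abundant small-molecule data, then freeze the shared layers and fine-tune only the output head on a few samples for a larger fuel. The network, inputs, and data are illustrative assumptions, not the paper's model.

```python
# Illustrative transfer-learning sketch (not the paper's code): pretrain an MLP on
# ignition-delay data for small fuels, then freeze the feature layers and fine-tune
# only the head on a few samples for a larger fuel. All data here is synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n, shift=0.0):
    # hypothetical inputs: [1000/T, log10(p), equivalence ratio]; target: log10(ignition delay)
    x = torch.rand(n, 3)
    y = 1.5 * x[:, :1] - 0.3 * x[:, 1:2] + 0.1 * x[:, 2:3] + shift
    return x, y

backbone = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 1)
model = nn.Sequential(backbone, head)

def train(params, x, y, epochs, lr=1e-2):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

# 1) Pretrain on abundant small-molecule (C1-C3) data.
x_src, y_src = make_data(5000)
train(model.parameters(), x_src, y_src, epochs=300)

# 2) Freeze the backbone, fine-tune only the head on scarce C4 data.
for p in backbone.parameters():
    p.requires_grad = False
x_tgt, y_tgt = make_data(50, shift=0.4)
final = train(head.parameters(), x_tgt, y_tgt, epochs=200)
print(f"fine-tuned MSE on target fuel: {final:.4f}")
```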
Citations: 0
RL-ADN: A high-performance Deep Reinforcement Learning environment for optimal Energy Storage Systems dispatch in active distribution networks
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-01 DOI: 10.1016/j.egyai.2024.100457
Shengren Hou, Shuyi Gao, Weijie Xia, Edgar Mauricio Salazar Duque, Peter Palensky, Pedro P. Vergara
Deep Reinforcement Learning (DRL) presents a promising avenue for optimizing Energy Storage Systems (ESSs) dispatch in distribution networks. This paper introduces RL-ADN, an innovative open-source library specifically designed for solving the optimal ESS dispatch problem in active distribution networks. RL-ADN offers unparalleled flexibility in modeling distribution networks and ESSs, accommodating a wide range of research goals. A standout feature of RL-ADN is its data augmentation module, based on Gaussian Mixture Model and Copula (GMC) functions, which elevates the performance ceiling of DRL agents, achieving average performance improvements of 21.43%, 1.08%, and 2.76% when augmenting five-year, one-year, and three-month data, respectively. Additionally, RL-ADN incorporates the Tensor Power Flow solver, significantly reducing the computational burden of power flow calculations during training without sacrificing accuracy, maintaining voltage magnitude with an average error not exceeding 0.0001%. The effectiveness of RL-ADN is demonstrated using distribution networks of varying sizes, showing marked performance improvements in the adaptability of DRL algorithms for ESS dispatch tasks. Furthermore, RL-ADN achieves a tenfold increase in computational efficiency during training, making it highly suitable for large-scale network applications. The library sets a new benchmark in DRL-based ESS dispatch in distribution networks, and it is poised to advance DRL applications in distribution network operations significantly. RL-ADN is available at: https://github.com/ShengrenHou/RL-ADN and https://github.com/distributionnetworksTUDelft/RL-ADN.
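For readers unfamiliar with the data-augmentation idea, the sketch below fits a Gaussian mixture model to daily load profiles and samples synthetic ones. It is only a loose illustration: it omits the copula step of the paper's GMC module and does not use the RL-ADN API (see the linked repositories for the actual library); the profile data is invented.

```python
# Hedged sketch of GMM-based augmentation of daily load profiles, loosely inspired by the
# data-augmentation idea described above. It is NOT the RL-ADN API; the copula step of the
# paper's GMC approach is omitted, and the historical data below is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical historical data: 365 daily active-power profiles with 96 quarter-hour steps.
t = np.linspace(0, 2 * np.pi, 96)
historical = 1.0 + 0.5 * np.sin(t)[None, :] + 0.1 * rng.normal(size=(365, 96))

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(historical)

synthetic, _ = gmm.sample(1000)          # 1000 augmented daily profiles for DRL training
print(synthetic.shape)                   # (1000, 96)
```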
Citations: 0
DGL-STFA: Predicting lithium-ion battery health with dynamic graph learning and spatial–temporal fusion attention
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-01 DOI: 10.1016/j.egyai.2024.100462
Zheng Chen, Quan Qian
Accurately predicting the State of Health (SOH) of lithium-ion batteries is a critical challenge to ensure their reliability and safety in energy storage systems, such as electric vehicles and renewable energy grids. The intricate battery degradation process is influenced by evolving spatial and temporal interactions among health indicators. Existing methods often fail to capture the dynamic interactions between health indicators over time, resulting in limited predictive accuracy. To address these challenges, we propose a novel framework, Dynamic Graph Learning with Spatial–Temporal Fusion Attention (DGL-STFA), which transforms health indicator time-series data into time-evolving graph representations. The framework employs multi-scale convolutional neural networks to capture diverse temporal patterns, a self-attention mechanism to construct dynamic adjacency matrices that adapt over time, and a temporal attention mechanism to identify and prioritize key moments that influence battery degradation. This combination enables DGL-STFA to effectively model both dynamic spatial relationships and long-term temporal dependencies, enhancing SOH prediction accuracy. Extensive experiments were conducted on the NASA and CALCE battery datasets, comparing this framework with traditional time-series prediction methods and other graph-based prediction methods. The results demonstrate that our framework significantly improves prediction accuracy, with a mean absolute error more than 30% lower than other methods. Further analysis demonstrated the robustness of DGL-STFA across various battery life stages, including early, mid, and end-of-life phases. These results highlight the capability of DGL-STFA to accurately predict SOH, addressing critical challenges in advancing battery health monitoring for energy storage applications.
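A minimal sketch of the central mechanism, a time-varying adjacency matrix built with self-attention over health indicators, is given below. Shapes, layer sizes, and data are assumptions for illustration and do not reproduce the DGL-STFA architecture.

```python
# Minimal sketch (assumptions, not the DGL-STFA code): build a time-varying adjacency
# matrix over health indicators with scaled dot-product self-attention, then mix node
# features with it. Shapes: batch B, time T, nodes N (health indicators), features F.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicAdjacency(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.q = nn.Linear(in_dim, hidden_dim)
        self.k = nn.Linear(in_dim, hidden_dim)

    def forward(self, x):                      # x: (B, T, N, F)
        q, k = self.q(x), self.k(x)            # (B, T, N, H)
        scores = q @ k.transpose(-1, -2)       # (B, T, N, N)
        adj = F.softmax(scores / q.shape[-1] ** 0.5, dim=-1)
        return adj                             # one adjacency matrix per time step

B, T, N, Fdim = 8, 50, 6, 16
x = torch.randn(B, T, N, Fdim)                 # hypothetical per-cycle health-indicator features
adj = DynamicAdjacency(Fdim, 32)(x)
mixed = adj @ x                                # graph message passing with the learned adjacency
print(adj.shape, mixed.shape)                  # (8, 50, 6, 6) and (8, 50, 6, 16)
```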
Citations: 0
Probabilistic forecasting of renewable energy and electricity demand using Graph-based Denoising Diffusion Probabilistic Model
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-01 DOI: 10.1016/j.egyai.2024.100459
Amir Miraki, Pekka Parviainen, Reza Arghandeh
Renewable energy production and the balance between production and demand have become increasingly crucial in modern power systems, necessitating accurate forecasting. Traditional deterministic methods fail to capture the inherent uncertainties associated with intermittent renewable sources and fluctuating demand patterns. This paper proposes a novel denoising diffusion method for multivariate time series probabilistic forecasting that explicitly models the interdependencies between variables through graph modeling. Our framework employs a parallel feature extraction module that simultaneously captures temporal dynamics and spatial correlations, enabling improved forecasting accuracy. Through extensive evaluation on two real-world datasets focused on renewable energy and electricity demand, we demonstrate that our approach achieves state-of-the-art performance in probabilistic energy time series forecasting tasks. By explicitly modeling variable interdependencies and incorporating temporal information, our method provides reliable probabilistic forecasts, crucial for effective decision-making and resource allocation in the energy sector. Extensive experiments validate that our proposed method reduces the Continuous Ranked Probability Score (CRPS) by 2.1%–70.9%, Mean Absolute Error (MAE) by 4.4%–52.2%, and Root Mean Squared Error (RMSE) by 7.9%–53.4% over existing methods on two real-world datasets.
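To make the diffusion component concrete, the sketch below implements the standard DDPM training step (forward noising plus an epsilon-prediction loss) for flattened multivariate windows. It uses a plain MLP as a stand-in denoiser rather than the paper's graph-based network; all sizes and data are illustrative assumptions.

```python
# Sketch of the core denoising-diffusion training step (generic DDPM, not the paper's
# graph-based model): add noise to a multivariate series window at a random diffusion
# step and train a network to predict that noise. Everything here is illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
T_STEPS, N_VARS, WIN = 100, 4, 24                 # diffusion steps, variables, window length
betas = torch.linspace(1e-4, 0.02, T_STEPS)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)     # cumulative product of (1 - beta_t)

denoiser = nn.Sequential(                         # stand-in for the graph/temporal network
    nn.Linear(N_VARS * WIN + 1, 256), nn.ReLU(), nn.Linear(256, N_VARS * WIN)
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

x0 = torch.randn(64, N_VARS * WIN)                # hypothetical normalized demand/PV windows
for step in range(200):
    t = torch.randint(0, T_STEPS, (x0.shape[0],))
    a = alpha_bar[t].unsqueeze(1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise  # forward diffusion q(x_t | x_0)
    t_feat = (t.float() / T_STEPS).unsqueeze(1)   # crude timestep embedding
    pred = denoiser(torch.cat([x_t, t_feat], dim=1))
    loss = nn.functional.mse_loss(pred, noise)    # epsilon-prediction objective
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```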
Citations: 0
Prognostic machine learning models for thermophysical characteristics of nanodiamond-based nanolubricants for heat pump systems
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-01 DOI: 10.1016/j.egyai.2024.100453
Ammar M. Bahman, Emil Pradeep, Zafar Said, Prabhakar Sharma
Lubricants for compressor oil significantly enhance the energy efficiency and performance of heat pump (HP) systems. This study compares prognostic machine learning (ML) models designed to predict the thermal conductivity and viscosity of nanolubricants used in HP compressors. Nanodiamond (ND) nanoparticles were mixed in Polyolester (POE) oil at volume concentrations ranging from 0.05 to 0.5 vol.% and temperatures ranging from 10 to 100 °C. The data collected from the experimental research were used to build prognostic models using modern supervised ML techniques, including Gaussian process regression (GPR) and boosted regression tree (BRT). The GPR model demonstrated superior performance compared to the BRT model, achieving coefficient of correlation (R) values of 0.9996 and 0.9991 for thermal conductivity and viscosity, respectively. The reliability of the GPR and BRT models was further validated through comprehensive validation, sensitivity analysis, and extrapolation assessment using both empirical and unseen dataset references from the literature. When validated against an empirical correlation, the ML models exhibited a mean absolute error (MAE) of 0.17% for thermal conductivity and below 8% for viscosity. Additionally, when the GPR-based model was extended up to 120 °C, the parametric analysis confirmed the reliability and accuracy of thermal conductivity and viscosity within a relative error of 5%. Furthermore, in the extrapolation analysis, despite changes in oil grade and nanolubricant concentrations, the GPR-based model showed a maximum absolute error (AE) of 19% compared to non-trained experimental data. Overall, the developed ML models can aid in designing and optimizing ND/POE nanolubricants for HP applications, achieving desired performance parameters while remaining economically viable and reducing the need for time-consuming laboratory-based testing.
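The sketch below contrasts the two model families compared in this study, Gaussian process regression and a boosted regression tree, on synthetic temperature/concentration inputs. The data-generating formula is an assumption for illustration only, not the measured nanolubricant dataset.

```python
# Hedged sketch of the two model families compared above, fitted to synthetic data
# (the real study uses measured ND/POE nanolubricant properties): Gaussian process
# regression vs. a boosted regression tree for thermal conductivity as a function of
# temperature and nanoparticle volume concentration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
temp = rng.uniform(10, 100, 200)                   # temperature in degrees C
conc = rng.uniform(0.05, 0.5, 200)                 # nanoparticle concentration in vol.%
X = np.column_stack([temp, conc])
k_thermal = 0.14 + 0.0004 * temp + 0.05 * conc + 0.001 * rng.normal(size=200)  # synthetic W/(m K)

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True, alpha=1e-4)
brt = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)

for name, model in [("GPR", gpr), ("BRT", brt)]:
    r2 = cross_val_score(model, X, k_thermal, cv=5, scoring="r2").mean()
    print(f"{name}: mean 5-fold R^2 = {r2:.4f}")
```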
Citations: 0
Machine learning for battery quality classification and lifetime prediction using formation data
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-01 DOI: 10.1016/j.egyai.2024.100451
Jiayu Zou, Yingbo Gao, Moritz H. Frieges, Martin F. Börner, Achim Kampker, Weihan Li
Accurate classification of battery quality and prediction of battery lifetime before leaving the factory would bring economic and safety benefits. Here, we propose a data-driven approach with machine learning to classify the battery quality and predict the battery lifetime before usage only using formation data. We extract three classes of features from the raw formation data, considering the statistical aspects, differential analysis, and electrochemical characteristics. The correlation between over 100 extracted features and the battery lifetime is analysed based on the ageing mechanisms. Machine learning models are developed to classify battery quality and predict battery lifetime by features with a high correlation with battery ageing. The validation results show that the quality classification model achieved accuracies of 89.74% and 89.47% for the batteries aged at 25°C and 45°C, respectively. Moreover, the lifetime prediction model is able to predict the battery end-of-life with mean percentage errors of 6.50% and 5.45% for the batteries aged at 25°C and 45°C, respectively. This work highlights the potential of battery formation data from production lines in quality classification and lifetime prediction.
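The sketch below illustrates the overall workflow described here: extract simple statistical and differential features from formation-cycle voltage data, then train a quality classifier and a lifetime regressor. Feature definitions, labels, and data are synthetic assumptions, not the authors' pipeline or dataset.

```python
# Illustrative sketch of the overall workflow (not the authors' pipeline): extract simple
# statistical features from formation-cycle voltage curves, then train a quality classifier
# and a lifetime regressor. Feature names, thresholds, and data are synthetic assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cells, n_samples = 200, 500
voltage = 3.0 + 0.8 * np.linspace(0, 1, n_samples) + 0.02 * rng.normal(size=(n_cells, n_samples))

features = pd.DataFrame({
    "v_mean": voltage.mean(axis=1),                            # statistical feature
    "v_std": voltage.std(axis=1),                              # statistical feature
    "dv_max": np.abs(np.diff(voltage, axis=1)).max(axis=1),    # differential feature
})
lifetime = 800 - 3000 * features["v_std"] + 20 * rng.normal(size=n_cells)   # synthetic cycles to EOL
quality = (lifetime > lifetime.median()).astype(int)                        # synthetic quality label

clf_acc = cross_val_score(RandomForestClassifier(random_state=0), features, quality, cv=5).mean()
reg_r2 = cross_val_score(RandomForestRegressor(random_state=0), features, lifetime, cv=5, scoring="r2").mean()
print(f"quality accuracy: {clf_acc:.3f}, lifetime R^2: {reg_r2:.3f}")
```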
Citations: 0
Neural network potential-based molecular investigation of thermal decomposition mechanisms of ethylene and ammonia
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-01 DOI: 10.1016/j.egyai.2024.100454
Zhihao Xing, Rodolfo S.M. Freitas, Xi Jiang
This study developed neural network potentials (NNPs) specifically tailored for pure ethylene and ethylene-ammonia blended systems for the first time. The NNPs were trained on a dataset generated from density functional theory (DFT) calculations, combining the computational accuracy of DFT with a calculation speed comparable to reactive force field methods. The NNPs are employed in reactive molecular dynamics simulations to explore the thermal decomposition reaction mechanisms of ethylene and ammonia. The simulation results revealed that adding ammonia reduces the activation energy for ethylene decomposition, thereby accelerating ethylene consumption. Furthermore, the addition of ammonia uncovers a new reaction pathway for hydrogen radical consumption, which reduces the occurrence of H-abstraction reactions from ethylene by hydrogen radicals. The inhibition effect of ammonia addition on soot formation mainly acts in two aspects: on the one hand, ammonia decomposition products react with carbon-containing species, ultimately producing C1–N products, thereby decreasing the carbon numbers involved in soot formation. This significantly reduces the concentrations of C5–C9 molecules and key polycyclic aromatic hydrocarbons (PAHs) precursors like C2H2 and C3H3. On the other hand, ammonia promotes the ring-opening reactions of six-membered carbon rings at high-temperature conditions, thereby reducing the formation of PAHs precursors. The results show that with the addition of ammonia, six-membered carbon rings tend to convert into seven-membered carbon rings at lower temperatures, while at higher temperatures, they are more likely to transform into three- and five-membered carbon rings. These variations in the transformation of six-membered carbon rings may also affect soot formation. The insights gained from understanding these fundamental chemical reaction mechanisms can guide the development of ethylene-ammonia co-firing systems.
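As a highly simplified illustration of what fitting a neural network potential involves, the sketch below regresses reference energies from per-configuration descriptors with a small network. Descriptors, energies, and sizes are placeholders; a real NNP, as in this study, is trained on DFT data with proper atomic descriptors and force information.

```python
# Highly simplified sketch of fitting a neural network potential to reference energies
# (the actual study trains on DFT data with proper atomic descriptors; the descriptor and
# data here are synthetic placeholders, not a usable potential).
import torch
import torch.nn as nn

torch.manual_seed(0)
n_conf, n_desc = 1000, 32
descriptors = torch.randn(n_conf, n_desc)                  # stand-in for per-configuration descriptors
dft_energy = descriptors.pow(2).sum(dim=1, keepdim=True)   # stand-in for DFT total energies (eV)

nnp = nn.Sequential(nn.Linear(n_desc, 128), nn.Tanh(), nn.Linear(128, 128), nn.Tanh(), nn.Linear(128, 1))
opt = torch.optim.Adam(nnp.parameters(), lr=1e-3)

for epoch in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(nnp(descriptors), dft_energy)
    loss.backward()
    opt.step()

# Forces for reactive MD would come from the negative gradient of the predicted energy
# with respect to atomic positions (via the descriptors); omitted here for brevity.
print(f"energy-fit MSE: {loss.item():.4f}")
```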
Citations: 0
Modeling and optimization of renewable hydrogen systems: A systematic methodological review and machine learning integration
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-01 DOI: 10.1016/j.egyai.2024.100455
M.D. Mukelabai, E.R. Barbour, R.E. Blanchard
The renewable hydrogen economy is recognized as an integral solution for decarbonizing energy sectors. However, high costs have hindered widespread deployment. One promising way of reducing the costs is optimization. Optimization generally involves finding the configuration of the renewable generation and hydrogen system components that maximizes return on investment. Previous studies have incorporated many aspects into their optimizations, including technical parameters and different cost and socio-economic objective functions; however, there is no clear best-practice framework for model development. To address these gaps, this critical review examines the latest developments in renewable hydrogen microgrid models and summarizes best modeling practice. The findings show that advances in machine learning integration are improving solar electricity generation forecasting, hydrogen system simulations, and load profile development, particularly in data-scarce regions. Additionally, it is important to account for electrolyzer and fuel cell dynamics, rather than utilizing fixed performance values. This review also demonstrates that typical meteorological year datasets are better for modeling solar irradiation than first-principle calculations. The practicability of socio-economic objective functions is also assessed, proposing that the more comprehensive Levelized Value Addition (LVA) is best suited for inclusion in models. Best practices for creating load profiles in regions like the Global South are discussed, along with an evaluation of AI-based and traditional optimization methods and software tools. Finally, a new evidence-based multi-criteria decision-making framework, integrated with machine learning insights, is proposed to guide decision-makers in selecting optimal solutions based on multiple attributes, offering a more comprehensive and adaptive approach to renewable hydrogen system optimization.
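As a toy illustration of the multi-criteria decision-making idea mentioned at the end of the abstract, the sketch below ranks candidate system configurations with a simple weighted sum. Criteria, weights, and scores are invented for illustration; the review's proposed framework is considerably richer and integrates machine-learning insights.

```python
# Toy weighted-sum multi-criteria scoring (an assumption for illustration only).
# Criteria, weights, and alternative scores below are invented.
import numpy as np

criteria = ["levelized cost", "CO2 avoided", "reliability", "social value"]
weights = np.array([0.35, 0.25, 0.25, 0.15])           # must sum to 1
# rows: candidate system configurations; columns: normalized criterion scores in [0, 1]
scores = np.array([
    [0.6, 0.7, 0.8, 0.5],    # PV + electrolyzer + battery
    [0.8, 0.5, 0.6, 0.6],    # PV + electrolyzer only
    [0.5, 0.9, 0.9, 0.7],    # PV + wind + electrolyzer + storage
])
ranking = scores @ weights
best = int(np.argmax(ranking))
print(f"weighted scores: {np.round(ranking, 3)}, best option index: {best}")
```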
Citations: 0
Enhancing PV feed-in power forecasting through federated learning with differential privacy using LSTM and GRU
IF 9.6 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-23 DOI: 10.1016/j.egyai.2024.100452
Pascal Riedel, Kaouther Belkilani, Manfred Reichert, Gerd Heilscher, Reinhold von Schwerin
Given the inherent fluctuation of photovoltaic (PV) generation, accurately forecasting solar power output and grid feed-in is crucial for optimizing grid operations. Data-driven methods facilitate efficient supply and demand management in smart grids, but predicting solar power remains challenging due to weather dependence and data privacy restrictions. Traditional deep learning (DL) approaches require access to centralized training data, leading to security and privacy risks. To navigate these challenges, this study utilizes federated learning (FL) to forecast feed-in power for the low-voltage grid. We propose a bottom-up, privacy-preserving prediction method using differential privacy (DP) to enhance data privacy for energy analytics on the customer side. This study aims at proving the viability of an enhanced FL approach by employing three years of meter data from three residential PV systems installed in a southern city of Germany, incorporating irradiance weather data for accurate PV power generation predictions. For the experiments, the DL models long short-term memory (LSTM) and gated recurrent unit (GRU) are federated and integrated with DP. Consequently, federated LSTM and GRU models are compared with centralized and local baseline models using rolling 5-fold cross-validation to evaluate their respective performances. By leveraging advanced FL algorithms such as FedYogi and FedAdam, we propose a method that not only predicts sequential energy data with high accuracy, achieving an R2 of 97.68%, but also adheres to stringent privacy standards, offering a scalable solution for the challenges of smart grids analytics, thus clearly showing that the proposed approach is promising and worth being pursued further.
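The sketch below shows the basic federated pattern this abstract builds on: each client trains a small GRU forecaster locally, adds Gaussian noise to its shared weights as a simplified stand-in for differential privacy, and a server averages the updates (FedAvg). It is an assumption-laden illustration, not the paper's FedYogi/FedAdam setup or its meter dataset.

```python
# Hedged sketch of federated averaging with Gaussian-noise perturbation of client updates
# (a simplified stand-in for DP; not the paper's FedYogi/FedAdam setup). Clients train a
# small GRU forecaster on their own synthetic PV feed-in series and share only noisy weights.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

class GRUForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, seq_len, 1)
        out, _ = self.gru(x)
        return self.head(out[:, -1])      # predict the next feed-in value

def local_train(model, x, y, epochs=20):
    model = copy.deepcopy(model)          # train a local copy; the global model stays untouched
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    return model.state_dict()

global_model = GRUForecaster()
clients = [(torch.rand(64, 24, 1), torch.rand(64, 1)) for _ in range(3)]  # 3 hypothetical households

for rnd in range(5):                       # federated rounds
    states = [local_train(global_model, x, y) for x, y in clients]
    new_state = {}
    for key in global_model.state_dict():
        stacked = torch.stack([s[key].float() for s in states])
        noisy = stacked + 0.001 * torch.randn_like(stacked)   # Gaussian noise on shared updates
        new_state[key] = noisy.mean(dim=0)                    # FedAvg aggregation
    global_model.load_state_dict(new_state)
print("finished 5 federated rounds")
```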
Citations: 0