
Latest Publications in Sustainable Computing-Informatics & Systems

The early warning method for offshore wind turbine gearbox oil temperature based on FSTAE-ATT
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-08-09 DOI: 10.1016/j.suscom.2025.101180
Anping Wan , Shuai Peng , Khalil AL-Bukhaiti , Yunsong Ji , Shidong Ma
Offshore wind turbine gearboxes often experience malfunctions due to harsh environmental conditions, resulting in significant downtime and financial losses. This study presents an innovative early warning system for monitoring gearbox oil temperature using a novel FSTAE-ATT model. The system leverages SCADA data and employs Feature Mode Decomposition (FMD) to enhance feature extraction from gearbox oil temperature measurements. The FSTAE-ATT model integrates Convolutional Neural Networks (CNN) for spatial feature extraction and Long Short-Term Memory (LSTM) networks for temporal dependencies, augmented by a self-attention mechanism to highlight critical features. The model's reconstruction error serves as an early warning indicator for gearbox oil temperature anomalies. The effectiveness of the FSTAE-ATT model was validated using real-world data from an offshore wind farm in Yangjiang, Guangdong, China. Comparative analysis with other models, including STAE, STAE-ATT, AE, TAE, and SAE, demonstrated that the FSTAE-ATT model outperforms them with lower RMSE (e.g., 0.003452 for unit #40) and MAE (e.g., 0.002828 for unit #40). Additionally, significantly earlier warning times (e.g., up to 22 h and 36 min for unit #40) provide substantial lead time for preventative maintenance. This work contributes to advancing offshore wind turbine condition monitoring and fault detection, enhancing the sustainability and profitability of offshore wind energy systems.
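The core mechanism described above translates into a fairly small architecture. Below is a minimal sketch, not the authors' exact FSTAE-ATT: a spatio-temporal autoencoder with a Conv1d layer for spatial features, an LSTM for temporal dependencies, self-attention to re-weight time steps, and mean squared reconstruction error as the warning score. The layer sizes, 8-channel input, and window length are illustrative assumptions.

```python
import torch
import torch.nn as nn

class STAutoencoder(nn.Module):
    """Sketch of a conv + LSTM + self-attention autoencoder (not the paper's exact model)."""
    def __init__(self, n_features=8, hidden=32, heads=4):
        super().__init__()
        self.conv = nn.Conv1d(n_features, hidden, kernel_size=3, padding=1)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=heads, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                                  # x: (batch, window, features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # spatial feature extraction
        h, _ = self.encoder(h)                             # temporal dependencies
        h, _ = self.attn(h, h, h)                          # highlight critical time steps
        h, _ = self.decoder(h)
        return self.out(h)                                 # reconstructed window

def anomaly_score(model, x):
    """Mean squared reconstruction error per window; rising values warn early."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2))
```

A threshold calibrated on healthy-period scores (for example, mean plus three standard deviations) would turn this score into the actual warning trigger.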
Citations: 0
Exploring artificial intelligence potential in solar energy production forecasting: Methodology based on modified PSO optimized attention augmented recurrent networks
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-08-07 DOI: 10.1016/j.suscom.2025.101174
Luka Jovanovic , Nebojsa Bacanin , Aleksandar Petrovic , Miodrag Zivkovic , Milos Antonijevic , Vuk Gajic , Mahmoud Mohamed Elsayed , Mohamed Abouhawwash
The use of renewable power sources is vital for reducing the world's reliance on limited fossil fuels, reducing the impact on climate, and mitigating the losses associated with power transmission. However, renewable sources such as solar power often suffer from fluctuations in production due to their heavy reliance on weather conditions. This can have a significant impact on their reliability, as well as on the power grid. Nevertheless, these issues could be mitigated by powerful and robust forecasting models, allowing for more efficient planning and fuller utilization of the produced power. This work explores the use of artificial intelligence (AI) to predict the yield of photovoltaic-generated energy. Different artificial neural network architectures are explored, including the recurrent neural network (RNN), gated recurrent unit (GRU), and long short-term memory (LSTM). Additionally, an attention mechanism is integrated into the best-performing model to further improve its performance. To ensure favorable outcomes, an adapted variant of particle swarm optimization (PSO) is introduced to optimize the hyper-parameter settings of each model. Simulations with real-world data showcased promising results, while rigorous statistical analysis confirmed that the observed improvements are statistically significant. The best-performing models were subjected to feature importance analysis to guide future modeling and data collection efforts. The best-performing model attained an impressive normalized mean square error (MSE) and coefficient of determination (R2) of 0.007240 and 0.894693, respectively, suggesting strong prospects for real-world applications. Nonetheless, the introduction of the attention mechanism did not further improve the best-performing model. Lastly, this study confirmed that the modifications made to the baseline PSO strengthened the original approach, as it statistically significantly outperformed other metaheuristics.
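The tuning loop at the heart of this approach is easy to sketch. The snippet below is plain vanilla PSO over two hyper-parameters, not the paper's modified variant, and `validation_loss` is a hypothetical stand-in for an actual train-then-validate run of the RNN/GRU/LSTM.

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_loss(params):
    lr, hidden = params
    # Stand-in objective with a minimum near lr=1e-3, hidden=64; replace with
    # a real training run returning validation error.
    return (np.log10(lr) + 3) ** 2 + ((hidden - 64) / 64) ** 2

bounds = np.array([[1e-5, 1e-1], [8, 256]])          # (learning rate, hidden units)
n_particles, n_iters, w, c1, c2 = 20, 50, 0.7, 1.5, 1.5

pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([validation_loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    f = np.array([validation_loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best learning rate and hidden size:", gbest)
```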
Citations: 0
Phoeni6: A systematic approach for evaluating the energy consumption of neural networks
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-08-06 DOI: 10.1016/j.suscom.2025.101172
Antônio Oliveira-Filho , Wellington Silva-de-Souza , Carlos Alberto Valderrama Sakuyama , Samuel Xavier-de-Souza
This paper presents Phoeni6, a systematic approach for assessing the energy consumption of neural networks while upholding the principles of fair comparison and reproducibility. Phoeni6 offers a comprehensive solution for managing energy-related data and configurations, ensuring portability, transparency, and coordination during evaluations. The methodology automates energy evaluations through containerized tools, robust database management, and versatile data models. In the first case study, the energy consumption of AlexNet and MobileNet was compared using raw and resized images. Results showed that MobileNet is up to 6.25% more energy-efficient for raw images and 2.32% for resized datasets, while maintaining competitive accuracy levels. In the second study, the impact of image file formats on energy consumption was evaluated. BMP images reduced energy usage by up to 30% compared to PNG, highlighting the influence of file formats on energy efficiency. These findings emphasize the importance of Phoeni6 in optimizing energy consumption for diverse neural network applications and establishing sustainable artificial intelligence practices.
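One building block such evaluations rest on is measuring energy around a single inference call. The sketch below reads the Linux powercap (Intel RAPL) counter directly; whether that counter exists, at which path, and whether it is readable without elevated permissions all depend on the machine, and Phoeni6 itself wraps such measurements in containerized tools and a database that this sketch omits.

```python
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"   # CPU package-0 counter (Linux/Intel)

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

def measure(fn, *args):
    """Return (result, joules, seconds) for one call; ignores counter wraparound."""
    e0, t0 = read_uj(), time.perf_counter()
    result = fn(*args)
    e1, t1 = read_uj(), time.perf_counter()
    return result, (e1 - e0) / 1e6, t1 - t0

# Hypothetical usage: compare the same model on raw vs. resized inputs.
# _, joules_raw, _     = measure(model.predict, raw_batch)
# _, joules_resized, _ = measure(model.predict, resized_batch)
```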
Citations: 0
IoT and XAI-driven data aggregation framework for intelligent decision-making in smart healthcare systems
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-08-05 DOI: 10.1016/j.suscom.2025.101179
Azath Mubarakali , Asma AlJarullah
The Internet of Things (IoT) is used in healthcare to monitor patients via wearable sensors that measure different physiological parameters. Smart healthcare IoT-enabled sensors and medical devices collaborate with other smart devices to transfer collected sensitive healthcare data to the central server in a secure manner. However, this collected data suffers from noise, imbalance, privacy concerns, and challenges in real-time analysis. Thus, this work develops a novel IoT and Explainable Artificial Intelligence (XAI) based data aggregation framework for smart healthcare systems to enable accurate assessment of patient health status and real-time decision-making. Initially, body-integrated wearable sensors and devices collect physiological data, forming a comprehensive dataset. This data is then preprocessed and encrypted using Fully Homomorphic Encryption for secure transmission to the centralized servers. Meaningful features are extracted from the preprocessed data using autoencoders, which perform effective dimensionality reduction while preserving critical information. Finally, a Tabular Network (TabNet) classifies health status and risks with high precision. TabNet is a deep learning model specifically designed for structured data; it efficiently handles tabular data using attention mechanisms for feature selection and decision-making. The framework integrates XAI methods to provide interpretable predictions and actionable insights, ensuring transparency for healthcare providers. As a result, TabNet demonstrates a remarkable accuracy rate of 99.57%, making it possible for doctors to provide consultations at any time and thereby improving the efficiency of traditional medical systems.
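For the classification stage, one plausible concrete form (the paper does not name its implementation) is the open-source pytorch-tabnet package. In this hedged sketch, X stands in for the autoencoder-extracted features and y for the 0/1 health-status labels; both are synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from pytorch_tabnet.tab_model import TabNetClassifier

X = np.random.rand(1000, 32).astype(np.float32)   # stand-in for encoded features
y = np.random.randint(0, 2, size=1000)            # stand-in health-status labels

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

clf = TabNetClassifier(seed=0)
clf.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], max_epochs=50, patience=10)

print("validation accuracy:", (clf.predict(X_val) == y_val).mean())
# clf.feature_importances_ exposes per-feature attention mass, one signal an
# XAI layer can surface to clinicians alongside the prediction.
```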
Citations: 0
Enhancing intrusion detection and Kerberos attack prevention with an integrated blockchain and AI-based approach
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-07-25 DOI: 10.1016/j.suscom.2025.101178
Nisha Rajpal , Dinesh Rai
In today's interconnected digital world, securing computer networks and systems is critical. The increasing complexity and frequency of network attacks demand creative and effective intrusion detection solutions to protect against possible threats. Kerberos, an established token-based authentication technology, is notable for its cryptographic method, privacy assurance, and data protection when identifying eligible users. At the same time, it fails to offer proper channel protection for transmitting user credentials between the client and server pathways. This study presents an integrated approach for detecting Kerberos attacks and intrusions within computing systems and networks. It combines artificial intelligence and blockchain technology with a proxy re-encryption scheme to enhance security measures. After pre-processing, the input data is recorded on the blockchain, subjected to proxy re-encryption, and stripped of noise. The use of threshold proxy re-encryption in the consensus process eliminates dependence on third-party central service providers. Acting as proxy service nodes, a number of consensus nodes within the blockchain network re-encrypt data and combine the translated ciphertext. Throughout the process, no personal information is revealed. In this study, Principal Component Analysis and the Chi-square test are used to reduce dimensionality to the principal components with the greatest variation and to discover and select the features most relevant to the target variable. To distinguish normal systems from attacks, all selected important features are classified using a KNN classifier. Throughout the investigation, the proposed approach was evaluated on the openly available KDD-99 dataset.
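The feature-reduction and detection stages map naturally onto a scikit-learn pipeline. The sketch below uses a synthetic 41-feature matrix as a stand-in for KDD-99; the blockchain and proxy re-encryption layers are outside its scope, and k and the component counts are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in with KDD-99's 41 numeric features; y: 0 = normal, 1 = attack.
X, y = make_classification(n_samples=2000, n_features=41, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([
    ("scale", MinMaxScaler()),            # chi2 requires non-negative inputs
    ("chi2", SelectKBest(chi2, k=20)),    # keep features most relevant to the label
    ("pca", PCA(n_components=10)),        # compress to high-variance components
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])

pipe.fit(X_tr, y_tr)
print("detection accuracy:", pipe.score(X_te, y_te))
```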
Citations: 0
Optimizing microgrid energy management with hybrid energy storage systems using reinforcement learning methods
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-07-24 DOI: 10.1016/j.suscom.2025.101177
Lejia Li
With the growth of global energy demand and the pursuit of sustainable energy, microgrids, as an emerging energy supply system, are becoming increasingly important. However, the energy management of microgrid hybrid energy storage systems faces numerous challenges, including significant energy waste and poor power supply stability. This study aims to optimize the energy management of microgrid hybrid energy storage systems using reinforcement learning methods. By constructing a reinforcement learning model architecture based on the Markov decision process, the state space, action space, and reward function are systematically designed. The improved proximal policy optimization (PPO) algorithm is then used for implementation. Historical microgrid operation data spanning one year was preprocessed to normalize critical variables, and a simulation was run in a Python environment using OpenAI Gym and proprietary energy system dynamics. The experiment uses one year of operational data from a regional microgrid to compare the proposed approach against the traditional model, based on fixed-priority energy allocation rules, and a neural network model. The results show that the reinforcement learning model achieves an average annual energy management efficiency of 84.5%, a significant improvement over the 54.25% of the traditional model and 70% of the neural network model; its energy loss rate is only 8%, far below the 25% of the traditional model and 18% of the neural network model; and its comprehensive power supply stability index is 0.92, also better than the other models. This study provides an efficient and adaptable solution for microgrid energy management, which is expected to promote the healthy development of the microgrid industry.
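The Markov-decision-process setup can be made concrete with a toy environment. The sketch below assumes the gymnasium and stable-baselines3 packages; the state (state of charge, demand, renewable output), the 10%-per-hour charge limit, and the cost-only reward are simplifications of the paper's richer design, and the PPO here is the library's stock implementation rather than the improved variant.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class MicrogridEnv(gym.Env):
    """Toy hybrid-storage dispatch: action > 0 charges the battery, < 0 discharges."""
    def __init__(self):
        self.observation_space = spaces.Box(0.0, 1.0, shape=(3,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.soc, self.t = 0.5, 0

    def _obs(self):
        demand = 0.5 + 0.3 * np.sin(2 * np.pi * self.t / 24)    # daily demand cycle
        solar = max(0.0, np.sin(np.pi * (self.t % 24) / 24))    # daytime generation
        return np.array([self.soc, demand, solar], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.soc, self.t = 0.5, 0
        return self._obs(), {}

    def step(self, action):
        soc, demand, solar = self._obs()
        delta = float(np.clip(action[0], -1, 1)) * 0.1          # requested SOC change
        self.soc = float(np.clip(soc + delta, 0.0, 1.0))
        actual = self.soc - soc                                 # honoured battery limits
        grid_import = max(0.0, demand - solar + actual)         # shortfall bought from grid
        self.t += 1
        return self._obs(), -grid_import, self.t >= 24 * 30, False, {}

model = PPO("MlpPolicy", MicrogridEnv(), verbose=0)
model.learn(total_timesteps=20_000)
```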
Citations: 0
Deep learning-based workload prediction and resource provisioning for mobile edge-cloud computing in healthcare applications
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-07-23 DOI: 10.1016/j.suscom.2025.101176
Durga S , Esther Daniel , Deepakanmani S , Reshma V.K
Edge computing has been greatly aided by the rapid development of cloud computing and mobile communications. Even though there has been much interest in edge computing technologies, the majority of research has been application-specific and has not considered the control perspective of cloud providers, which offer general-purpose edge services. Thus, a new model called Parallel Convolutional MobileNet (PConvM-Net) is presented for resource provisioning and workload prediction. First, Multi-Access Edge Computing (MEC) for resource provisioning is considered; here, the resource provisioning manager includes two main components, workload estimation and monitoring. In the prediction module, workload prediction is performed using a Gated Recurrent Unit (GRU). In the decision module, the threshold scale-up process is executed. Moreover, to choose the number of resources in the scale-down and scale-up processes, PConvM-Net is utilized. The decision is based on parameters such as bandwidth, Central Processing Unit (CPU) load, memory usage, energy, and execution time. PConvM-Net is formulated by the amalgamation of MobileNet and a Parallel Convolutional Neural Network (PCNN). In simulations, PConvM-Net achieved a minimum execution time of 8.616 s, energy consumption of 39.876 J, CPU utilization of 83.877%, task response time of 7.644 s, SLA violation rate of 2.877%, and availability of 91.876%.
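The prediction module's role is straightforward to illustrate. Below is a minimal GRU forecaster on a synthetic noisy-sine "CPU load" series; the 24-step window, layer width, and training loop are illustrative, and the scale-up/scale-down decision logic would sit on top of predictions like these.

```python
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, window, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1])    # next workload value

# Synthetic stand-in for a workload trace.
t = torch.linspace(0, 20, 500)
series = (torch.sin(t) + 0.1 * torch.randn_like(t)).unsqueeze(-1)
X = torch.stack([series[i:i + 24] for i in range(400)])   # sliding windows
y = torch.stack([series[i + 24] for i in range(400)])     # next-step targets

model = GRUForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```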
Citations: 0
Does faster mean greener? Runtime and energy trade-offs in iOS applications with compiler optimizations
IF 3.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-07-21 DOI: 10.1016/j.suscom.2025.101166
José Miguel Aragón-Jurado , Abdul Ali Bangash , Bernabé Dorronsoro , Karim Ali , Abram Hindle , Patricia Ruiz
Smartphones outnumber people nowadays, requiring efficient energy management. High application energy use leads to faster battery drain and frequent recharging, negatively impacting both battery life and the environment. This cycle also contributes to rising electronic and chemical waste due to discarded mobile phone batteries. Compiler optimization flags may play a crucial role in mitigating these issues by optimizing software performance. However, there has been little research examining how compiler optimization flags impact the energy consumption of smartphone applications. This work presents an empirical study on the effect of the most aggressive iOS compiler optimizations on runtime, power consumption, and energy consumption across six different iOS applications. For each application, we developed a benchmark focused on the specified category we aimed to study. Our results show that reducing application runtime does not always directly correlate with improved energy consumption. In fact, we observed that optimizations aimed at enhancing runtime performance often come at an energy cost in the applications studied, highlighting a trade-off between runtime and energy consumption. For example, we found that using -Ounchecked in Swift, combined with -Oz from LLVM in the GhostRun video game, increases energy consumption by 34%, despite improving runtime performance by 9%.
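A study like this hinges on a repeatable build-and-measure harness. The sketch below compiles one Swift source at the standard swiftc levels (-Onone, -O, -Ounchecked) and times each binary; the source file name is hypothetical, and on real hardware an energy probe (a power meter, or RAPL on supported CPUs) would wrap the timed run.

```python
import subprocess
import time

SOURCE = "benchmark.swift"                     # hypothetical benchmark source
FLAG_SETS = [["-Onone"], ["-O"], ["-Ounchecked"]]

for flags in FLAG_SETS:
    binary = "bench_" + flags[0].lstrip("-")
    subprocess.run(["swiftc", *flags, SOURCE, "-o", binary], check=True)
    runs = []
    for _ in range(5):                         # repeat to smooth out noise
        t0 = time.perf_counter()
        subprocess.run(["./" + binary], check=True)
        runs.append(time.perf_counter() - t0)
    print(flags, "median runtime:", sorted(runs)[2], "s")
```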
Citations: 0
Smart grid stability prediction using artificial intelligence: A study based on the UCI smart grid stability dataset
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-07-18 DOI: 10.1016/j.suscom.2025.101175
Xuan Wang , XiaoFeng Zhang , Feng Zhou , Xiang Xu
Maintaining the stability of smart grids (SGs) helps ensure that power systems continue to function well and without interruption as renewable sources and variable demand rise. Conventional monitoring approaches tend to miss the first signs of instability, prompting the need for more intelligent solutions. This work studies the use of machine learning (ML) to classify and forecast SG stability, aiming to improve reliability and operational efficiency. Six algorithms, Random Forest (RF), Extreme Gradient Boosting (XGBoost), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Logistic Regression (LR), and Categorical Boosting (CatBoost), were tested using robust metrics: accuracy, precision, recall, F1-score, ROC AUC, log loss, Cohen's kappa, and the Matthews correlation coefficient. Model performance was improved using GridSearchCV and Bayesian Optimization (BO). BO-SVM achieved the highest accuracy, precision, recall, and F1-score (all 96.00%), as well as the greatest balanced accuracy, surpassing all other methods investigated. Moreover, CatBoost and XGBoost also produced steady and effective results under both optimization techniques. On the other hand, KNN exhibited overfitting and LR failed to capture stability patterns. These results show that optimized SVM models are well suited to real-time monitoring of smart grid stability. Such models support prompt, well-informed decisions, leading to stronger resilience in the smart grid and more efficient energy use. Deploying these models in real-time, noisy, and dynamic grid environments would broaden their applicability.
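The tuning half of this setup is easy to reproduce in outline. The sketch below runs GridSearchCV over an SVM on synthetic data with the UCI dataset's 12-feature shape; on the real data you would load the tau/p/g input variables and the binary stable/unstable label, and scikit-optimize's BayesSearchCV can stand in for GridSearchCV to get the Bayesian variant.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in with the dataset's 12 input variables.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(
    pipe,
    {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]},
    scoring="f1",
    cv=5,
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("held-out F1:", grid.score(X_te, y_te))
```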
Citations: 0
Multi-heterogeneous renewable energy scheduling optimization based on time series algorithm and green computing-driven sustainable development
IF 3.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2025-07-17 DOI: 10.1016/j.suscom.2025.101173
Chaoran Ma , Puguang Hou
The integration of heterogeneous renewable energy sources, such as wind and solar, poses significant challenges to the dynamic economic and environmental dispatch of power systems due to their intermittent and uncertain nature. Efficient coordination between generation and consumption is crucial to ensure stability, reduce emissions, and lower costs. Accurate forecasting of renewable outputs is a critical prerequisite for achieving optimal dispatch decisions. To address this, we propose a hybrid prediction and scheduling framework that leverages time series forecasting to support real-time dispatch optimization. Specifically, we develop a novel prediction model based on a Completely Input and Output-connected Long Short-Term Memory (CIAO-LSTM) network, whose parameters are optimized using an Improved Fruit Fly Optimization Algorithm (IFOA). This approach enhances the model's ability to capture both linear and nonlinear temporal features and improves convergence through adaptive search strategies. The predicted outputs are then incorporated into a rolling real-time scheduling model that jointly minimizes generation costs and pollutant emissions. Simulation results on a six-unit power system demonstrate that our approach significantly improves prediction accuracy and dispatch performance, reducing average generation costs and emissions by over 8% and 16%, respectively. These results confirm the effectiveness of the proposed method in promoting green and sustainable power systems.
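The fruit fly optimization step lends itself to a compact sketch. Below, flies sample random offsets around the swarm's current location, the best position becomes the new center, and the search radius shrinks over iterations (one common "improved FOA" tweak, not necessarily the paper's IFOA). `forecast_error` is a hypothetical stand-in for training the LSTM and returning its validation RMSE.

```python
import numpy as np

rng = np.random.default_rng(1)

def forecast_error(params):
    lr, hidden = params
    # Stand-in: pretend validation RMSE is minimized at lr=1e-3, hidden=96.
    return (np.log10(lr) + 3) ** 2 + ((hidden - 96) / 96) ** 2

center = np.array([1e-2, 32.0])            # initial (learning rate, hidden units)
radius = np.array([5e-3, 32.0])

for _ in range(60):
    flies = center + rng.uniform(-1, 1, size=(25, 2)) * radius
    flies[:, 0] = np.clip(flies[:, 0], 1e-5, 1e-1)
    flies[:, 1] = np.clip(flies[:, 1], 8, 256)
    smells = np.array([forecast_error(f) for f in flies])
    center = flies[smells.argmin()]        # swarm moves to the best-smelling fly
    radius *= 0.95                         # adaptive shrinking of the search radius

print("tuned learning rate and hidden units:", center)
```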
Citations: 0