
Latest publications in Sustainable Computing-Informatics & Systems

Energy-efficient power marketing optimization using XGBoost for enhanced market performance
IF 5.7 · CAS Zone 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-01 · Epub Date: 2025-11-01 · DOI: 10.1016/j.suscom.2025.101243
Jingxian Lu, Junfeng Li, Guoyi Zhao, Kunpeng Liu, Jing Yang
In today's competitive power markets, marketing efforts must be tuned carefully to maximize revenue from end-users while ensuring grid resilience and stability. Currently used techniques exhibit poor prediction performance, cannot allocate energy optimally, and adapt slowly to fluctuating market parameters, yielding sub-optimal solutions. The contribution of this research is a new approach that applies the Extreme Gradient Boosting (XGBoost) algorithm to power marketing. The proposed work addresses these difficulties by using feature importance and gradient-based learning to improve the model's prediction capability and to fine-tune the pricing framework. The model's performance is measured and analyzed in terms of three technical power performance parameters: Energy Utilization Efficiency (EUE), Load Factor (LF), and Power Loss Reduction (PLR). Experiments demonstrated an improvement of EUE to 92 %, an increase in LF from 0.78 to 0.91, and a 15 % reduction in power losses (PLR) compared to the standard algorithm. MATLAB-based simulation studies on real-world power market data confirm the model's usefulness in real, dynamic, large-scale power systems. The approach is thus a highly effective and efficient means of improving market performance and operational functionality.
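The abstract reports three standard metrics without giving formulas; as a rough sketch of their usual definitions (function names and the sample load profile are hypothetical, not from the paper):

```python
def load_factor(loads):
    """LF: average demand divided by peak demand over the period (0..1)."""
    return sum(loads) / (len(loads) * max(loads))

def energy_utilization_efficiency(delivered_kwh, supplied_kwh):
    """EUE: share of supplied energy usefully delivered, as a percentage."""
    return 100.0 * delivered_kwh / supplied_kwh

def power_loss_reduction(baseline_loss_kwh, optimized_loss_kwh):
    """PLR: percentage drop in losses relative to a baseline scheme."""
    return 100.0 * (baseline_loss_kwh - optimized_loss_kwh) / baseline_loss_kwh

# Hypothetical hourly load profile (MW) and energy figures
hourly_loads = [60, 70, 85, 100, 90, 75]
print(round(load_factor(hourly_loads), 2))         # -> 0.8
print(energy_utilization_efficiency(92.0, 100.0))  # -> 92.0
print(power_loss_reduction(20.0, 17.0))            # -> 15.0
```

Under these definitions, the reported 15 % PLR corresponds, for example, to losses falling from 20 to 17 units against the baseline.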
Sustainable Computing-Informatics & Systems, Vol. 49, Article 101243.
Citations: 0
Retraction notice to “Energy-efficient blockchain-integrated IoT and AI framework for sustainable urban microclimate management” [Sustain. Comput.: Inf. Syst. 47 (2025) 101137]
IF 5.7 · CAS Zone 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-01 · Epub Date: 2025-12-29 · DOI: 10.1016/j.suscom.2025.101227
N. Krishnaraj , Hadeel Alsolai , Fahd N. Al-Wesabi , Yahia Said , Ali Alqazzaz , S. Gayathri Priya , S. Shanmathi , B. Narmada
Sustainable Computing-Informatics & Systems, Vol. 49, Article 101227.
Citations: 0
Multi-objective energy-efficient power system scheduling using Stochastic State Space Model and reinforcement learning
IF 5.7 · CAS Zone 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-09 · DOI: 10.1016/j.suscom.2025.101224
Jiaying Wang, Xiaoqian Meng, Xuan Yang, Haibing Yin, Pingkai Fang
The increasing complexity of modern power systems, driven by rising electricity demand and large-scale integration of renewable energy resources, creates significant challenges for real-time scheduling and operational reliability. Conventional deterministic scheduling processes largely fail to capture the inherent uncertainty and variability of wind and solar generation and of fluctuating load demand, which leads to inefficient resource provisioning and increased operational costs. This research proposes a novel multi-objective power scheduling framework that combines a Stochastic State-Space Model (SSSM) with Reinforcement Learning (RL) for dynamic management of generation, storage, and demand uncertainty. The SSSM captures the stochastic variability of renewable generation, uncertain demand profiles, and exogenous system contingencies. The RL agent continuously learns the best scheduling strategies while operating the power system, minimizing operational costs while improving availability and maximizing scheduling performance. Simulation results using Monte Carlo testing over a 24-hour horizon demonstrated that the proposed method achieved a reduction of up to 20 % in operational costs, a 10 % gain in system availability, and scheduling efficiencies of over 90 % compared to traditional methods. The proposed approach offers a feasible way forward for power system operators to meet cost, reliability, and sustainability objectives simultaneously under uncertainty, and is relevant to real-time operation in smart grid systems, particularly those with high renewable penetration.
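The abstract does not specify the RL method; as a toy illustration of the reinforcement-learning side, a tabular Q-learning sketch for a two-state storage-dispatch problem stands in for the paper's (unspecified) agent. The states, actions, rewards, and hyperparameters below are all invented:

```python
import random

random.seed(0)

STATES = ["low_demand", "high_demand"]
ACTIONS = ["charge", "discharge"]

def reward(state, action):
    # Invented reward: charge when demand is low, discharge when it is high.
    good = (state == "low_demand" and action == "charge") or \
           (state == "high_demand" and action == "discharge")
    return 1.0 if good else -1.0

def step(state):
    # Toy deterministic dynamics: demand alternates each period.
    return "high_demand" if state == "low_demand" else "low_demand"

Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}
alpha, gamma, eps = 0.5, 0.9, 0.1

state = "low_demand"
for _ in range(2000):
    action = random.choice(ACTIONS) if random.random() < eps \
             else max(Q[state], key=Q[state].get)
    r, nxt = reward(state, action), step(state)
    # Standard Q-learning update rule
    Q[state][action] += alpha * (r + gamma * max(Q[nxt].values()) - Q[state][action])
    state = nxt

policy = {s: max(Q[s], key=Q[s].get) for s in STATES}
print(policy)  # charges when demand is low, discharges when it is high
```

A real scheduler would replace the toy dynamics with SSSM-sampled trajectories and a much richer state space, but the update rule is the same.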
Sustainable Computing-Informatics & Systems, Vol. 48, Article 101224.
Citations: 0
A novel ultra-low power post quantum approach using artificial intelligence based key generation for cyber physical system in Internet of things
IF 5.7 · CAS Zone 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-11-11 · DOI: 10.1016/j.suscom.2025.101242
Ankita Sarkar , Mansi Jhamb
The expansion of Internet of Things (IoT) devices has revolutionized various industries, particularly healthcare, where the Internet of Medical Things (IoMT) enables real-time data collection, analysis, and secure transmission of sensitive patient information. However, these resource-constrained devices face significant security challenges, particularly with the advent of quantum computing. This work introduces an intelligent cryptographic framework tailored to these challenges, integrating lightweight cryptographic primitives, chaotic systems, and quantum-resistant techniques. Performance evaluation using image metrics such as Mean Squared Error (MSE), Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index (SSIM) demonstrates the framework's effectiveness. The results indicate an average MSE of 5590.816, an MAE of 83.909, a PSNR of 8.044 dB, and an SSIM of 0.0224, demonstrating strong encryption, with the cipher image bearing little structural similarity to the original. Furthermore, this hybrid cryptographic system ensures diffusion, nonlinearity, randomness, and strong key dependency while demonstrating resistance to cryptanalytic and quantum attacks. The proposed framework is computationally efficient, making it particularly well suited to resource-constrained IoMT devices, with a minimum energy consumption of 3.536 µJ.
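The cipher itself is not shown in the abstract; as a sketch of how the reported image metrics are computed, minimal MSE/MAE/PSNR implementations for 8-bit pixel data (the flat pixel lists are hypothetical, not from the paper):

```python
import math

def mse(a, b):
    """Mean Squared Error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def mae(a, b):
    """Mean Absolute Error between two equal-length pixel sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """Peak Signal-to-Noise Ratio in dB. A *low* PSNR between plain and
    cipher image indicates strong visual scrambling, as in the paper."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

plain  = [0, 64, 128, 255]       # hypothetical plain pixels
cipher = [255, 17, 202, 3]       # hypothetical encrypted pixels
print(round(mse(plain, cipher), 1))   # -> 34053.5
print(round(mae(plain, cipher), 1))   # -> 157.0
print(round(psnr(plain, cipher), 2))  # -> 2.81
```

SSIM needs local luminance/contrast statistics over windows and is omitted here for brevity.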
Sustainable Computing-Informatics & Systems, Vol. 48, Article 101242.
Citations: 0
Power management for smart grids integrating renewable energy sources using Greylag goose optimization and anti-interference dynamic integral neural network
IF 5.7 · CAS Zone 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-09-02 · DOI: 10.1016/j.suscom.2025.101199
G.K. Jabash Samuel , P. Rajendran , Papana Venkata Prasad , Chinthalacheruvu Venkata Krishna Reddy
This paper proposes a hybrid power management strategy for smart grids (SGs) that integrates renewable energy sources (RESs), such as battery energy storage systems (BESS), fuel cells (FCs), wind turbines (WT), and solar photovoltaic (PV). The GGO-AIDINN approach integrates Greylag Goose Optimization (GGO) and an Anti-Interference Dynamic Integral Neural Network (AIDINN) to address high emissions during low renewable energy (RE) availability and rising operational costs from advanced infrastructure. The GGO optimizes resource allocation and energy distribution, maximizing the use of available RE. Meanwhile, AIDINN predicts energy consumption patterns based on weather conditions, improving overall system performance. The proposed GGO-AIDINN model is implemented on MATLAB and evaluated against several existing methods, including Fuzzy Logic Control (FLC), Non-dominated Sorting Genetic Algorithm (NSGA-II), and others. Results show the hybrid method achieves significant improvements, with an operational cost of $1328 per MW, emissions of 13.76 kg per MW, and an efficiency of 98.7 %. These outcomes demonstrate that GGO-AIDINN outperforms traditional techniques, offering lower costs, reduced emissions, and enhanced system efficiency. This makes it a superior solution for sustainable power management in SGs incorporating RESs and BESS.
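GGO is a published metaheuristic whose internals are not given in the abstract; this sketch only illustrates the kind of weighted cost-emission objective such an optimizer would minimize, with plain random search standing in for GGO. The cost and emission models and every coefficient below are invented for illustration:

```python
import random

random.seed(1)

def objective(dispatch, w_cost=0.7, w_emis=0.3):
    """Invented weighted objective: fuel-like cost plus emissions,
    both quadratic in each unit's dispatched power (MW)."""
    cost = sum(0.5 * p ** 2 + 10 * p for p in dispatch)
    emis = sum(0.2 * p ** 2 for p in dispatch)
    return w_cost * cost + w_emis * emis

def random_search(demand_mw, n_units, iters=5000):
    """Stand-in for GGO: sample feasible dispatches that exactly meet
    demand (via sorted random cut points) and keep the best one found."""
    best, best_f = None, float("inf")
    for _ in range(iters):
        cuts = sorted(random.uniform(0, demand_mw) for _ in range(n_units - 1))
        shares = [b - a for a, b in zip([0] + cuts, cuts + [demand_mw])]
        f = objective(shares)
        if f < best_f:
            best, best_f = shares, f
    return best, best_f

dispatch, f = random_search(demand_mw=100, n_units=3)
print([round(p, 1) for p in dispatch], round(f, 1))
```

With this symmetric quadratic objective the optimum is a near-equal split across units; a real metaheuristic such as GGO explores the same feasible set far more efficiently.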
Sustainable Computing-Informatics & Systems, Vol. 48, Article 101199.
Citations: 0
Toward a secure and scalable IoT: A survey of IOTA-based distributed ledger technologies
IF 5.7 · CAS Zone 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-14 · DOI: 10.1016/j.suscom.2025.101225
Tariq Alsboui , Hussain Al-Aqrabi , Ahmad Manasrah , Mahmoud Artemi
The increasing adoption of Internet of Things (IoT) systems demands secure, energy-efficient, and scalable solutions capable of supporting mission-critical operations. Traditional blockchain-based Distributed Ledger Technologies (DLTs), however, face limitations such as high energy consumption, poor scalability, and transaction fees, making them less ideal for IoT environments. This paper presents a structured review of IOTA’s Tangle, a lightweight, feeless, and scalable DLT designed specifically for decentralized IoT architectures. The study categorizes recent IOTA-based approaches into four key domains: security, privacy, scalability, and energy efficiency. The surveyed literature is systematically classified and analyzed, highlighting the core challenges addressed by each approach. Comparative evaluation reveals the strengths and limitations of current methods in meeting IoT requirements. The findings suggest that while IOTA offers several advantages over traditional blockchains, integrating hybrid and comprehensive solutions remains a promising direction for future research. The paper concludes by outlining open challenges and opportunities for advancing IOTA-enabled IoT systems.
Sustainable Computing-Informatics & Systems, Vol. 48, Article 101225.
Citations: 0
Energy-efficient resource scheduling scheme using modified load adaptive sequence arrangement (M-LASA) with FILO polling for optical access network
IF 5.7 · CAS Zone 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-04 · DOI: 10.1016/j.suscom.2025.101223
Mohan V , Senthil Kumar T , Chitrakala G
Power conservation is receiving growing attention as a way to enhance the performance of passive optical networks. Optical networks use polling sequences that schedule resources according to traffic load to achieve better energy efficiency. Various sequence schemes have been introduced by the research community; among them, the load adaptive sequence arrangement (LASA) is well suited to optical access networks. This research proposes a Modified LASA (M-LASA) model that improves energy efficiency in Optical Access Networks (OAN) by integrating a First-In-Last-Out (FILO) polling sequence. The proposed scheme increases the optical network units' (ONUs) idle time, thereby reducing power consumption significantly compared to traditional scheduling strategies. Simulation results reveal that the proposed M-LASA-FILO scheme outperforms existing methods, such as fixed polling DFB, fixed polling VCSEL, LASA, FILO-DFB, and FILO-VCSEL, in terms of reduced power consumption, improved energy savings, higher sleep count, lower delay, and minimized polling cycle time. For instance, the proposed model achieves maximum energy savings and lower delay even at increased idle time and higher traffic load, confirming its efficiency and robustness under dynamic network conditions.
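The scheduling algorithm itself is not reproduced in the abstract; a minimal sketch of the energy model behind its central claim, namely that lengthening an ONU's sleep share of each polling cycle lowers per-cycle energy. The power figures and cycle lengths below are hypothetical:

```python
def cycle_energy_mj(cycle_ms, active_ms, p_active_mw=1000.0, p_sleep_mw=50.0):
    """Energy consumed by one ONU per polling cycle: full power while
    active (transmitting/receiving), low power while asleep."""
    sleep_ms = cycle_ms - active_ms
    return (p_active_mw * active_ms + p_sleep_mw * sleep_ms) / 1000.0  # mJ

# Hypothetical 10 ms cycle: a FILO-style ordering that stretches the
# idle window lets the ONU sleep longer within the same cycle.
baseline = cycle_energy_mj(cycle_ms=10.0, active_ms=6.0)  # short idle window
filo     = cycle_energy_mj(cycle_ms=10.0, active_ms=3.0)  # longer idle window
saving = 100.0 * (baseline - filo) / baseline
print(round(baseline, 2), round(filo, 2), round(saving, 1))  # -> 6.2 3.35 46.0
```

The model makes explicit why higher sleep counts and longer idle time translate directly into energy savings.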
Sustainable Computing-Informatics & Systems, Vol. 48, Article 101223.
Citations: 0
A resilient IoT-enabled framework using hybrid decision tree and wavelet transform for secure and sustainable photovoltaic energy management
IF 5.7 · CAS Zone 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-01 · DOI: 10.1016/j.suscom.2025.101221
Mahmoud Elsisi , Mohammed Amer , Mahmoud N. Ali , Chun-Lien Su
The increasing integration of photovoltaic (PV) systems into smart grids necessitates resilient and secure monitoring frameworks to mitigate the impact of cyber threats such as false data injection (FDI) attacks. This study presents an Internet of Things (IoT)-enabled architecture that combines a hybrid decision tree model with the continuous wavelet transform (DT-CWT) for real-time anomaly detection and performance monitoring in PV systems. The CWT performs time-frequency decomposition, and the extracted scalograms are fed into a lightweight DT model. Designed for computational efficiency and low memory overhead, the proposed framework is optimized for deployment in resource-constrained edge environments. Experimental results demonstrate that the DT-CWT hybrid model achieves a detection accuracy of 97.89 % with a processing latency of 1.32 ms on edge devices, and maintains operational resilience under adversarial conditions, outperforming traditional machine learning baselines such as Linear Discriminant Analysis (LDA), Gaussian Naïve Bayes (GNB), the Support Vector Classifier (SVC), Random Forest (RF), and a plain DT. This approach ensures data integrity, strengthens cybersecurity, and supports intelligent energy management, contributing to resilient and sustainable power grids aligned with Industry 4.0 and global sustainability goals.
Sustainable Computing-Informatics & Systems, Vol. 48, Article 101221.
Citations: 0
Federated deep learning for secure and energy-efficient cyber threat mitigation in smart grid automation
IF 5.7 · CAS Tier 3, Computer Science · JCR Q1 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) · Pub Date: 2025-12-01 · Epub Date: 2025-11-05 · DOI: 10.1016/j.suscom.2025.101248
Mohammed Shuaib
This research presents a federated deep-learning (DL) based cybersecurity platform for smart-grid automation, focusing on privacy, distributed intelligence, and energy efficiency. The federated learning setup lets grid-edge devices (such as substations and smart meters) cooperatively train a threat-detection model without sharing raw data, preserving local confidentiality. The proposed architecture is a hybrid convolutional neural network (CNN)-long short-term memory (LSTM) model that runs locally to predict spatiotemporal threats, with model synchronization performed by the Federated Averaging (FedAvg) algorithm. The model achieves a Threat Detection Accuracy (TDA) of 97.2 per cent and a False Alarm Rate of 3.6 per cent. Compared to centralized learning, communication overhead is reduced by 41 % while control response latency is maintained. Optimized update intervals and pruning of edge models cut training energy consumption by 22 % relative to the original. The system's resilience to fake data injection and command-spoofing attacks is verified by simulation on the modified KDD'99 data set and realistic grid scenarios in NS-3. The federated solution scales across heterogeneous grid resources. Overall, this study offers a secure and energy-efficient approach to mitigating evolving cyber threats in real-time smart-grid settings.
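The FedAvg synchronization step named in the abstract is, at its core, a sample-size-weighted average of client parameters: the server never sees raw data, only locally trained weights. A minimal sketch of that rule (the layer shapes, client names, and sample counts below are invented for illustration):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated Averaging: replace the global parameters with the
    sample-size-weighted mean of the clients' locally trained ones."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum((size / total) * weights[layer]
            for weights, size in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Two hypothetical grid-edge clients (a substation and a meter pool),
# each holding one weight matrix and one bias vector of a shared model.
substation = [np.array([[0.0, 0.0], [0.0, 0.0]]), np.array([0.0, 0.0])]
meter_pool = [np.array([[4.0, 8.0], [4.0, 8.0]]), np.array([4.0, 8.0])]

# The meter pool saw 3x the data, so it contributes 3x the weight.
global_model = fedavg([substation, meter_pool], client_sizes=[100, 300])
```

Only the parameter lists cross the network; the substation and meter measurements themselves never leave the edge, which is the privacy property the paper builds on.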
Sustainable Computing-Informatics & Systems, Vol. 48, Article 101248 (Dec. 2025)
Citations: 0
Continual learning-based regression testing for scalable VLSI verification across hierarchical design layers
IF 5.7 · CAS Tier 3, Computer Science · JCR Q1 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) · Pub Date: 2025-12-01 · Epub Date: 2025-11-24 · DOI: 10.1016/j.suscom.2025.101259
Sindhu Nalla , G. Nagarajan
The high complexity and rapid evolution of Very Large-Scale Integration (VLSI) designs are pressing the limits of traditional regression testing, especially in maintaining test relevance across design iterations. This paper introduces a Continual Learning-Based Regression Testing (CLRT) framework designed for scalable VLSI verification across hierarchical abstraction levels such as logic, design, and chip. The framework overcomes the drawbacks of static test models by continuously learning from every new design change and test result and adjusting the test strategy accordingly. To achieve this, the approach uses a two-layer learning mechanism: the first layer is a supervised model trained on historical test outcomes to detect regression-sensitive regions of the design space, and the second is an online continual-learning module that adapts sequentially to new data without catastrophic forgetting. This lets the system retain learned test behavior while adapting to changing design configurations. A hybrid feature selection mechanism extracts the effective parameters from design-level netlists, logic-level signal traces, and chip-level fault logs for thorough cross-layer coverage. Experimental verification was performed on the ITC-99 and OpenCores VLSI benchmark designs. Compared to traditional regression suites, the proposed CLRT framework reduced test redundancy by a remarkable 28.6 % and improved fault detection accuracy by 35.2 %. Moreover, the system maintained stable performance across design variations, making it robust under dynamic testing conditions.
The findings confirm that continual-learning models, when effectively integrated into baseline regression testing flows, can drastically improve the efficiency, flexibility, and scalability of VLSI verification. This work not only connects machine learning with hierarchical VLSI testing but also paves the way for future self-improving test infrastructures in semiconductor design automation.
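The online layer of CLRT — adapting to each new regression run without catastrophically forgetting older failures — can be mimicked with an exponential moving average of per-test failure rates. This is an illustrative stand-in, not the paper's model: the decay constant, the test names, and the ranking rule are all assumptions.

```python
from collections import defaultdict

class TestPrioritizer:
    """Online per-test risk score: an exponential moving average of recent
    failures. A decay below 1 adapts to new design revisions while keeping
    a trace of past failures, so old faults are not forgotten outright."""

    def __init__(self, decay=0.7):
        self.decay = decay
        self.score = defaultdict(float)

    def record(self, results):
        """results: {test_name: failed?} observed in one regression run."""
        for name, failed in results.items():
            new = 1.0 if failed else 0.0
            self.score[name] = self.decay * self.score[name] + (1 - self.decay) * new

    def rank(self):
        """Highest-risk tests first, so they run earliest next time."""
        return sorted(self.score, key=self.score.get, reverse=True)

prioritizer = TestPrioritizer(decay=0.7)
prioritizer.record({"alu_carry": True, "fifo_overflow": False})   # revision 1
prioritizer.record({"alu_carry": False, "fifo_overflow": True})   # revision 2
```

After the second run the fresher `fifo_overflow` failure outranks the older `alu_carry` one, yet `alu_carry` keeps a nonzero score — a toy version of adapting without forgetting.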
Sustainable Computing-Informatics & Systems, Vol. 48, Article 101259 (Dec. 2025)
Citations: 0