
Latest publications in Sustainable Computing-Informatics & Systems

Partitioned scheduling in mixed-criticality systems with thermal-constrained and semi-clairvoyance
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-10-04 · DOI: 10.1016/j.suscom.2025.101217
Yi-Wen Zhang, Jin-Peng Ma
With the exponential growth of power density in modern high-performance processors, energy consumption has risen significantly and chip temperatures have increased. Reducing energy consumption and temperature have therefore become two important issues in mixed-criticality system (MCS) design. This paper focuses on semi-clairvoyant scheduling for MCS on multiprocessor platforms. In semi-clairvoyant scheduling, high-criticality jobs reveal upon arrival whether their execution time will exceed their Worst-Case Execution Time in the low-criticality mode. First, we derive temperature constraints for the MCS task set based on steady-state thermal analysis. Second, we propose a new thermal-aware partitioned semi-clairvoyant scheduling algorithm, TAPMC, which aims to minimize normalized energy consumption under threshold temperature constraints. Finally, we evaluate TAPMC experimentally against benchmark algorithms; the results show that TAPMC surpasses the other algorithms in normalized energy consumption.
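The steady-state thermal analysis the abstract mentions can be illustrated with a lumped thermal model, T = T_amb + R_th * P. The sketch below is not the paper's TAPMC test; the power/utilization representation, the resistance and ambient values, and the feasibility rule are all illustrative assumptions:

```python
def steady_state_temp(power_w, t_ambient=25.0, r_thermal=0.8):
    """Steady-state core temperature under a lumped thermal model: T = T_amb + R_th * P."""
    return t_ambient + r_thermal * power_w

def max_power_under_threshold(t_threshold, t_ambient=25.0, r_thermal=0.8):
    """Invert the steady-state model to get the power budget implied by a temperature cap."""
    return (t_threshold - t_ambient) / r_thermal

def feasible_on_processor(task_powers, t_threshold, t_ambient=25.0, r_thermal=0.8):
    """Treat a partition as thermally feasible if its utilization-weighted average power
    keeps the steady-state temperature at or below the threshold.
    `task_powers` is a list of (power_watts, utilization) pairs assigned to one core."""
    avg_power = sum(p * u for p, u in task_powers)
    return steady_state_temp(avg_power, t_ambient, r_thermal) <= t_threshold
```

A partitioning heuristic would call `feasible_on_processor` before admitting each task to a candidate core, rejecting assignments that would push the core past the threshold.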
Citations: 0
Intelligent reinforcement learning for enhanced energy efficiency in hybrid electric vehicles
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-10-04 · DOI: 10.1016/j.suscom.2025.101219
Shilpa Ghode, Mayuri Digalwar
Energy Management in Hybrid Electric Vehicles (EMinHEVs) refers to optimizing energy flow within a vehicle’s powertrain to enhance efficiency and range. This process involves complex tasks such as power analysis, component characterization, and hyperparameter reconfiguration, which directly impact the performance of energy management algorithms. However, existing optimization models struggle with scalability and inter-component correlations, limiting their effectiveness. This paper introduces a novel model-based hybrid framework combining Deep Dyna Reinforcement Learning (D2RL) with Genetic Optimization to address these challenges. Unlike conventional model-free approaches, D2RL leverages a learned internal model to simulate future states, enabling more efficient decision-making and parameter tuning. The framework dynamically refines critical engine parameters (speed, power, and torque) for both the generator and motor. Initially, D2RL estimates optimal parameter sets, which are then fine-tuned using a Genetic Optimizer. This optimizer incorporates an augmented reward function to iteratively enhance energy efficiency and vehicle performance. The proposed method outperforms state-of-the-art techniques, including Optimal Logical Control, the Adaptive Equivalent Consumption Minimization Strategy, and the Learnable Partheno-Genetic Algorithm. Experimental results demonstrate a 3.5% reduction in engine costs, an 8.3% improvement in fuel efficiency, optimized torque characteristics, and minimized current requirements. These findings establish the approach as a scalable and effective solution for intelligent energy management in hybrid electric vehicles, offering a significant advancement in model-based optimization strategies.
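The "estimate, then genetically fine-tune" loop can be sketched as a minimal (mu + lambda) genetic refinement around an RL-provided seed. The operating-point targets and the quadratic cost below stand in for the paper's augmented reward and are purely illustrative:

```python
import random

def genetic_refine(seed, cost, generations=60, pop_size=20, sigma=0.5, rng=None):
    """Refine a parameter vector (e.g. RL-estimated speed/power/torque set points)
    with a minimal genetic loop: Gaussian mutation plus elitist truncation selection."""
    rng = rng or random.Random(0)
    pop = [list(seed)] + [[x + rng.gauss(0, sigma) for x in seed] for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=cost)                                  # lowest cost first
        parents = pop[: pop_size // 2]                      # elitist: best half survives
        children = [[x + rng.gauss(0, sigma * 0.3) for x in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=cost)

# Toy stand-in for the augmented reward: penalize deviation from an assumed
# efficient operating point (hypothetical speed / power / torque targets).
target = [1500.0, 30.0, 120.0]
cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best = genetic_refine([1400.0, 25.0, 100.0], cost)
```

Because the best individual always survives selection, the refined cost can never be worse than the seed's.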
Citations: 0
Trade-offs between power consumption and response time in deep learning systems: A queueing model perspective
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-10-04 · DOI: 10.1016/j.suscom.2025.101220
Yuan Yao, Bin Zhu, Yang Xiao, Hao Liu
Deep learning has revolutionized numerous fields, yet the computational resources required for training these models are substantial, leading to high energy consumption and associated costs. This paper explores the trade-off between energy usage and system performance, specifically focusing on the average waiting time of tasks in environments that manage multiple types of jobs with varying levels of priority. Recognizing that not all training tasks have the same urgency, we introduce a framework for optimizing GPU energy consumption by adjusting power limits based on job priority. Using matrix geometric approximations, we develop an algorithm to calculate the mean sojourn time and average power consumption for such systems. Through a series of experiments and simulations, we validate the model’s accuracy and demonstrate the existence of a power-performance trade-off. Our findings provide valuable guidance for practitioners seeking to balance the computational efficiency of deep learning workflows with the need for energy conservation, offering potential for both cost reduction and sustainability in large-scale AI systems.
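The power-performance trade-off the paper analyzes with matrix-geometric methods can be illustrated with a much simpler queue: an M/M/1 system whose service rate scales with a GPU power limit. The cubic power model and every number below are assumptions for illustration, not the paper's model:

```python
def mean_sojourn_time(arrival_rate, service_rate):
    """M/M/1 mean sojourn time E[T] = 1 / (mu - lambda); requires mu > lambda."""
    if service_rate <= arrival_rate:
        raise ValueError("unstable queue: service rate must exceed arrival rate")
    return 1.0 / (service_rate - arrival_rate)

def gpu_power(freq_ratio, p_static=50.0, p_dyn_max=250.0):
    """Assumed power model: static floor plus a dynamic part cubic in the clock ratio."""
    return p_static + p_dyn_max * freq_ratio ** 3

def tradeoff_curve(arrival_rate, mu_max, ratios):
    """Sweep power-limit settings (expressed as frequency ratios) and report
    (average power, mean sojourn time) pairs for the stable settings."""
    curve = []
    for r in ratios:
        mu = mu_max * r                      # service rate scales with the clock
        if mu > arrival_rate:
            curve.append((gpu_power(r), mean_sojourn_time(arrival_rate, mu)))
    return curve

curve = tradeoff_curve(arrival_rate=4.0, mu_max=10.0, ratios=[0.5, 0.75, 1.0])
```

Raising the power limit monotonically lowers the sojourn time here, which is exactly the trade-off a priority-aware power-capping policy would navigate per job class.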
Citations: 0
A resilient IoT-enabled framework using hybrid decision tree and wavelet transform for secure and sustainable photovoltaic energy management
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-10-01 · DOI: 10.1016/j.suscom.2025.101221
Mahmoud Elsisi, Mohammed Amer, Mahmoud N. Ali, Chun-Lien Su
The increasing integration of photovoltaic (PV) systems into smart grids necessitates resilient and secure monitoring frameworks to mitigate the impact of cyber threats such as false data injection (FDI) attacks. This study presents an Internet of Things (IoT)-enabled architecture that leverages a hybrid decision tree model combined with continuous wavelet transform (DT-CWT) for real-time anomaly detection and performance monitoring in PV systems. The CWT performs time-frequency decomposition, and the extracted scalograms are fed into a lightweight DT model. Designed for computational efficiency and low memory overhead, the proposed framework is optimized for deployment in resource-constrained edge environments. Experimental results demonstrate that the DT-CWT-based hybrid model achieves a detection accuracy of 97.89 % with a processing latency of 1.32 ms on edge devices while preserving operational resilience, outperforming traditional machine learning baselines (e.g., Linear Discriminant Analysis (LDA), Gaussian Naïve Bayes (GNB), Support Vector Classifier (SVC), Random Forest (RF), and Decision Tree (DT)) under adversarial conditions. This approach ensures data integrity, strengthens cybersecurity, and supports intelligent energy management, contributing to the realization of resilient and sustainable power grids aligned with Industry 4.0 and global sustainability goals.
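A toy version of the scalogram-based detection idea: correlate the measurement stream with a hand-rolled ricker ("Mexican hat") wavelet at one scale and flag large-magnitude responses. The plain threshold stands in for the paper's trained decision tree, and the wavelet choice, scale, and threshold are all illustrative:

```python
import math

def ricker(points, scale):
    """Ricker ('Mexican hat') wavelet sampled at `points` positions for one scale."""
    norm = 2.0 / (math.sqrt(3.0 * scale) * math.pi ** 0.25)
    out = []
    for i in range(points):
        x = (i - (points - 1) / 2.0) / scale
        out.append(norm * (1 - x ** 2) * math.exp(-0.5 * x ** 2))
    return out

def cwt_row(signal, scale):
    """One row of a scalogram: correlate the signal with a ricker wavelet at one scale."""
    w = ricker(min(10 * scale, len(signal)), scale)
    half = len(w) // 2
    row = []
    for t in range(len(signal)):
        acc = 0.0
        for k, wk in enumerate(w):
            idx = t + k - half
            if 0 <= idx < len(signal):     # zero-padded boundaries
                acc += signal[idx] * wk
        row.append(acc)
    return row

def anomaly_flags(signal, scale=4, threshold=1.0):
    """Toy FDI detector: flag samples whose wavelet response magnitude is large."""
    return [abs(c) > threshold for c in cwt_row(signal, scale)]
```

A sudden injected spike in an otherwise flat PV measurement produces a strong localized response, while smooth irradiance trends do not; the real pipeline would feed the full multi-scale scalogram to the DT classifier instead of thresholding one row.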
Citations: 0
SDN-Based NFV deployment for multi-objective resource allocation in edge computing: A deep reinforcement learning for IoT workload scheduling
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-09-30 · DOI: 10.1016/j.suscom.2025.101218
Mehdi Hosseinzadeh, Amir Haider, Amir Masoud Rahmani, Farhad Soleimanian Gharehchopogh, Shakiba Rajabi, Parisa Khoshvaght, Thantrira Porntaveetus, Sang-Woong Lee
The rapid growth of Internet of Things (IoT) devices presents significant challenges, particularly regarding resource management in real-time data processing environments. Traditional cloud computing struggles with high latency and limited bandwidth, affecting user interaction and cognitive load. Edge computing mitigates these issues by decentralizing data processing and bringing resources closer to IoT devices, ultimately influencing human-computer interaction. This paper introduces a framework for resource allocation in edge computing environments, leveraging Software-Defined Networking (SDN) and Network Function Virtualization (NFV) alongside Deep Q-Network (DQN) optimization. The framework aims to enhance user experience by improving CPU, memory, and storage efficiency while reducing network delays, contributing to smoother and more efficient interaction with IoT systems. Simulated results demonstrate a 40 % improvement in CPU utilization, 30 % in memory, and 20 % in storage efficiency, which can positively impact IoT devices' perceived effectiveness and usability.
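To keep the illustration dependency-free, the sketch below replaces the paper's DQN with tabular Q-learning on a toy offloading MDP: three edge-load states and two actions (run locally or offload). The state space, reward values, and transition model are invented for illustration only:

```python
import random

def train_offload_policy(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning stand-in for a DQN: states are edge-load levels
    (0 = low, 1 = medium, 2 = high), actions are 0 = run locally, 1 = offload."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(3) for a in range(2)}

    def reward(state, action):
        # Assumed reward: offloading pays off unless the edge is already heavily loaded.
        if action == 1:
            return [1.0, 0.4, -0.5][state]
        return 0.2

    for _ in range(episodes):
        s = rng.randrange(3)
        # epsilon-greedy action selection
        a = rng.randrange(2) if rng.random() < epsilon else max((0, 1), key=lambda x: q[(s, x)])
        r = reward(s, a)
        s2 = rng.randrange(3)                # load evolves independently in this toy model
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
    return q

q = train_offload_policy()
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(3)}
```

A DQN replaces the table with a neural network so the same update rule scales to the high-dimensional CPU/memory/storage state the paper actually considers.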
Citations: 0
Energy-efficient Load Balanced Edge Computing model for IoT using FL-HMM and BOA optimization
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-09-30 · DOI: 10.1016/j.suscom.2025.101215
Xiaochang Zheng, Ruixiang Guo, Shujing Lian
Consumers will gain access to ubiquitous, low-latency computing services through the deployment of mobile edge computing (MEC) devices situated at the network's periphery in next-generation wireless networks. Taking into account the design-based constraints on radio-access coverage and CS stability, we investigate the network's latency performance, namely the latency of computation and communication. Here, we model a spatial random network with properties such as randomly dispersed nodes, parallel processing, non-orthogonal multiple access, and computing jobs generated at random. Emerging Internet of Things applications put a premium on very fast response times, and more and more of these demands are handled by edge computing systems; nevertheless, latency problems remain, such as the very tight delay bounds required by emergent traffic. In this paper, we design a Load Balanced Edge Computing (LBEC) model for the Internet of Things (IoT). The overall contribution is threefold. First, IoT devices are clustered based on load status in order to balance load in the network layer; for cluster formation, we present a K-hop neighbor approach. Next, cluster-level load balancing is achieved by maintaining cluster reformation through a Fuzzy Logic based Hidden Markov Model (FL-HMM). Finally, edge-level load balancing is attained through an offloading procedure driven by the proposed Bobcat Optimization Algorithm (BOA). Experimental results show that the proposed LBEC achieves up to 5 % better performance in each metric, including response time, offloading time, and throughput.
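The K-hop neighbor idea can be sketched as a breadth-first search bounded to k hops, plus a greedy head-selection pass. The head-selection rule below (most loaded device becomes a cluster head first) is an assumption for illustration, since the abstract does not specify it:

```python
from collections import deque

def k_hop_neighbors(adj, node, k):
    """All nodes reachable from `node` within k hops (BFS over an adjacency dict)."""
    seen, frontier = {node}, deque([(node, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue                          # do not expand past k hops
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    seen.discard(node)
    return seen

def k_hop_clusters(adj, loads, k=1):
    """Greedy clustering: the most loaded unassigned device becomes a cluster head
    and absorbs its unassigned k-hop neighbors (a toy reading of the approach)."""
    unassigned = set(adj)
    clusters = {}
    for head in sorted(adj, key=lambda n: -loads[n]):
        if head not in unassigned:
            continue
        members = {head} | (k_hop_neighbors(adj, head, k) & unassigned)
        unassigned -= members
        clusters[head] = members
    return clusters
```

The FL-HMM stage of the paper would then monitor cluster load over time and trigger re-clustering; here the clustering itself is the only part sketched.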
Citations: 0
A two-stage spatio-temporal flexibility-based energy optimization of internet data centers in active distribution networks based on robust control and transformer machine learning strategy
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-09-28 · DOI: 10.1016/j.suscom.2025.101214
Ashkan Safari, Kamran Taghizad Tavana, Mehrdad Tarafdar Hagh, Ali Esmaeel Nezhad
Internet data centers (IDCs) are critical infrastructures supporting the digital economy, necessitating a stable and resilient energy supply to ensure continuous operation and meet increasing computational demands. This study develops an advanced optimization framework that improves IDC energy efficiency by leveraging their spatio-temporal flexibility for intelligent participation in power system operations. The proposed framework uses an energy portfolio comprising combined heat and power (CHP) units, fuel cells (FCs), locally controllable generators (LCGs), and renewable energy sources (RESs) to reduce reliance on the main grid while maintaining operational efficiency. To address supply/demand uncertainties, robust optimization (RO) is applied. Furthermore, extreme gradient boosting (XGBoost) is used for feature selection and engineering, identifying the key parameters that most affect IDC behavior. These features are then fed into a Transformer-based machine learning (ML) model, which captures complex spatio-temporal dependencies and provides accurate forecasts. The predictions are then incorporated into the RO-based decision-making process to support real-time energy optimization. The proposed framework is validated on the IEEE 33-bus standard distribution network, simulating realistic IDC operation scenarios. Results show the superior performance of the proposed strategy, achieving at least a 35.3 % improvement in mean absolute error (MAE), reduced to 16.22 kWh, and a 16.7 % improvement in root mean square error (RMSE), reduced to 33.56 kWh, compared to conventional ML models. Additionally, the proposed model is evaluated on further KPIs: root mean square relative error (RMSRE = 0.35), mean square relative error (MSRE = 0.12), mean absolute relative error (MARE = 0.16), normalized RMSE (nRMSE = 0.14), and normalized MAE (nMAE = 0.08). These findings confirm the robustness and effectiveness of the proposed hybrid framework in enhancing IDC operational efficiency.
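A minimal picture of the box-uncertainty robust optimization (RO) step: hedge energy procurement against the worst-case demand in each period. The penalty-price recourse model and all the numbers are assumptions for illustration, not the paper's formulation:

```python
def robust_schedule_cost(purchase_plan, demand_nominal, demand_dev, price):
    """Worst-case cost of a procurement plan under box uncertainty: in each period
    the adversary pushes demand to the top of its interval, and any shortfall
    must be covered at an (assumed) real-time balancing premium."""
    penalty = 2.0 * price
    worst = 0.0
    for buy, d, dev in zip(purchase_plan, demand_nominal, demand_dev):
        demand = d + dev                     # worst case within [d - dev, d + dev]
        shortfall = max(0.0, demand - buy)
        worst += buy * price + shortfall * penalty
    return worst

def robust_plan(demand_nominal, demand_dev):
    """The RO-style hedge: procure for the worst-case demand in each period."""
    return [d + dev for d, dev in zip(demand_nominal, demand_dev)]
```

Tighter forecasts (smaller `demand_dev`, which is what the XGBoost + Transformer stage provides) directly shrink the worst-case box and hence the hedging cost.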
Citations: 0
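The robust optimization step summarized above can be illustrated with a box (interval) uncertainty set around the demand forecast: for a linear cost, the worst case sits at the upper demand bound. This is a minimal sketch of that idea only; the linear grid-cost model and all function names are illustrative assumptions, not the paper's formulation.

```python
def robust_dispatch_cost(price, demand_forecast, deviation, onsite_capacity):
    """Worst-case grid-purchase cost under box (interval) demand uncertainty.

    Demand in each period is known only to lie within
    [forecast - deviation, forecast + deviation]; with a linear cost,
    the robust counterpart evaluates cost at the upper demand bound.
    All arguments are per-period lists except onsite_capacity (a scalar
    stand-in for CHP/FC/LCG/RES supply available on site).
    """
    # worst-case demand realization for a cost-minimizing dispatcher
    worst_demand = [d + e for d, e in zip(demand_forecast, deviation)]
    # energy that must be bought from the main grid in each period
    grid_draw = [max(0.0, d - onsite_capacity) for d in worst_demand]
    # total worst-case purchase cost
    return sum(p * g for p, g in zip(price, grid_draw))
```

Real two-stage RO formulations co-optimize the dispatch against the worst case rather than fixing it in advance; this sketch only shows how the uncertainty set enters the cost.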
Multi-objective optimization of regional energy systems with exergy efficiency and user satisfaction dynamics
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-09-23 DOI: 10.1016/j.suscom.2025.101213
Xuecheng Wu, Qiongbing Xiong, Cizhen Yu
The evolving energy landscape is increasingly integrating diverse energy sources (electricity, gas, heat, and cooling), reflecting a strategic shift driven by smart technologies and rising renewable adoption. However, the variability of renewable supply requires enhanced flexibility in demand-side management. This study presents a novel approach to optimizing regional integrated energy systems through a two-layer closed-loop model that incorporates exergy efficiency and user satisfaction dynamics. The model addresses the limitations of traditional energy systems, which often operate within the constraints of singular energy resources and fail to fully integrate renewable energies. The proposed model optimizes energy production, conversion, transmission, and consumption using a multi-objective framework that includes economic, environmental, and exergy efficiency considerations. The proposed optimization approach significantly improves the performance of integrated energy systems: energy efficiency is enhanced by 8.36%, while exergy efficiency shows a notable increase of 1.61%. Emissions are reduced by approximately 16.3%, demonstrating the environmental benefits of the model. Though operational costs rise slightly, the trade-off favors sustainability with substantial gains in energy and environmental outcomes. The modified Multi-Objective Particle Swarm Optimization (MOPSO) algorithm outperforms traditional methods such as NSGA-II and standard PSO, achieving a higher hypervolume value, indicating better convergence and solution diversity. This makes MOPSO a robust tool for solving multi-objective optimization problems in energy management.
Citations: 0
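The hypervolume indicator used above to compare MOPSO against NSGA-II and standard PSO measures the objective-space region dominated by a Pareto front relative to a reference point; for two minimization objectives it is an area, and larger values indicate better convergence and spread. A minimal sketch, with names chosen here for illustration:

```python
def hypervolume_2d(front, ref):
    """Hypervolume (dominated area) of a 2-objective minimization front
    relative to reference point `ref`; larger is better."""
    # keep only points that strictly dominate the reference point
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:            # sweep in increasing objective 1
        if f2 < prev_f2:          # skip points dominated along objective 2
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For example, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) dominates an area of 6; adding a dominated point leaves the value unchanged, which is why the indicator is a common single-number summary of Pareto-front quality.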
An enhanced hybrid optimization model for renewable energy storage: Integrating GWO and WOA, with Lévy mechanisms
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-09-22 DOI: 10.1016/j.suscom.2025.101207
Ercan Erkalkan
This study addresses renewable-energy storage scheduling — a high-dimensional, multimodal optimization task — by proposing an enhanced Grey Wolf–Whale Optimization Algorithm (EGW–WOA). The method fuses GWO’s hierarchical leadership with WOA’s spiral exploitation and augments them with Lévy flights and progress-triggered chaotic re-initialization. Across 100 Monte-Carlo trials, EGW–WOA reduced the 24 h operating cost to 2.94×10⁵ ± 7.97×10⁴, improving over WOA by 16.62%, GA by 10.15%, FPA by 63.6%, and HS by 80.76%, with a 100% feasibility rate. It achieved the lowest dispersion (Std = 7.97×10⁴; Max–Min spread = 3.82×10⁵), shaved peak-demand charges by ≈9%, and limited depth-of-discharge swings to <35%, projecting a 12%–18% life extension. A 50-iteration run completed in 38.6 s on a 3.4 GHz CPU — over 20× faster than a comparable MILP baseline — demonstrating suitability for near-real-time PV–wind microgrid control. Within the scope of Sustainable Computing: Informatics and Systems, this work delivers a reproducible, open-source optimization engine with non-parametric statistical validation and edge-suitable runtimes, linking algorithmic advances to system-level sustainability metrics (LCOS, demand charges). The results show how algorithm–system co-design can lower operating cost and risk while preserving battery health in cyber–physical energy systems.
Citations: 0
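The Lévy-flight perturbation credited above with improving exploration is commonly generated with Mantegna's algorithm, which produces heavy-tailed step lengths from two Gaussian draws. A minimal sketch (the stability index beta = 1.5 is a conventional choice in metaheuristics, not necessarily this paper's setting):

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed step length via Mantegna's algorithm.

    Occasional long jumps from the heavy tail let a search agent escape
    local optima, which is how GWO/WOA hybrids typically use it.
    """
    # scale of the numerator Gaussian, derived from the stability index
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)   # heavy-tail numerator
    v = random.gauss(0.0, 1.0)       # stabilizing denominator
    return u / abs(v) ** (1 / beta)
```

In a position update, the step is typically scaled and added to the agent's coordinates, e.g. `x_new = x + 0.01 * levy_step() * (x - x_best)` per dimension; the 0.01 scale factor is likewise a common convention, not taken from the paper.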
Secured and effective task scheduling in cloud computing using Levy Flight - Secretary Bird Optimization and Hash-based Message Authentication Code – Secure Hash Authentication 256
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-09-21 DOI: 10.1016/j.suscom.2025.101211
Nida Kousar Gouse, Gopala Krishnan Chandra Sekaran
Dynamic computing resources are accessible through Cloud Computing (CC), which has gained popularity as a computing technology. Effective Task Scheduling (TS) is an essential aspect of CC, crucial for optimizing the distribution of tasks over available resources for high performance. Assigning tasks in cloud environments is a complex process influenced by multiple factors such as network bandwidth availability, makespan, and cost considerations. This study proposes a Hash-based Message Authentication Code – Secure Hash Authentication 256 (HMAC-SHA256) scheme and the Advanced Encryption Standard (AES) to ensure enhanced security in the task-scheduling process within the CC environment. The HMAC-SHA256 algorithm is utilized for key generation, providing integrity verification and data authentication. The AES algorithm is employed to encrypt task data, and the Levy Flight - Secretary Bird Optimization (LF-SBO) algorithm is then applied to schedule tasks optimally in the cloud. The proposed HMAC-SHA256–AES and LF-SBO algorithms require less energy than existing Particle Swarm Optimization (PSO): 121.6 J for 10 tasks, 180.48 J for 25 tasks, 310.21 J for 50 tasks, 400.15 J for 75 tasks, and 520.34 J for 100 tasks.
Citations: 0
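The HMAC-SHA256 key-generation and integrity-verification steps described above can be sketched with the Python standard library alone. The per-task key derivation and the JSON serialization of a task record shown here are illustrative assumptions, and the paper's AES encryption of task data is omitted (it would require a third-party cipher library):

```python
import hashlib
import hmac
import json

def derive_task_key(master_key: bytes, task_id: str) -> bytes:
    """Derive a per-task key from a master secret via HMAC-SHA256."""
    return hmac.new(master_key, task_id.encode(), hashlib.sha256).digest()

def tag_task(key: bytes, task: dict) -> str:
    """Integrity/authentication tag over a canonically serialized task."""
    payload = json.dumps(task, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_task(key: bytes, task: dict, tag: str) -> bool:
    """Constant-time check that the task was not tampered with."""
    return hmac.compare_digest(tag_task(key, task), tag)
```

A scheduler would tag each task description before dispatch and verify the tag on the worker node; any change to the task fields changes the tag, so tampering is detected before execution.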