Pub Date : 2025-01-06  DOI: 10.1016/j.future.2024.107691
Xiaoding Wang , Haitao Zeng , Xu Yang , Jiwu Shu , Qibin Wu , Youxiong Que , Xuechao Yang , Xun Yi , Ibrahim Khalil , Albert Y. Zomaya
Remote sensing-empowered agriculture uses remote sensing (RS) to improve agricultural production and crop management. In the agricultural sector, RS enables the retrieval of extensive data on land, vegetation, and crops, providing crucial information that helps farmers and decision-makers cultivate and manage crops with greater precision and efficiency. The combination of RS and artificial intelligence (AI) holds tremendous potential for agricultural production: with the integration of AI, remote sensing-empowered agriculture has expanded in scope, its impact has become increasingly prominent, and it is expected to have far-reaching effects on global agriculture, fostering more efficient, sustainable, and intelligent development. This review presents a concise exploration of the principles and usage of RS in agriculture, examines the role of AI in facilitating agricultural RS, summarizes applications that combine RS and AI in the field, and discusses their effects. Opportunities and challenges arising from the integration of RS and AI in agriculture are also discussed. This review aims to accelerate the entry into a new era of agriculture empowered by RS.
Title: Remote sensing revolutionizing agriculture: Toward a new frontier (Future Generation Computer Systems, vol. 166, Article 107691)
Pub Date : 2025-01-04  DOI: 10.1016/j.future.2024.107702
Fabian Mastenbroek , Tiziano De Matteis , Vincent van Beek , Alexandru Iosup
Datacenter service providers face engineering and operational challenges involving numerous risk aspects. Bad decisions can result in financial penalties, competitive disadvantage, and unsustainable environmental impact. Risk management is an integral aspect of the design and operation of modern datacenters, but frameworks that let users conveniently weigh various risk trade-offs are missing. We propose RADiCe, an open-source framework that enables data-driven analysis of IT-related operational risks in sustainable datacenters. RADiCe uses monitoring and environmental data and, via discrete event simulation, assists datacenter experts through systematic evaluation of risk scenarios, visualization, and optimization of risks. Our analyses highlight the increasing risk datacenter operators face due to surging electricity prices and sustainability pressures, and demonstrate how RADiCe can evaluate and control such risks by optimizing the topology and operational settings of the datacenter. RADiCe evaluates risk scenarios 70x–330x faster than comparable frameworks, opening possibilities for interactive risk exploration.
Title: RADiCe: A Risk Analysis Framework for Data Centers (Future Generation Computer Systems, vol. 166, Article 107702)
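The abstract does not detail how RADiCe's discrete event simulation quantifies monetary risk; a toy Monte Carlo flavor of the idea is to sample electricity price surges and report a value-at-risk style cost at a chosen quantile (the distribution, parameters, and function name below are assumptions for illustration, not RADiCe's API):

```python
import random

def energy_cost_var(base_price, volatility, annual_kwh,
                    quantile=0.95, n_runs=10_000, seed=7):
    """Sample price-surge multipliers from a normal distribution and
    return the cost at the given quantile (a value-at-risk flavor of
    risk scoring). All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    costs = sorted(base_price * max(0.0, rng.gauss(1.0, volatility)) * annual_kwh
                   for _ in range(n_runs))
    return costs[int(quantile * n_runs) - 1]
```

For instance, at a base price of 0.20 per kWh, 30% price volatility, and 1 GWh annual demand, the 95th-percentile cost lands well above the expected cost, which is exactly the gap a risk framework is meant to expose.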
Pub Date : 2025-01-02  DOI: 10.1016/j.future.2024.107701
Zhihao Xu , Chengliang Tian , Guoyan Zhang , Weizhong Tian , Lidong Han
Privacy-preserving searchable encryption allows clients to encrypt data for secure cloud storage while still supporting retrieval, preserving the privacy of the data. In this paper, we initiate the study of constructing a secure dynamic searchable symmetric encryption (DSSE) scheme in a zero-trust environment characterized by the threat model of an honest-but-curious data owner (DO) + an honest-but-curious data user (DU) + a fully malicious cloud server (CS). To tackle these challenges, we introduce a multi-user DSSE scheme that emphasizes verifiability and privacy while integrating forward security. Our contributions include: (1) employing the oblivious pseudo-random function (OPRF) protocol for secure DO-DU interactions, ensuring that the DO's keys and the DU's queried keywords remain private from each other and maintaining the secure separation of data ownership and usage; (2) utilizing a multiset hash function-based state chain to achieve forward privacy and support DO updates of encrypted cloud data with verifiable query results; and (3) proposing a novel hash-based file encryption and authentication approach to protect file privacy and verify query results. Additionally, we provide a comprehensive security analysis and an experimental evaluation demonstrating the efficacy and efficiency of our approach. These advancements enhance DSSE schemes in a zero-trust environment, addressing the critical challenges of privacy, verifiability, and operational efficiency.
Title: Forward-Secure multi-user and verifiable dynamic searchable encryption scheme within a zero-trust environment (Future Generation Computer Systems, vol. 166, Article 107701)
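The multiset hash-based state chain of contribution (2) is not spelled out in the abstract; below is a minimal sketch of the general hash-chain idea behind forward-private DSSE (a simplified Sophos/FAST-style chain, not the authors' construction). Each update derives a fresh per-keyword state from the previous one with a keyed hash, so the server can walk the chain backwards from a search token but cannot compute future states:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """SHA-256 over the concatenation of the given byte strings."""
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

ZERO = b"\x00" * 32

class Client:
    """Holds the secret key and the latest per-keyword state."""
    def __init__(self, key: bytes):
        self.key, self.state = key, {}

    def add_token(self, keyword: str, file_id: str):
        prev = self.state.get(keyword, ZERO)
        st = h(self.key, prev, keyword.encode())   # fresh state (keyed)
        self.state[keyword] = st
        addr = h(st, b"addr")                      # server-side address
        mask = h(st, b"mask")                      # one-time pad for the payload
        # payload = file id (padded) + pointer to the previous state
        payload = file_id.encode().ljust(32, b"\x00") + prev
        ct = bytes(a ^ b for a, b in zip(payload, mask * 2))
        return addr, ct

    def search_token(self, keyword: str):
        return self.state.get(keyword)

class Server:
    """Stores masked entries; walks the chain backwards on search."""
    def __init__(self):
        self.store = {}

    def put(self, addr, ct):
        self.store[addr] = ct

    def search(self, st):
        results = []
        while st is not None and st != ZERO:
            ct = self.store.get(h(st, b"addr"))
            if ct is None:
                break
            mask = h(st, b"mask")
            pt = bytes(a ^ b for a, b in zip(ct, mask * 2))
            results.append(pt[:32].rstrip(b"\x00").decode())
            st = pt[32:]                           # hop to the previous state
        return results
```

The `addr`/`mask` derivations and XOR masking are illustrative choices; a real scheme would use a PRF with a proper key schedule and authenticated encryption, and the paper additionally makes the results verifiable.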
Pub Date : 2025-01-02  DOI: 10.1016/j.future.2024.107705
Wenjia Zhao , Xu Yang , Saiyu Qi , Junzhe Wei , Xinpei Dong , Xu Yang , Yong Qi
Leveraging the recent surge of the electronic retail industry, retailer reputation has become increasingly significant in shaping consumer purchasing decisions. Despite this, existing reputation platforms remain largely centralized, enabling retailers to exert total control over reputation services, which compromises the authentic portrayal of retailers. In response, we introduce a secure blockchain-based reputation system, named BlockRep, designed explicitly for the Industrial Internet of Things (IIoT)-enabled retail industry. By eliminating the dependency on trust inherent in established E-retail platforms, BlockRep effectively resists Sybil attacks while ensuring review anonymity and authenticity, both critical security requirements of reputation systems. First, we propose a hybrid framework designed to enhance user interaction with our system: it leverages the centralized E-retail platform to facilitate trade services while relying on a blockchain platform that authenticates the legitimacy of individual reviews. The authentication process is thus anchored to the correctness of cryptographic tokens, which are subsequently deposited on the blockchain. Additionally, we introduce a novel concept, 'tax-endorsed reviews,' devised to resist Sybil attacks such as a retailer injecting fake positive reviews for itself; this necessitates a four-party collaboration protocol. Finally, the security analysis, complemented by our experimental results, showcases the security and efficiency of BlockRep.
Title: Secure blockchain-based reputation system for IIoT-enabled retail industry with resistance to sybil attack (Future Generation Computer Systems, vol. 166, Article 107705)
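The 'tax-endorsed reviews' mechanism is described only at a high level. One way to see why taxing reviews blunts Sybil attacks is a stake-weighted aggregate, sketched below (the scoring formula is an assumption for illustration, not BlockRep's protocol):

```python
def tax_weighted_reputation(reviews):
    """Aggregate ratings weighted by the tax each review paid, so a
    flood of cheap Sybil reviews has little influence. Each review is
    {'rating': float, 'tax': float}; the formula is illustrative."""
    total_tax = sum(r["tax"] for r in reviews)
    if total_tax == 0:
        return 0.0
    return sum(r["rating"] * r["tax"] for r in reviews) / total_tax
```

Ten cheap fake five-star reviews shift the score far less than one well-endorsed review, since influence scales with the tax paid rather than the review count.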
Pub Date : 2025-01-02  DOI: 10.1016/j.future.2024.107699
Danyang Liu , Yuanqing Xia , Chenggang Shan , Ke Tian , Yufeng Zhan
In the cloud-native era, Kubernetes-based workflow engines simplify the execution of containerized workflows. However, these engines face challenges in dynamic environments with continuous workflow requests and unpredictable resource demand peaks. The traditional resource allocation approach, which relies merely on current workflow load data, lacks flexibility and foresight, often leading to resource over-allocation or scarcity. To tackle these issues, we present a containerized workflow resource allocation (CWRA) scheme designed specifically for Kubernetes workflow engines. CWRA predicts future workflow tasks during the current task pod's lifecycle and employs a dynamic resource scaling strategy to manage high-concurrency scenarios effectively. The scheme includes resource discovery and allocation algorithms, which are essential components of our containerized workflow engine (CWE). Experimental results across various workflow arrival patterns indicate significant improvements over the Argo workflow engine: CWRA reduces total workflow duration by 0.9% to 11.4%, decreases average workflow duration by up to 21.5%, and increases CPU and memory utilization by 2.07% to 16.95%.
Title: A Kubernetes-based scheme for efficient resource allocation in containerized workflow (Future Generation Computer Systems, vol. 166, Article 107699)
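CWRA's prediction of future tasks is not specified in the abstract; a minimal stand-in for that foresight is a sliding-window arrival predictor that scales a CPU quota ahead of demand (class and parameter names are illustrative assumptions, not CWRA's design):

```python
from collections import deque

class ResourcePredictor:
    """Sliding-window stand-in for CWRA-style foresight: average recent
    task arrivals to predict the next interval, then provision CPU with
    headroom so pods are scaled before the peak hits."""
    def __init__(self, window=5, cpu_per_task=0.5, headroom=1.2):
        self.history = deque(maxlen=window)   # recent arrival counts
        self.cpu_per_task = cpu_per_task      # assumed cores per task
        self.headroom = headroom              # over-provisioning factor

    def observe(self, arrivals):
        self.history.append(arrivals)

    def predicted_arrivals(self):
        return sum(self.history) / len(self.history) if self.history else 0.0

    def cpu_quota(self):
        return self.predicted_arrivals() * self.cpu_per_task * self.headroom
```

The headroom factor trades the cost of over-allocation against the risk of scarcity, which is exactly the tension the abstract attributes to allocation based only on current load.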
Pub Date : 2025-01-02  DOI: 10.1016/j.future.2024.107704
Jianbo Du , Zuting Yu , Shulei Li , Bintao Hu , Yuan Gao , Xiaoli Chu
Edge caching is considered a promising technology to fulfill user equipment (UE) requirements for content services. In this paper, we explore the use of blockchain and digital twin technologies to support edge caching in a Device-to-Device (D2D) wireless network, where each UE may fetch content from its own caching buffer, from other UEs through D2D links, or from a content server. A digital twin monitors and predicts the operating status of each UE by storing crucial data such as its location, estimated processing capability, and remaining energy. To enable secure and credible trading between UEs, blockchain technology is used to supervise transactions and continually update UEs' reputation values. We formulate an optimization problem that maximizes an objective function covering content fetching performance, network lifetime, and UEs' handover costs by optimizing the content placement and fetching strategies, subject to constraints on each UE's storage capacity, the upper limit on serving other UEs, and latency requirements. To solve this complicated problem in a dynamic network environment, we propose a proximal policy optimization-based deep reinforcement learning framework. Simulation results demonstrate that the proposed algorithm converges rapidly and efficiently maximizes reward, network lifetime, and content fetching gain while minimizing handover costs.
Title: Blockchain and digital twin empowered edge caching for D2D wireless networks (Future Generation Computer Systems, vol. 166, Article 107704)
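The abstract names three objectives and a latency constraint; a DRL agent typically folds these into one scalar reward, for example (weights, penalty, and the function signature are illustrative assumptions, not the paper's values):

```python
def caching_reward(fetch_gain, network_lifetime, handover_cost,
                   latency, latency_limit,
                   w=(1.0, 0.5, 0.3), penalty=10.0):
    """Scalarized reward: reward content-fetching gain and network
    lifetime, charge for handovers, and penalize latency-constraint
    violations. Weights and penalty are assumed values."""
    r = w[0] * fetch_gain + w[1] * network_lifetime - w[2] * handover_cost
    if latency > latency_limit:          # latency requirement violated
        r -= penalty
    return r
```

A PPO agent trained against a reward of this shape learns placement and fetching actions that trade the three terms off, which matches the abstract's stated objective.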
Pub Date : 2025-01-02  DOI: 10.1016/j.future.2024.107703
Hongyi Zhang , Mingqian Liu , Yunfei Chen , Nan Zhao
The rapid development of 5G/B5G communication networks and the exponential growth of next-generation wireless devices require a more advanced and dynamic spectrum management and control architecture. Dynamic spectrum management and control based on blockchain is efficient and robust, but the cost of traditional consensus mechanisms is too high. In this paper, we propose a new spectrum management and control architecture based on blockchain and deep reinforcement learning, featuring a new energy-saving consensus mechanism called proof of hierarchy that encourages blockchain users to perform spectrum sensing and detect spectrum violations. Meanwhile, we propose a timely auction mechanism based on deep reinforcement learning for dynamic spectrum management, achieving secure, efficient, and dynamic allocation of spectrum resources. Through intelligent resource allocation and a trusted transaction mechanism, efficient spectrum management is realized, improving spectrum utilization and alleviating the shortage of spectrum resources. Simulations verify the effectiveness of the proposed architecture: we construct a spectrum management scenario and compare it with a traditional spectrum management method. The experimental results show that the proposed architecture allocates spectrum resources more efficiently and provides a better user experience.
Title: Blockchain and timely auction mechanism-based spectrum management (Future Generation Computer Systems, vol. 166, Article 107703)
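The paper's timely auction is DRL-driven; as baseline intuition, a minimal second-price (Vickrey) auction for a single spectrum block looks like this (a standard truthful mechanism, much simpler than the proposed one):

```python
def second_price_auction(bids):
    """Highest bidder wins the spectrum block but pays the
    second-highest bid; with a single bidder, they pay their own bid.
    `bids` maps bidder id -> offered price."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price
```

Second-price payment makes truthful bidding a dominant strategy, which is why auction-based spectrum allocation often starts from this design before layering on timing and learning.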
Pub Date : 2025-01-01  DOI: 10.1016/j.future.2024.107681
Hongjian Li, Junlin Li, Xiaolin Duan, Jianglin Xia
Apache Flink has become one of the most highly regarded streaming computing frameworks thanks to its high throughput, low latency, and high reliability. However, its default task scheduling policy follows the first-come-first-served principle, which fails to account for differences in energy efficiency and resource load across nodes in heterogeneous clusters and may lead to high energy consumption and uneven load distribution when executing jobs. To solve this problem, this paper proposes a two-tier coordinated load balancing and energy-saving scheduling optimization strategy. First, we construct an energy efficiency model based on Service Level Agreements (SLAs) and design an Energy-Saving Scheduling Algorithm (ESSA) on top of it, aiming to reduce the energy consumption of Flink clusters when executing jobs. ESSA jointly considers the effects of two SLA performance metrics, node response time and throughput, on node energy consumption, as well as the differences in energy efficiency across nodes in heterogeneous clusters. Second, to address the load imbalance that Flink's default scheduling policy may cause, an Energy-Aware Two-Tier Coordinated Load Balancing algorithm (TTCLB-EA) is proposed, which optimizes cluster load at both the inter-node and intra-node levels by assigning tasks according to energy-efficiency priorities. Experimental results show that, compared with the default scheduling strategy, the round-robin scheduling strategy, and St-Stream, the proposed algorithm improves load balancing by about 14.59%, 12.75%, and 7.32%, while saving about 14.52%, 10.54%, and 7.58% in energy consumption, respectively. The proposed algorithms not only enhance the performance of the Flink cluster but also help reduce energy consumption and achieve more efficient resource utilization.
Title: Energy-aware scheduling and two-tier coordinated load balancing for streaming applications in apache flink (Future Generation Computer Systems, vol. 166, Article 107681)
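The energy-efficiency-priority placement in TTCLB-EA can be pictured as a greedy inter-node step: among nodes with spare capacity, prefer the best throughput per watt, breaking ties toward lighter load (field names and the scoring are illustrative assumptions, not the paper's algorithm):

```python
def pick_node(nodes, task_cpu):
    """Greedy inter-node placement sketch: filter nodes that can fit
    the task, then prefer the highest energy efficiency (throughput
    per watt), breaking ties toward the lowest current load."""
    feasible = [n for n in nodes if n["cpu_free"] >= task_cpu]
    if not feasible:
        return None
    best = max(feasible,
               key=lambda n: (n["throughput"] / n["power"], -n["load"]))
    return best["name"]
```

In a heterogeneous cluster this steers work toward the nodes that deliver the most throughput per unit of energy, which is the intuition behind pairing load balancing with an energy model.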
Pub Date : 2025-01-01  DOI: 10.1016/j.future.2024.107700
Changzhen Zhang, Jun Yang
Satellite Edge Computing (SEC) can provide task computation services to terrestrial users, particularly in areas lacking terrestrial network coverage. With the increasing frequency of computational demands from Internet of Things (IoT) devices and the limited, dynamic nature of computational resources on Low Earth Orbit (LEO) satellites, making effective real-time scheduling decisions in dynamic environments that ensure a high task success rate is a critical challenge. In this work, we investigate dynamic task scheduling for SEC based on Genetic Programming Hyper-Heuristics (GPHH). First, a new problem model for the dynamic task scheduling of SEC is proposed with the objective of improving the task success rate, taking real-world conditions into account (the limited and dynamic nature of satellite resources, and the randomness of and differences between tasks). Second, to make efficient real-time routing and queuing decisions during the dynamic scheduling process, a novel scheduling heuristic with a routing rule and a queuing rule is developed, considering dynamic features of the SEC system such as real-time load, energy consumption, and remaining deadlines. Third, to automatically learn both rules and improve the performance of the algorithm, Multi-Tree Genetic Programming with Elite Recombination (MTGPER) is proposed, which recombines excellent rules to obtain better scheduling heuristics. The experimental results show that the proposed MTGPER significantly outperforms existing state-of-the-art methods, and the scheduling heuristic it evolves has good interpretability, which facilitates scheduling management in engineering practice.
{"title":"Multi-Tree Genetic Programming with Elite Recombination for dynamic task scheduling of satellite edge computing","authors":"Changzhen Zhang, Jun Yang","doi":"10.1016/j.future.2024.107700","DOIUrl":"10.1016/j.future.2024.107700","url":null,"abstract":"<div><div>Satellite Edge Computing (SEC) can provide task computation services to terrestrial users, particularly in areas lacking terrestrial network coverage. With the increasing frequency of computational demands from Internet of Things (IoT) devices and the limited and dynamic nature of computational resources in Low Earth Orbit (LEO) satellites, making effective real-time scheduling decisions in dynamic environments to ensure high task success rate is a critical challenge. In this work, we investigate the dynamic task scheduling of SEC based on Genetic Programming Hyper-Heuristic (GPHH). Firstly, a new problem model for the dynamic task scheduling of SEC is proposed with the objective of improving the task success rate, where the real-world situations (limited and dynamic nature of satellite resources, randomness and difference of tasks) are taken into account. Secondly, to make efficient real-time routing decision and queuing decision during the dynamic scheduling process, a novel scheduling heuristic with routing rule and queuing rule is developed, considering dynamic features of the SEC system such as real-time load, energy consumption, and remaining deadlines. Thirdly, to automatically learn both routing rule and queuing rule, and improve the performance of the algorithm, a Multi-Tree Genetic Programming with Elite Recombination (MTGPER) is proposed, which exploits the recombination of the excellent rules to obtain the better scheduling heuristics. The experimental results show that the proposed MTGPER significantly outperforms existing state-of-the-art methods. 
The scheduling heuristic evolved by MTGPER has quite good interpretability, which facilitates scheduling management in engineering practice.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"166 ","pages":"Article 107700"},"PeriodicalIF":6.2,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142968059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-01DOI: 10.1016/j.future.2024.107680
José Santos , Mattia Zaccarini , Filippo Poltronieri , Mauro Tortonesi , Cesare Stefanelli , Nicola Di Cicco , Filip De Turck
With the advent of containerization technologies, microservices have revolutionized application deployment by converting old monolithic software into a group of loosely coupled containers, aiming to offer greater flexibility and improve operational efficiency. This transition has made applications more complex, often comprising tens to hundreds of microservices. Designing effective orchestration mechanisms remains a crucial challenge, especially for emerging distributed cloud paradigms such as the Compute Continuum (CC). Orchestration across multiple clusters is still not extensively explored in the literature since most works consider single-cluster scenarios. In the CC scenario, the orchestrator must decide the optimal locations for each microservice, deciding whether instances are deployed altogether or placed across different clusters, significantly increasing orchestration complexity. This paper addresses orchestration in a containerized CC environment by studying a Reinforcement Learning (RL) approach for efficient microservice deployment in Kubernetes (K8s) clusters, a widely adopted container orchestration platform. This work demonstrates the effectiveness of RL in achieving near-optimal deployment schemes under dynamic conditions, where network latency and resource capacity fluctuate. We extensively evaluate a multi-objective reward function that aims to minimize overall latency, reduce deployment costs, and promote fair distribution of microservice instances, and we compare it against typical heuristic-based approaches. The results from an implemented OpenAI Gym framework, named HephaestusForge, show that RL algorithms achieve minimal rejection rates (as low as 0.002%, 90x lower than the baseline Karmada scheduler). Cost-aware strategies result in lower deployment costs (2.5 units), and latency-aware functions achieve lower latency (268–290 ms), improving by 1.5x and 1.3x, respectively, over the best-performing baselines.
HephaestusForge is available in a public open-source repository, allowing researchers to validate their own placement algorithms. This study also highlights the adaptability of the DeepSets (DS) neural network in optimizing microservice placement across diverse multi-cluster setups without retraining. The DS neural network can handle inputs and outputs as arbitrarily sized sets, enabling the RL algorithm to learn a policy not bound to a fixed number of clusters.
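The DeepSets property described above can be sketched in a few lines: a score over a variable-sized set of clusters is computed as rho(sum(phi(x))), which makes the output invariant to cluster ordering and independent of the number of clusters. The weights, feature layout, and network sizes below are illustrative assumptions, not the paper's model.

```python
# Hypothetical DeepSets-style scorer over a set of cluster feature vectors.
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(3, 8))    # per-cluster encoder phi (3 features -> 8)
W_rho = rng.normal(size=(8, 1))    # readout rho on the pooled embedding

def deepsets_score(clusters):
    # clusters: (n_clusters, 3) array, e.g. [latency, free_cpu, cost]
    h = np.tanh(clusters @ W_phi)  # phi applied independently to each element
    pooled = h.sum(axis=0)         # permutation-invariant sum pooling
    return float(pooled @ W_rho)   # rho maps the set embedding to a score

x = rng.normal(size=(5, 3))
s1 = deepsets_score(x)
s2 = deepsets_score(x[::-1])       # same set of clusters, different order
```

Because pooling is a sum, `s1` and `s2` agree up to floating-point rounding, and the same weights accept a 3-cluster or 30-cluster input, which is what lets an RL policy built on such an encoder transfer across multi-cluster setups without retraining.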
{"title":"HephaestusForge: Optimal microservice deployment across the Compute Continuum via Reinforcement Learning","authors":"José Santos , Mattia Zaccarini , Filippo Poltronieri , Mauro Tortonesi , Cesare Stefanelli , Nicola Di Cicco , Filip De Turck","doi":"10.1016/j.future.2024.107680","DOIUrl":"10.1016/j.future.2024.107680","url":null,"abstract":"<div><div>With the advent of containerization technologies, microservices have revolutionized application deployment by converting old monolithic software into a group of loosely coupled containers, aiming to offer greater flexibility and improve operational efficiency. This transition made applications more complex, consisting of tens to hundreds of microservices. Designing effective orchestration mechanisms remains a crucial challenge, especially for emerging distributed cloud paradigms such as the Compute Continuum (CC). Orchestration across multiple clusters is still not extensively explored in the literature since most works consider single-cluster scenarios. In the CC scenario, the orchestrator must decide the optimal locations for each microservice, deciding whether instances are deployed altogether or placed across different clusters, significantly increasing orchestration complexity. This paper addresses orchestration in a containerized CC environment by studying a Reinforcement Learning (RL) approach for efficient microservice deployment in Kubernetes (K8s) clusters, a widely adopted container orchestration platform. This work demonstrates the effectiveness of RL in achieving near-optimal deployment schemes under dynamic conditions, where network latency and resource capacity fluctuate. We extensively evaluate a multi-objective reward function that aims to minimize overall latency, reduce deployment costs, and promote fair distribution of microservice instances, and we compare it against typical heuristic-based approaches. 
The results from an implemented OpenAI Gym framework, named as <em>HephaestusForge</em>, show that RL algorithms achieve minimal rejection rates (as low as 0.002%, 90x less than the baseline Karmada scheduler). Cost-aware strategies result in lower deployment costs (2.5 units), and latency-aware functions achieve lower latency (268–290 ms), improving by 1.5x and 1.3x, respectively, over the best-performing baselines. <em>HephaestusForge</em> is available in a public open-source repository, allowing researchers to validate their own placement algorithms. This study also highlights the adaptability of the DeepSets (DS) neural network in optimizing microservice placement across diverse multi-cluster setups without retraining. The DS neural network can handle inputs and outputs as arbitrarily sized sets, enabling the RL algorithm to learn a policy not bound to a fixed number of clusters.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"166 ","pages":"Article 107680"},"PeriodicalIF":6.2,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142968060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}