Pub Date: 2026-06-01 | Epub Date: 2025-12-30 | DOI: 10.1016/j.future.2025.108356
Ruinan Ma, Zuobin Ying, Wenjuan Li, Dehua Zhu, Wanlei Zhou, Yu-An Tan, Hongyi Liu
With deep learning-based object detectors widely deployed as visual components in Industrial Internet of Things (IIoT) devices like cameras, their adversarial robustness has become paramount to the security and resilience of hyperconnected industrial systems. Existing adversarial defenses are often inadequate for the complexities of object detection, and securing already deployed detectors with a lightweight defense that avoids costly retraining remains a major challenge. In this paper, we propose XAIAD-YOLO: Explainable AI-Guided Adversarial Defense for YOLO detectors, a novel test-time defense to enable resilient YOLO detectors. XAIAD-YOLO introduces a synergistic two-stage purification framework grounded in distinct theoretical principles. Its initial stage, based on signal processing principles, filters high-frequency adversarial noise from genuine image structures. The second stage performs targeted feature destabilization; guided by our efficient XAI saliency map and grounded in the principle of differential feature stability, it precisely neutralizes fragile adversarial artifacts. Experiments show that our XAI method achieves 66.08 FPS (1.56x faster than Grad-CAM++), and our defense method significantly improves adversarial robustness, making anchor-based, anchor-free, lightweight, and non-lightweight YOLO detectors more resilient in both white-box and black-box scenarios. By uniquely integrating explainability into the defense mechanism, XAIAD-YOLO provides a practical and effective solution for enhancing the resilience and trustworthiness of AI in critical industrial applications. Our source code and datasets are available at https://anonymous.4open.science/r/XAIAD-YOLO-B0A3/.
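The stage-one idea, separating high-frequency adversarial noise from genuine image structure, can be sketched as a frequency-domain low-pass filter. This is an illustrative stand-in under assumed parameters (the cutoff and the toy image are made up), not the paper's actual filter design:

```python
import numpy as np

def lowpass_purify(img, cutoff=0.5):
    """Keep only low radial frequencies in each channel; a simple
    illustrative stand-in for a stage-one purification filter.
    `cutoff` is an assumed fraction of the full frequency range."""
    h, w = img.shape[:2]
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff * 0.5          # fftfreq spans [-0.5, 0.5)
    out = np.empty_like(img, dtype=float)
    for c in range(img.shape[2]):
        spec = np.fft.fftshift(np.fft.fft2(img[:, :, c]))
        out[:, :, c] = np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))
    return np.clip(out, 0.0, 1.0)

# A smooth image plus high-frequency noise: filtering should move the
# noisy image back toward the clean one.
rng = np.random.default_rng(0)
y = np.arange(32)
clean = np.tile((0.5 + 0.4 * np.cos(2 * np.pi * y / 32))[:, None, None], (1, 32, 3))
noisy = np.clip(clean + 0.1 * rng.standard_normal(clean.shape), 0.0, 1.0)
err_before = float(np.abs(noisy - clean).mean())
err_after = float(np.abs(lowpass_purify(noisy) - clean).mean())
```

Because the smooth structure lives at low frequencies while the injected noise is broadband, most of the noise energy falls outside the mask and is removed.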
Title: Explainable AI-guided test-time adversarial defense for resilient YOLO detectors in Industrial Internet of Things. Future Generation Computer Systems, vol. 179, Article 108356.
Pub Date: 2026-06-01 | Epub Date: 2025-12-30 | DOI: 10.1016/j.future.2025.108340
Hongjian Li, Shuheng Wang, Gangfan Tan, Xiaolin Duan
With the rapid growth of data volume and increasing real-time processing requirements, stream processing systems face challenges of execution inefficiency and excessive resource consumption. Apache Storm employs a simplistic round-robin scheduling strategy by default, neglecting node heterogeneity, task topology, and varying traffic patterns, leading to performance degradation and resource wastage. To address these limitations, this paper proposes two novel scheduling strategies: a resource-cost and topology-aware distributed method (MMO-Stream) and a resource-aware cooperative strategy (D-Storm). MMO-Stream integrates a cost-effective Quality-of-Service (QoS) model with a meta-heuristic-based multi-criteria optimization algorithm to optimize resource consumption, latency, and throughput simultaneously. D-Storm utilizes historical performance data and resource-awareness mechanisms to dynamically optimize task reallocation, mitigating performance deterioration from frequent rescheduling. Experimental results show MMO-Stream achieves cost-effective QoS (C-QoS) improvements of 41.7% and 39.5%, and latency reductions of 23.9% and 15.8%, compared to Storm’s default scheduling and Ts-Stream, respectively. D-Storm reduces latency by 23.9% and 37.5% compared to default and Ts-Stream strategies, significantly outperforming MMO-Stream. The proposed methods effectively enhance Storm’s scheduling performance and resource efficiency.
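The contrast between Storm's default round-robin placement and a resource-aware strategy can be illustrated with a toy greedy scheduler. Node speeds and task loads below are invented for illustration; neither MMO-Stream's QoS model nor D-Storm's cooperative mechanism is shown:

```python
# Toy comparison: round-robin placement (node heterogeneity ignored) vs. a
# resource-aware greedy placement that assigns each task to the node where
# it finishes earliest. All numbers are made up for illustration.
def round_robin(tasks, n_nodes):
    bins = [[] for _ in range(n_nodes)]
    for i, t in enumerate(tasks):
        bins[i % n_nodes].append(t)
    return bins

def resource_aware(tasks, speeds):
    # Greedy list scheduling: largest tasks first, each placed on the
    # node with the earliest completion time given its queued work.
    bins = [[] for _ in speeds]
    work = [0.0] * len(speeds)
    for t in sorted(tasks, reverse=True):
        i = min(range(len(speeds)), key=lambda j: (work[j] + t) / speeds[j])
        bins[i].append(t)
        work[i] += t
    return bins

def makespan(bins, speeds):
    return max(sum(b) / s for b, s in zip(bins, speeds))

tasks = [5, 3, 8, 2, 7, 4, 6, 1]
speeds = [1.0, 2.0, 4.0]           # heterogeneous node capacities
rr = makespan(round_robin(tasks, 3), speeds)
ra = makespan(resource_aware(tasks, speeds), speeds)
```

Round-robin loads the slowest node as heavily as the fastest, so its makespan is dominated by the slow node; the resource-aware variant shifts work toward faster nodes.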
Title: Cost-efficient and topology-aware scheduling algorithms in distributed stream computing systems. Future Generation Computer Systems, vol. 179, Article 108340.
Pub Date: 2026-06-01 | Epub Date: 2025-12-27 | DOI: 10.1016/j.future.2025.108355
Chennian Xiong, Weiwei Lin, Huikang Huang, Jianpeng Lin, Keqin Li
Cloud computing and virtualization technologies have significantly improved resource utilization in data centers. However, performance interference caused by resource contention remains a major challenge, particularly for compute-intensive batch applications, which are vital for large-scale data processing and task scheduling. Addressing performance interference in the modeling and scheduling of such applications still requires improvement. Existing interference models often rely on stereotypical metrics and average values, ignoring the impact of temporal fluctuations, while conventional scheduling algorithms overlook interference dynamics, leading to suboptimal scheduling results. To overcome these limitations, this article investigates the key factors influencing the performance of compute-intensive workloads and introduces a novel performance interference model that incorporates temporal fluctuations. Furthermore, we propose a historical-data-driven scheduling method that accounts for both temporal dynamics and batch application interference characteristics. Experimental results demonstrate that the proposed performance interference model achieves higher accuracy and robustness against overfitting compared to existing models that neglect temporal variations. Additionally, our interference-aware scheduling algorithm significantly outperforms traditional methods in throughput, scheduling efficiency, and server load balancing, providing an effective solution to mitigate performance interference in cloud environments.
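The modeling point, that temporal fluctuation carries signal that averages alone miss, can be sketched with synthetic data: a slowdown that depends on both the mean and the standard deviation of a contention metric. The linear model and all numbers are illustrative assumptions, not the paper's interference model:

```python
import numpy as np

# Synthetic illustration: interference "slowdown" depends on the average
# level AND the burstiness (std over time) of a contention metric.
rng = np.random.default_rng(1)
n = 200
base = rng.random((n, 1))                    # per-workload baseline load
amp = rng.random((n, 1))                     # per-workload burstiness
traces = base + amp * rng.random((n, 50))    # metric sampled over 50 intervals
mean, std = traces.mean(axis=1), traces.std(axis=1)
slowdown = 1.0 + 2.0 * mean + 3.0 * std + 0.01 * rng.standard_normal(n)

def rmse(features):
    """Least-squares fit of slowdown on the given features; returns RMSE."""
    X = np.column_stack([np.ones(n)] + features)
    coef, *_ = np.linalg.lstsq(X, slowdown, rcond=None)
    return float(np.sqrt(np.mean((slowdown - X @ coef) ** 2)))

rmse_mean_only = rmse([mean])        # averages only, fluctuation ignored
rmse_with_std = rmse([mean, std])    # fluctuation-aware model
```

A model fed only the average systematically misattributes the burstiness term and fits worse, which mirrors the paper's criticism of average-value interference models.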
Title: Interference modeling and scheduling for compute-intensive batch applications. Future Generation Computer Systems, vol. 179, Article 108355.
Pub Date: 2026-06-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.future.2025.108329
Helong Wang, Changchang Che, Haiyuan Xu
Traffic forecasting is fundamental to intelligent transportation systems. However, existing traffic prediction models struggle to balance modeling ability and computational efficiency. Complex graph-based or attention-based models effectively capture spatio-temporal dependencies but incur high computational costs that hinder practical deployment. To address this, we propose a linear decomposition network incorporating multi-mode spatial embedding. This embedding strategy replaces traditional graph convolution or attention mechanisms by adaptively learning distinct traffic patterns to capture dynamic spatial dependencies. The network utilizes linear blocks to decompose time series into periodic and residual terms for separate modeling. A gating mechanism subsequently fuses these components to generate predictions. Additionally, we introduce PEMS06, a new dataset reflecting recent traffic characteristics. Extensive experiments on five datasets prove our model achieves superior performance and efficiency, as well as strong generalization ability.
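The periodic/residual split can be sketched by folding a series over a known period and averaging: the fold gives the periodic term, and what remains is the residual the network would model separately. The period length and toy series are assumptions; the paper's learned linear blocks are not reproduced here:

```python
import numpy as np

def decompose(series, period):
    """Split a series into a periodic term (average profile over `period`)
    and a residual term. A hand-rolled sketch of series decomposition,
    not the paper's learned linear blocks."""
    k = len(series) // period
    folded = series[: k * period].reshape(k, period)
    periodic = np.tile(folded.mean(axis=0), k)
    residual = series[: k * period] - periodic
    return periodic, residual

# Toy traffic series: three identical "days" at 5-min resolution (288 steps).
t = np.arange(288)
daily = 50 + 20 * np.sin(2 * np.pi * t / 288)
series = np.concatenate([daily, daily, daily])
periodic, residual = decompose(series, 288)
```

For a perfectly periodic input the residual vanishes; on real traffic the residual carries the non-recurring dynamics (incidents, weather) that the second branch of the model handles.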
Title: MSE-LDN: Linear decomposition networks under multi-mode spatial embedding for traffic prediction. Future Generation Computer Systems, vol. 179, Article 108329.
Pub Date: 2026-06-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.future.2025.108311
Mohsen Seyedkazemi Ardebili, Andrea Acquaviva, Luca Benini, Andrea Bartolini
In the era of digital transformation, datacenters and High Performance Computing (HPC) systems have emerged as the backbone of global technology infrastructure, powering essential services across various industries, including finance and healthcare. Therefore, ensuring the uninterrupted service of these datacenters has become a critical challenge. Thermal anomalies pose a significant risk to datacenter operation, potentially leading to hardware deterioration, system downtime, and catastrophic failures. This threat is exacerbated by the growing number of datacenters, increased power density, and heat waves fostered by global warming. Detecting thermal anomalies in datacenters involves several challenges. Large-scale data collection is difficult, requiring diverse monitoring signals from thousands of nodes over long periods. The absence of labeled data complicates the identification of normal and abnormal states. Establishing accurate classification thresholds to minimize false positives and negatives is another significant hurdle. Traditional statistical methods often fail to capture temporal dependencies and complex correlations in monitoring signals. Additionally, finding anomalies at both the system and subsystem levels adds to the complexity. Deploying machine learning models in production environments presents technical and operational challenges, making real-time anomaly detection a demanding task. This paper introduces ThermADNet, a Thermal Anomaly Detection framework that combines statistical rule-based methods with Deep Neural Network (DNN) techniques to detect thermal anomalies in datacenters. ThermADNet utilizes a semi-supervised learning approach by training on a “semi-normal” dataset, addressing the challenges of large-scale data collection, semi-normal dataset identification, and classification threshold establishment. 
This framework’s efficacy is validated by its success in identifying real physical thermal failure events within a Tier-0 datacenter, pinpointing anomalies at both the system and subsystem levels, including compute nodes and datacenter infrastructure. In the critical evaluation window covering the July 28 failure, ThermADNet achieves precision and recall up to 0.97, with F1-scores as high as 0.97. By providing detailed information about anomalies, the framework clarifies the characteristics and reasoning behind the DNN outputs, thereby building trust in the AI model and ensuring that users can understand and rely on the system’s decisions. By offering a sophisticated method for thermal anomaly detection, ThermADNet significantly contributes to enhancing datacenter reliability and efficiency. This advancement supports the uninterrupted operation of critical HPC systems, averting considerable economic and societal losses.
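The threshold-setting challenge the abstract describes can be sketched with a robust cut-off fitted on mostly-normal data. The z-score stands in for ThermADNet's DNN score, and all temperatures and the 6-sigma cut-off are invented for illustration:

```python
import numpy as np

# Sketch: derive a classification threshold from a "semi-normal" training
# set (mostly normal readings) using robust statistics, then flag new
# readings that exceed it. Numbers are illustrative only.
rng = np.random.default_rng(7)
train_temps = rng.normal(55.0, 2.0, 5000)        # mostly-normal node temps (degC)
med = np.median(train_temps)
mad = np.median(np.abs(train_temps - med))       # median absolute deviation
threshold = med + 6.0 * 1.4826 * mad             # ~6-sigma equivalent cut-off

new_readings = np.array([54.2, 56.9, 83.0, 55.5])  # 83 degC: clear thermal anomaly
flags = new_readings > threshold
```

Using the median and MAD instead of mean and std keeps the threshold stable even if the training set contains a few undetected anomalies, which is exactly the "semi-normal" setting.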
Title: Elevating Datacenter Resilience with ThermADNet: A Thermal Anomaly Detection System. Future Generation Computer Systems, vol. 179, Article 108311.
Pub Date: 2026-06-01 | Epub Date: 2025-12-18 | DOI: 10.1016/j.future.2025.108326
Tian Wang, Jianfei Chen, Liying Li, Wei Shen, Lei Zhou, Linli Xu, Junlong Zhou
With the popularization of cloud computing, more and more cloud providers charge for cloud services based on the performance of computing resource provisioning. For cloud service providers, maximizing profit by focusing on multicore-based multiserver systems is a perennial goal. However, existing research on multiserver systems that maximize service profit either limits itself to optimizing the multiserver configuration while neglecting the schedulability of cloud service requests or focuses on cloud service scheduling while ignoring the dynamic scalability of the multiserver. Furthermore, the potential impact of transient faults on service processing presents a significant opportunity for improving cloud profitability, an area that has received less attention in profit-oriented research. Therefore, it is necessary to design a collaborative optimization method for cloud service scheduling and multiserver configuration, specifically targeting soft real-time cloud service requests, to fill the gap in existing works. In this work, we first model cloud service scheduling and multiserver configuration as a profit maximization problem that is a mixed integer nonlinear optimization. Then, we propose a depth-based grey wolf optimizer to solve our formulated problem. Finally, extensive experiments are conducted to validate the effectiveness of our proposed method. The empirical results demonstrate that our method achieves an average increase of 7.04% in service profits compared to six benchmark methods.
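For readers unfamiliar with the metaheuristic this work builds on, a minimal continuous Grey Wolf Optimizer on a toy sphere objective is sketched below. This is the standard alpha/beta/delta-guided GWO; the paper's depth-based variant and its scheduling/configuration encoding are not reproduced:

```python
import numpy as np

def gwo(obj, dim, n_wolves=20, iters=200, lo=-5.0, hi=5.0, seed=3):
    """Standard Grey Wolf Optimizer: the three fittest wolves (alpha,
    beta, delta) guide the pack; coefficient `a` decays from 2 to 0 to
    shift from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for it in range(iters):
        fitness = np.apply_along_axis(obj, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 - 2.0 * it / iters
        for i in range(n_wolves):
            guided = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                guided.append(leader - A * np.abs(C * leader - X[i]))
            X[i] = np.clip(np.mean(guided, axis=0), lo, hi)
    fitness = np.apply_along_axis(obj, 1, X)
    return X[np.argmin(fitness)], float(fitness.min())

# Toy objective: sphere function, minimum 0 at the origin.
best, best_val = gwo(lambda x: float(np.sum(x ** 2)), dim=5)
```

Applying this to the paper's problem would additionally require encoding discrete scheduling and multiserver-configuration decisions into the wolf positions, which is where the proposed depth-based extension comes in.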
Title: DGWOSC: A depth-based grey wolf optimizer for reliability aware soft real-time service scheduling and multiserver configuration. Future Generation Computer Systems, vol. 179, Article 108326.
Qubit allocation is a central step in adapting abstract quantum circuits to noisy intermediate-scale quantum devices, yet exact approaches for solving it face severe scalability limitations. In this work, we revisit the formulation of qubit allocation as a permutation-based quadratic assignment problem and develop a branch-and-bound algorithm for its exact resolution. We first establish a refined sequential implementation that achieves significantly faster runtimes than previous exact approaches on most problem instances, thereby setting a new state-of-the-art for this formulation. Building on this foundation, we extend the approach to a performance-aware parallel implementation that exploits both intra-node and inter-node parallelism on High-Performance Computing (HPC) infrastructures. Our experimental evaluation demonstrates near-linear strong scaling at the intra-node level and substantial scalability in distributed settings across nodes. Leveraging these capabilities, we provide reference optimal solutions for challenging benchmark circuits of up to 26 qubits—significantly larger than previously reported instances. These results show that large-scale parallelization can effectively extend the reach of exact methods for qubit allocation, thereby advancing the integration of combinatorial optimization and HPC techniques in quantum computing.
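The QAP view of qubit allocation can be made concrete with a toy branch-and-bound: F holds logical-qubit interaction counts, D physical coupling distances, and we search permutations while pruning on the partial-assignment cost. The pruning bound here is deliberately weak (cost of the assigned part only, valid because all entries are non-negative); the paper's bounds and parallelization are far more sophisticated:

```python
import itertools
import numpy as np

def qap_bnb(F, D):
    """Exact QAP solver by branch-and-bound over permutations.
    Prunes a partial assignment whenever its cost already reaches the
    incumbent, since non-negative F and D mean cost only grows."""
    n = len(F)
    best = [float("inf"), None]

    def partial_cost(perm):
        k = len(perm)
        return sum(F[i][j] * D[perm[i]][perm[j]]
                   for i in range(k) for j in range(k))

    def branch(perm, used):
        cost = partial_cost(perm)
        if cost >= best[0]:                  # prune this subtree
            return
        if len(perm) == n:
            best[0], best[1] = cost, tuple(perm)
            return
        for loc in range(n):
            if loc not in used:
                branch(perm + [loc], used | {loc})

    branch([], set())
    return best[0], best[1]

# Toy 4-qubit instance: interaction counts F, physical line distances D.
F = np.array([[0, 3, 0, 1], [3, 0, 2, 0], [0, 2, 0, 4], [1, 0, 4, 0]])
D = np.array([[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]])
cost, perm = qap_bnb(F, D)
```

Even on this 4-qubit toy the pruning skips most of the 4! permutations; the paper's contribution is making this style of search tractable at 26 qubits via tighter bounds and HPC-scale parallelism.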
Title: Efficient and scalable branch-and-bound algorithm for exact qubit allocation. Authors: Jean-Philippe Valois, Guillaume Helbecque, Nouredine Melab. Pub Date: 2026-06-01 | DOI: 10.1016/j.future.2025.108342. Future Generation Computer Systems, vol. 179, Article 108342.
Pub Date: 2026-06-01 | Epub Date: 2025-12-23 | DOI: 10.1016/j.future.2025.108339
Tarek Othmani, Sadok Ben Yahia, Antonio Lalaguna
Due to the growing challenges of urban populations, including mobility requirements, this paper addresses the phenomenon of traffic congestion in urban environments by employing a Model Context Protocol-Based Agentic ReAct Large Language Model for Adaptive Traffic Signals (MARLATS) framework based on adaptive Traffic Management, Reinforcement Learning (RL), and Large Language Models (LLMs). The framework assessed energy consumption, emissions measures, traffic performance, and economic performance. Various types of vehicles and practical trip scenarios were incorporated into the MARLATS framework of Luxembourg City to support traffic control in urban areas. The study findings revealed an 89% cut in average travel time, a 96% drop in average waiting time, a 74% gain in average speed, and a remarkable 50% reduction in fuel consumption and emission abatement (CO, CO2, NOx, PM, NMVOC), at the cost of a 6.9% increase in noise pollution; MARLATS also halved operating costs, from €14.14/h to €7.05/h. Compared with leading RL/DRL/LLM studies, MARLATS outperforms them by 34% to 73%. These results position MARLATS as a turnkey, rapid-payback pathway to net-zero, congestion-free cities. Despite these good results, MARLATS suffers from some limitations that need to be considered in future work, such as reducing noise emissions, mixing vehicle fleets with battery electric and plug-in hybrid vehicles, quantifying V2X infrastructure costs, and providing cybersecurity analysis for efficient and safer data transfer.
{"title":"Model context protocol-based agentic react large language model for adaptive traffic signals: Luxembourg case study","authors":"Tarek Othmani , Sadok Ben Yahia , Antonio Lalaguna","doi":"10.1016/j.future.2025.108339","DOIUrl":"10.1016/j.future.2025.108339","url":null,"abstract":"<div><div>Driven by growing urban pressures, including rising mobility demands, this paper addresses traffic congestion in urban environments with a <strong>M</strong>odel Context Protocol-Based <strong>A</strong>gentic <strong>R</strong>eAct <strong>L</strong>arge Language Model for <strong>A</strong>daptive <strong>T</strong>raffic <strong>S</strong>ignals (<strong>MARLATS</strong>) framework that combines adaptive traffic management, Reinforcement Learning (RL), and Large Language Models (LLMs). The framework assessed energy consumption, emissions, traffic performance, and economic performance. Various vehicle types and realistic trip scenarios were incorporated into the <strong>MARLATS</strong> framework for Luxembourg City to support urban traffic control. The study findings revealed an 89% cut in average travel time, a 96% drop in average waiting time, a 74% gain in average speed, and a remarkable 50% reduction in fuel consumption together with emission abatement (CO, CO<sub>2</sub>, NO<sub><em>x</em></sub>, PM, NMVOC); noise pollution rose by 6.9%, yet <strong>MARLATS</strong> halved operating costs from 14.14€/h to 7.05€/h. Compared with leading RL/DRL/LLM studies, <strong>MARLATS</strong> outperforms by 34% to 73%. These results position <strong>MARLATS</strong> as a turnkey, rapid-payback pathway to net-zero, congestion-free cities. 
Despite these strong results, <strong>MARLATS</strong> suffers from limitations that need to be addressed in future work, such as reducing noise emissions, mixing vehicle fleets such as battery electric and plug-in hybrid vehicles, quantifying V2X infrastructure costs, and providing cybersecurity analysis for efficient and safer data transfer.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"179 ","pages":"Article 108339"},"PeriodicalIF":6.2,"publicationDate":"2026-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145823161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
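The MARLATS abstract describes an agentic ReAct LLM deciding traffic-signal actions; the paper's actual protocol and tool interfaces are not given here, but the general thought–action–observation loop behind ReAct agents can be sketched as follows (all function, tool, and phase names below are hypothetical illustrations, not the authors' API):

```python
# Hypothetical sketch of a ReAct-style control loop for one traffic-signal agent.
# The LLM alternates reasoning ("thought") with tool calls ("action") until it
# commits to a signal phase via the terminal "set_phase" action.
def react_signal_loop(llm, tools, observation, max_steps=5):
    transcript = f"Observation: {observation}"
    for _ in range(max_steps):
        thought, action, arg = llm(transcript)
        if action == "set_phase":          # terminal action: actuate the signal
            return arg
        result = tools[action](arg)        # e.g. query detector queue lengths
        transcript += f"\nThought: {thought}\nAction: {action}[{arg}]\nObservation: {result}"
    return "keep_current_phase"            # fail-safe if no decision is reached

# Stub LLM and tool, for illustration only.
def stub_llm(transcript):
    if "NS=12" in transcript:
        return ("NS queue is longest", "set_phase", "NS_green")
    return ("need queue data", "get_queues", "junction_1")

tools = {"get_queues": lambda junction: "NS=12, EW=3"}
phase = react_signal_loop(stub_llm, tools, "rush hour at junction_1")
```

The stub first requests queue data, then commits to a phase once the observation contains queue lengths; a real deployment would replace the stub with an LLM call and SUMO/detector-backed tools.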
Pub Date : 2026-06-01Epub Date: 2025-12-23DOI: 10.1016/j.future.2025.108320
Changjin Zhao, Xiang Feng, Huiqun Yu
In the field of distributed agent communication, privacy protection has always been a core concern. With ongoing advances in privacy-preserving technologies, integrating these techniques into distributed reinforcement learning has become a prevailing trend. However, the key challenge lies in safeguarding privacy while ensuring that model learning efficiency remains unaffected. To tackle this concern, a privacy-preserving framework named Zero-Knowledge proof for Distributed Reinforcement Learning (ZKDRL) is proposed. This framework equips each agent with strict differential privacy and integrates a privacy-aware receiver at the Learner end to mitigate the impact of noise on model aggregation. Additionally, zero-knowledge proof techniques are incorporated to ensure communication security and integrity within the distributed system, thereby verifying information authenticity without revealing any additional details. Implementation of ZKDRL on the open-source Surreal framework shows that, compared to baseline methods, the approach enhances data privacy by at least 21.9 % while increasing the model’s average cumulative reward by 9.5 %. Consequently, the model’s performance loss remains confined to an acceptable range, which confirms the framework’s practical applicability in distributed reinforcement learning.
{"title":"A privacy protection mechanism in distributed reinforcement learning using zero-knowledge proof","authors":"Changjin Zhao, Xiang Feng, Huiqun Yu","doi":"10.1016/j.future.2025.108320","DOIUrl":"10.1016/j.future.2025.108320","url":null,"abstract":"<div><div>In the field of distributed agent communication, privacy protection has always been a core concern. With ongoing advances in privacy-preserving technologies, integrating these techniques into distributed reinforcement learning has become a prevailing trend. However, the key challenge lies in safeguarding privacy while ensuring that model learning efficiency remains unaffected. To tackle this concern, a privacy-preserving framework named Zero-Knowledge proof for Distributed Reinforcement Learning (ZKDRL) is proposed. This framework equips each agent with strict differential privacy and integrates a privacy-aware receiver at the Learner end to mitigate the impact of noise on model aggregation. Additionally, zero-knowledge proof techniques are incorporated to ensure communication security and integrity within the distributed system, thereby verifying information authenticity without revealing any additional details. Implementation of ZKDRL on the open-source Surreal framework shows that, compared to baseline methods, the approach enhances data privacy by at least 21.9 % while increasing the model’s average cumulative reward by 9.5 %. 
Consequently, the model’s performance loss remains confined to an acceptable range, which confirms the framework’s practical applicability in distributed reinforcement learning.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"179 ","pages":"Article 108320"},"PeriodicalIF":6.2,"publicationDate":"2026-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145823165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
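The ZKDRL abstract pairs per-agent differential privacy with zero-knowledge verification; the paper's exact mechanism is not reproduced here, but the standard Laplace mechanism underlying per-agent noise addition can be sketched as follows (the sensitivity and epsilon values are illustrative assumptions):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. Exponential(scale) draws is Laplace(0, scale);
    # 1.0 - random.random() lies in (0, 1], so both logarithms are defined.
    return scale * (math.log(1.0 - random.random()) - math.log(1.0 - random.random()))

def privatize(values, sensitivity: float, epsilon: float):
    """Add Laplace noise with scale sensitivity/epsilon to each value --
    the classic mechanism for epsilon-differential privacy of an L1-bounded query."""
    scale = sensitivity / epsilon
    return [v + laplace_noise(scale) for v in values]

random.seed(0)
grads = [0.5, -1.2, 0.3]  # e.g. one agent's clipped gradient, before transmission
noisy = privatize(grads, sensitivity=1.0, epsilon=2.0)
```

A smaller epsilon yields larger noise (stronger privacy); the privacy-aware receiver described in the abstract would then compensate for this noise during aggregation.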
Pub Date : 2026-06-01Epub Date: 2026-01-01DOI: 10.1016/j.future.2025.108357
Cristina Alcaraz, Hector Guzman, Javier Lopez
A Digital Twin (DT) is a cutting-edge technology that has gained relevance in recent years, demonstrating huge potential for simulating processes and providing valuable insights to improve and optimise systems. Leveraging a high degree of fidelity in replicating real-world processes, DTs are being explored for advanced applications such as deception and proactive protection of critical infrastructures. However, this same advantage also raises exposure concerns, as the detailed digital representation may introduce new cybersecurity risks. To support the growth of this technology, this paper presents an adaptive DT solution that facilitates the configuration of particular components of the digital system, tailoring different application scenarios specifically for protection, deception, and testing purposes. Finally, the proposed architecture is tested on a specific IoT-oriented use case to validate the proposed solution, experiment with it, and draw conclusions.
{"title":"Adaptive Digital Twin: Protection, deception, and testing","authors":"Cristina Alcaraz, Hector Guzman, Javier Lopez","doi":"10.1016/j.future.2025.108357","DOIUrl":"10.1016/j.future.2025.108357","url":null,"abstract":"<div><div>A Digital Twin (DT) is a cutting-edge technology that has gained relevance in recent years, demonstrating huge potential for the simulation of processes and the provision of valuable insights to improve and optimise systems. Leveraging a high degree of fidelity in replicating real-world processes, DTs are being explored for advanced applications such as deception and proactive protection of critical infrastructures. However, this same advantage also raises concerns with respect to a system’s exposure, as the detailed digital representation may introduce new cybersecurity risks. With the aim of assisting the growth of this technology, this paper presents an adaptive DT solution that facilitates the configuration of particular components of the digital system, tailoring different application scenarios specifically for protection, deception, and testing purposes. Finally, the proposed architecture is tested under a specific IoT-oriented use case to validate, experiment, and extract conclusions of the proposed solution.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"179 ","pages":"Article 108357"},"PeriodicalIF":6.2,"publicationDate":"2026-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145893750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
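The adaptive-DT idea above, selecting which components of the digital system to activate per application scenario, can be illustrated with a minimal configuration sketch (the three mode names follow the abstract; the component names are hypothetical, not the paper's architecture):

```python
from dataclasses import dataclass, field

# Component sets per application scenario; names are illustrative only.
MODES = {
    "protection": ("state_replication", "anomaly_detection"),
    "deception":  ("state_replication", "decoy_services"),
    "testing":    ("state_replication", "fault_injection"),
}

@dataclass
class AdaptiveTwin:
    mode: str
    components: list = field(default_factory=list)

    def __post_init__(self):
        # Reject unknown scenarios early rather than running a misconfigured twin.
        if self.mode not in MODES:
            raise ValueError(f"unknown DT mode: {self.mode!r}")
        self.components = list(MODES[self.mode])

twin = AdaptiveTwin("deception")
```

The point of the sketch is only that one twin codebase can be reconfigured per scenario, rather than building a separate twin for protection, deception, and testing.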