
Latest articles — Future Generation Computer Systems: The International Journal of eScience

Explainable AI-guided test-time adversarial defense for resilient YOLO detectors in Industrial Internet of Things
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-30 | DOI: 10.1016/j.future.2025.108356
Ruinan Ma, Zuobin Ying, Wenjuan Li, Dehua Zhu, Wanlei Zhou, Yu-An Tan, Hongyi Liu
With deep learning-based object detectors widely deployed as visual components in Industrial Internet of Things (IIoT) devices like cameras, their adversarial robustness has become paramount to the security and resilience of hyperconnected industrial systems. Existing adversarial defenses are often inadequate for the complexities of object detection, and securing already deployed detectors with a lightweight defense that avoids costly retraining remains a major challenge. In this paper, we propose XAIAD-YOLO: Explainable AI-Guided Adversarial Defense for YOLO detectors, a novel test-time defense to enable resilient YOLO detectors. XAIAD-YOLO introduces a synergistic two-stage purification framework grounded in distinct theoretical principles. Its initial stage, based on signal processing principles, filters high-frequency adversarial noise from genuine image structures. The second stage performs targeted feature destabilization; guided by our efficient XAI saliency map and grounded in the principle of differential feature stability, it precisely neutralizes fragile adversarial artifacts. Experiments show that our XAI method achieves 66.08 FPS (1.56x faster than Grad-CAM++), and our defense method significantly improves adversarial robustness, making anchor-based, anchor-free, lightweight, and non-lightweight YOLO detectors more resilient in both white-box and black-box scenarios. By uniquely integrating explainability into the defense mechanism, XAIAD-YOLO provides a practical and effective solution for enhancing the resilience and trustworthiness of AI in critical industrial applications. Our source code and datasets are available at https://anonymous.4open.science/r/XAIAD-YOLO-B0A3/.
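The first-stage idea, separating high-frequency adversarial noise from genuine image structure on signal-processing grounds, can be illustrated with a frequency-domain low-pass filter. This is a minimal sketch of the general technique, not the paper's exact filter; the `keep_ratio` cutoff and the hard rectangular mask are illustrative assumptions:

```python
import numpy as np

def lowpass_purify(img: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    """Suppress high-frequency content by zeroing FFT coefficients
    outside a central low-frequency window."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * keep_ratio), int(w * keep_ratio)
    mask = np.zeros_like(f)
    mask[cy - ry:cy + ry, cx - rx:cx + rx] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# A constant image is pure DC energy, so low-pass filtering preserves it,
# while high-frequency perturbations riding on top of it would be removed.
img = np.ones((32, 32))
out = lowpass_purify(img)
```

A test-time defense of this kind needs no retraining: it is applied to each input image before the detector sees it.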
Future Generation Computer Systems, Volume 179, Article 108356.
Citations: 0
Cost-efficient and topology-aware scheduling algorithms in distributed stream computing systems
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-30 | DOI: 10.1016/j.future.2025.108340
Hongjian Li, Shuheng Wang, Gangfan Tan, Xiaolin Duan
With the rapid growth of data volume and increasing real-time processing requirements, stream processing systems face challenges of execution inefficiency and excessive resource consumption. Apache Storm employs a simplistic round-robin scheduling strategy by default, neglecting node heterogeneity, task topology, and varying traffic patterns, leading to performance degradation and resource wastage. To address these limitations, this paper proposes two novel scheduling strategies: a resource-cost and topology-aware distributed method (MMO-Stream) and a resource-aware cooperative strategy (D-Storm). MMO-Stream integrates a cost-effective Quality-of-Service (QoS) model with a meta-heuristic-based multi-criteria optimization algorithm to optimize resource consumption, latency, and throughput simultaneously. D-Storm utilizes historical performance data and resource-awareness mechanisms to dynamically optimize task reallocation, mitigating performance deterioration from frequent rescheduling. Experimental results show MMO-Stream achieves cost-effective QoS (C-QoS) improvements of 41.7% and 39.5%, and latency reductions of 23.9% and 15.8%, compared to Storm’s default scheduling and Ts-Stream, respectively. D-Storm reduces latency by 23.9% and 37.5% compared to default and Ts-Stream strategies, significantly outperforming MMO-Stream. The proposed methods effectively enhance Storm’s scheduling performance and resource efficiency.
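The gap between Storm's default round-robin placement and a resource-aware strategy can be seen with a toy makespan comparison. The task costs and node count below are invented for illustration and do not reproduce MMO-Stream or D-Storm:

```python
def round_robin(tasks, n_nodes):
    """Storm-style default placement: task i goes to node i mod n,
    ignoring task cost and current node load."""
    load = [0.0] * n_nodes
    for i, cost in enumerate(tasks):
        load[i % n_nodes] += cost
    return load

def least_loaded(tasks, n_nodes):
    """Greedy resource-aware placement: heaviest task first,
    always onto the currently lightest node."""
    load = [0.0] * n_nodes
    for cost in sorted(tasks, reverse=True):
        load[load.index(min(load))] += cost
    return load

tasks = [8.0, 1.0, 7.0, 1.0, 6.0, 1.0]   # skewed per-task costs
rr = round_robin(tasks, 2)               # node 0 absorbs all heavy tasks
ll = least_loaded(tasks, 2)              # load spread across both nodes
```

With round-robin, one node ends up carrying 21 of 24 cost units; the load-aware placement caps the heaviest node at 13, which is the kind of imbalance the proposed schedulers target (alongside latency and topology awareness).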
Future Generation Computer Systems, Volume 179, Article 108340.
Citations: 0
Interference modeling and scheduling for compute-intensive batch applications
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-27 | DOI: 10.1016/j.future.2025.108355
Chennian Xiong, Weiwei Lin, Huikang Huang, Jianpeng Lin, Keqin Li
Cloud computing and virtualization technologies have significantly improved resource utilization in data centers. However, performance interference caused by resource contention remains a major challenge, particularly for compute-intensive batch applications, which are vital for large-scale data processing and task scheduling. How performance interference is handled in the modeling and scheduling of such applications still leaves room for improvement. Existing interference models often rely on coarse metrics and average values, ignoring the impact of temporal fluctuations, while conventional scheduling algorithms overlook interference dynamics, leading to suboptimal scheduling results. To overcome these limitations, this article investigates the key factors influencing the performance of compute-intensive workloads and introduces a novel performance interference model that incorporates temporal fluctuations. Furthermore, we propose a historical-data-driven scheduling method that accounts for both temporal dynamics and batch application interference characteristics. Experimental results demonstrate that the proposed performance interference model achieves higher accuracy and robustness against overfitting compared to existing models that neglect temporal variations.
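The argument against average-value interference models can be made concrete: two utilization traces with identical means can exert very different pressure on co-located workloads. Below is a toy score that also penalizes temporal fluctuation; the additive mean-plus-deviation form and the weight `beta` are illustrative assumptions, not the model proposed in the paper:

```python
import statistics

def interference_score(util_trace, beta=1.0):
    """Pressure estimate for co-location decisions: the mean captures
    steady load, the population stdev captures temporal fluctuation."""
    return statistics.mean(util_trace) + beta * statistics.pstdev(util_trace)

steady = [0.5] * 8        # constant 50% utilization
bursty = [0.1, 0.9] * 4   # identical mean, heavy fluctuation

s_steady = interference_score(steady)
s_bursty = interference_score(bursty)
```

A mean-only model scores both traces 0.5 and would co-locate with either; the fluctuation-aware score flags the bursty trace as the riskier neighbor.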
Future Generation Computer Systems, Volume 179, Article 108355.
Citations: 0
MSE-LDN: Linear decomposition networks under multi-mode spatial embedding for traffic prediction
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.future.2025.108329
Helong Wang, Changchang Che, Haiyuan Xu
Traffic forecasting is fundamental to intelligent transportation systems. However, existing traffic prediction models struggle to balance modeling ability and computational efficiency. Complex graph-based or attention-based models effectively capture spatio-temporal dependencies but incur high computational costs that hinder practical deployment. To address this, we propose a linear decomposition network incorporating multi-mode spatial embedding. This embedding strategy replaces traditional graph convolution or attention mechanisms by adaptively learning distinct traffic patterns to capture dynamic spatial dependencies. The network utilizes linear blocks to decompose time series into periodic and residual terms for separate modeling. A gating mechanism subsequently fuses these components to generate predictions. Additionally, we introduce PEMS06, a new dataset reflecting recent traffic characteristics. Extensive experiments on five datasets prove our model achieves superior performance and efficiency, as well as strong generalization ability.
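The two building blocks described above, splitting a series into periodic and residual terms and gating the component predictions back together, can be sketched as follows. The per-phase-mean periodic estimate and the scalar gate are simplifying assumptions, not MSE-LDN's learned linear blocks:

```python
import numpy as np

def decompose(x: np.ndarray, period: int):
    """Split a series into a periodic term (mean of each phase across
    cycles) and the residual left after removing it."""
    phases = np.arange(len(x)) % period
    periodic = np.array([x[phases == p].mean() for p in range(period)])[phases]
    return periodic, x - periodic

def gated_fuse(pred_periodic, pred_residual, gate):
    """Convex gate blending the two component forecasts."""
    return gate * pred_periodic + (1.0 - gate) * pred_residual

t = np.arange(48)
x = np.sin(2 * np.pi * t / 12) + 0.01 * t   # seasonal signal plus slow trend
periodic, residual = decompose(x, period=12)
recon = periodic + residual                  # decomposition is lossless
fused = gated_fuse(np.ones(4), np.zeros(4), gate=0.25)
```

Because the two terms sum exactly back to the input, each can be modeled by a cheap linear predictor without losing information, which is the efficiency argument behind linear decomposition networks.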
Future Generation Computer Systems, Volume 179, Article 108329.
Citations: 0
Elevating Datacenter Resilience with ThermADNet: A Thermal Anomaly Detection System
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.future.2025.108311
Mohsen Seyedkazemi Ardebili, Andrea Acquaviva, Luca Benini, Andrea Bartolini
In the era of digital transformation, datacenters and High Performance Computing (HPC) Systems have emerged as the backbone of global technology infrastructure, powering essential services across various industries, including finance and healthcare. Therefore, ensuring the uninterrupted service of these datacenters has become a critical challenge. Thermal anomalies pose a significant risk to datacenter operation, potentially leading to hardware deterioration, system downtime, and catastrophic failures. This threat is exacerbated by the growing number of datacenters, increased power density, and heat waves fostered by global warming. Detecting thermal anomalies in datacenters involves several challenges. Large-scale data collection is difficult, requiring diverse monitoring signals from thousands of nodes over long periods. The absence of labeled data complicates the identification of normal and abnormal states. Establishing accurate classification thresholds to minimize false positives and negatives is another significant hurdle. Traditional statistical methods often fail to capture temporal dependencies and complex correlations in monitoring signals. Additionally, finding anomalies at both the system and subsystem levels adds to the complexity. Deploying machine learning models in production environments presents technical and operational challenges, making real-time anomaly detection a demanding task. This paper introduces ThermADNet, a Thermal Anomaly Detection framework that combines statistical rules-based methods with Deep Neural Network (DNN) techniques for thermal anomaly detection in datacenters. ThermADNet utilizes a semi-supervised learning approach by training on a “semi-normal” dataset, addressing the challenges of large-scale data collection, semi-normal dataset identification, and classification threshold establishment. 
This framework’s efficacy is validated by its success in identifying real physical thermal failure events within a Tier-0 datacenter, pinpointing anomalies at both the system and subsystem levels, including compute nodes and datacenter infrastructure. In the critical evaluation window covering the July 28 failure, ThermADNet achieves precision and recall up to 0.97, with F1-scores as high as 0.97. By providing detailed information about anomalies, the framework clarifies the characteristics and reasoning behind the DNN outputs, thereby building trust in the AI model and ensuring that users can understand and rely on the system’s decisions. By offering a sophisticated method for thermal anomaly detection, ThermADNet significantly contributes to enhancing datacenter reliability and efficiency. This advancement supports the uninterrupted operation of critical HPC systems, averting considerable economic and societal losses.
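The statistical-rules side of such a detector can be sketched with its simplest possible instance: learning a mean-plus-k-sigma alarm threshold from "semi-normal" temperature data. The threshold form and the sample values are illustrative assumptions, not ThermADNet's actual rules or DNN component:

```python
import statistics

def fit_threshold(train_temps, k=3.0):
    """Learn a mean + k*sigma alarm threshold from (semi-)normal data,
    sidestepping the need for labeled anomalies."""
    return statistics.mean(train_temps) + k * statistics.pstdev(train_temps)

train = [40.0, 41.0, 39.0, 40.5, 39.5, 40.0]   # node temperatures, deg C
threshold = fit_threshold(train)

readings = [40.2, 41.0, 55.0]
alarms = [t for t in readings if t > threshold]
```

Only the clearly hot reading trips the alarm; ordinary variation stays below the learned threshold, which is how the threshold choice controls false positives versus false negatives.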
Future Generation Computer Systems, Volume 179, Article 108311.
Citations: 0
DGWOSC: A depth-based grey wolf optimizer for reliability aware soft real-time service scheduling and multiserver configuration
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-18 | DOI: 10.1016/j.future.2025.108326
Tian Wang, Jianfei Chen, Liying Li, Wei Shen, Lei Zhou, Linli Xu, Junlong Zhou
With the popularization of cloud computing, more and more cloud providers charge for cloud services based on the performance of computing resource provisioning. For cloud service providers, maximizing profit by focusing on multicore-based multiserver systems is a perennial goal. However, existing research on multiserver systems that maximize the service profit either limits itself to optimizing the multiserver configuration while neglecting the schedulability of cloud service requests or focuses on cloud service scheduling while ignoring the dynamic scalability of the multiserver. Furthermore, the potential impact of transient faults on service processing presents a significant opportunity for improving cloud profitability, an area that has received less attention in profit-oriented research. Therefore, it is necessary to design a collaborative optimization method for cloud service scheduling and multiserver configuration, specifically targeting soft real-time cloud service requests, to fill the gap in existing works. In this work, we first model cloud service scheduling and multiserver configuration as a profit maximization problem that is a mixed integer nonlinear optimization. Then, we propose a depth-based grey wolf optimizer to solve our formulated problem. Finally, extensive experiments are conducted to validate the effectiveness of our proposed method. The empirical results demonstrate that our method achieves an average increase of 7.04 % in service profits compared to six benchmark methods.
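For readers unfamiliar with the base metaheuristic, a canonical grey wolf optimizer (without the depth-based extensions of DGWOSC) looks roughly like this; the sphere objective and all parameter choices are illustrative:

```python
import random

def gwo(obj, dim, bounds, n_wolves=10, iters=100, seed=0):
    """Canonical grey wolf optimizer minimizing obj over a box: the pack
    is steered by its three best members (alpha, beta, delta) with an
    exploration coefficient that decays to zero over the run."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for it in range(iters):
        wolves.sort(key=obj)
        leaders = [w[:] for w in wolves[:3]]   # copy so updates don't shift targets
        a = 2.0 * (1.0 - it / iters)           # decays 2 -> 0: explore, then exploit
        for w in wolves:
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    A = a * (2.0 * rng.random() - 1.0)
                    C = 2.0 * rng.random()
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, x / 3.0))   # average pull, clipped to box
    return min(wolves, key=obj)

sphere = lambda v: sum(x * x for x in v)
best = gwo(sphere, dim=2, bounds=(-5.0, 5.0))
```

In the scheduling setting, the decision vector would encode the service-to-server assignment and multiserver configuration, and `obj` the (negated) service profit under the reliability constraints.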
Future Generation Computer Systems, Volume 179, Article 108326.
Citations: 0
Efficient and scalable branch-and-bound algorithm for exact qubit allocation
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-24 | DOI: 10.1016/j.future.2025.108342
Jean-Philippe Valois, Guillaume Helbecque, Nouredine Melab
Qubit allocation is a central step in adapting abstract quantum circuits to noisy intermediate-scale quantum devices, yet exact approaches for solving it face severe scalability limitations. In this work, we revisit the formulation of qubit allocation as a permutation-based quadratic assignment problem and develop a branch-and-bound algorithm for its exact resolution. We first establish a refined sequential implementation that achieves significantly faster runtimes than previous exact approaches on most problem instances, thereby setting a new state-of-the-art for this formulation. Building on this foundation, we extend the approach to a performance-aware parallel implementation that exploits both intra-node and inter-node parallelism on High-Performance Computing (HPC) infrastructures. Our experimental evaluation demonstrates near-linear strong scaling at the intra-node level and substantial scalability in distributed settings across nodes. Leveraging these capabilities, we provide reference optimal solutions for challenging benchmark circuits of up to 26 qubits—significantly larger than previously reported instances. These results show that large-scale parallelization can effectively extend the reach of exact methods for qubit allocation, thereby advancing the integration of combinatorial optimization and HPC techniques in quantum computing.
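The branch-and-bound pattern over permutations can be sketched on a simplified linear-assignment cost model (the paper's actual objective is quadratic); the cost matrix and the cheapest-remaining-slot lower bound are illustrative assumptions:

```python
def branch_and_bound(cost):
    """Exact search over permutations: map logical qubit i to physical
    slot perm[i], pruning branches whose lower bound cannot beat the
    incumbent. Bound: each unassigned row takes its cheapest free slot."""
    n = len(cost)
    best = {"perm": None, "cost": float("inf")}

    def lower_bound(i, used, acc):
        return acc + sum(min(cost[r][c] for c in range(n) if c not in used)
                         for r in range(i, n))

    def search(i, used, perm, acc):
        if i == n:
            if acc < best["cost"]:
                best["cost"], best["perm"] = acc, perm[:]
            return
        if lower_bound(i, used, acc) >= best["cost"]:
            return   # prune: this subtree cannot improve the incumbent
        for c in range(n):
            if c not in used:
                search(i + 1, used | {c}, perm + [c], acc + cost[i][c])

    search(0, frozenset(), [], 0)
    return best["perm"], best["cost"]

cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
perm, total = branch_and_bound(cost)
```

Depth-first search with an admissible bound guarantees optimality, and independent subtrees are exactly what the paper's parallel implementation distributes across cores and nodes.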
Model context protocol-based agentic ReAct large language model for adaptive traffic signals: Luxembourg case study
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-23 | DOI: 10.1016/j.future.2025.108339
Tarek Othmani , Sadok Ben Yahia , Antonio Lalaguna
Motivated by the growing pressures of urban populations, including mobility demands, this paper addresses traffic congestion in urban environments with a Model Context Protocol-Based Agentic ReAct Large Language Model for Adaptive Traffic Signals (MARLATS) framework, which combines adaptive traffic management, Reinforcement Learning (RL), and Large Language Models (LLMs). The framework assessed energy consumption, emissions, traffic performance, and economic performance. Various vehicle types and practical trip scenarios were incorporated into the MARLATS framework for Luxembourg City to support traffic control in urban areas. The study findings revealed an 89% cut in average travel time, a 96% drop in average waiting time, a 74% gain in average speed, and a remarkable 50% reduction in fuel consumption and emissions (CO, CO2, NOx, PM, NMVOC), while noise pollution increased by 6.9%; MARLATS also halved operating costs, from 14.14 €/h to 7.05 €/h. Compared with leading RL/DRL/LLM studies, MARLATS outperforms by 34% to 73%. These results position MARLATS as a turnkey, rapid-payback pathway to net-zero, congestion-free cities. Despite the good results, MARLATS suffers from some limitations that need to be considered in future work, such as reducing noise emissions, handling mixed vehicle fleets such as battery electric and plug-in hybrid vehicles, quantifying V2X infrastructure costs, and providing cybersecurity analysis for efficient and safer data transfer.
A privacy protection mechanism in distributed reinforcement learning using zero-knowledge proof
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-23 | DOI: 10.1016/j.future.2025.108320
Changjin Zhao, Xiang Feng, Huiqun Yu
In the field of distributed agent communication, privacy protection has always been a core concern. With ongoing advances in privacy-preserving technologies, integrating these techniques into distributed reinforcement learning has become a prevailing trend. However, the key challenge lies in safeguarding privacy while ensuring that model learning efficiency remains unaffected. To tackle this concern, a privacy-preserving framework named Zero-Knowledge proof for Distributed Reinforcement Learning (ZKDRL) is proposed. This framework equips each agent with strict differential privacy and integrates a privacy-aware receiver at the Learner end to mitigate the impact of noise on model aggregation. Additionally, zero-knowledge proof techniques are incorporated to ensure communication security and integrity within the distributed system, thereby verifying information authenticity without revealing any additional details. Implementation of ZKDRL on the open-source Surreal framework shows that, compared to baseline methods, the approach enhances data privacy by at least 21.9 % while increasing the model’s average cumulative reward by 9.5 %. Consequently, the model’s performance loss remains confined to an acceptable range, which confirms the framework’s practical applicability in distributed reinforcement learning.
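The per-agent differential-privacy step described above is typically realized by clipping each update and adding calibrated noise before transmission. The sketch below shows the standard Gaussian mechanism for a single release with L2 sensitivity equal to the clipping norm; it is a generic illustration of that building block, not the paper's ZKDRL implementation, and the zero-knowledge proof layer is not shown.

```python
import numpy as np

def gaussian_mechanism(gradient, clip_norm, epsilon, delta, rng=None):
    """Clip a gradient vector and add calibrated Gaussian noise.

    Generic sketch of the mechanism an agent could apply to its update
    before sending it to the learner. With L2 sensitivity bounded by
    clip_norm, the standard calibration for (epsilon, delta)-DP on a
    single release is sigma = clip_norm * sqrt(2 ln(1.25/delta)) / epsilon.
    """
    if rng is None:
        rng = np.random.default_rng()
    g = np.asarray(gradient, dtype=float)
    norm = np.linalg.norm(g)
    if norm > clip_norm:
        g = g * (clip_norm / norm)  # bound the L2 sensitivity
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return g + rng.normal(0.0, sigma, size=g.shape)
```

This is also where the tension mentioned in the abstract arises: smaller epsilon means larger sigma, so a privacy-aware aggregator on the learner side is needed to keep the added noise from degrading the aggregated policy update.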
Adaptive Digital Twin: Protection, deception, and testing
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2026-01-01 | DOI: 10.1016/j.future.2025.108357
Cristina Alcaraz, Hector Guzman, Javier Lopez
A Digital Twin (DT) is a cutting-edge technology that has gained relevance in recent years, demonstrating huge potential for the simulation of processes and the provision of valuable insights to improve and optimise systems. Leveraging a high degree of fidelity in replicating real-world processes, DTs are being explored for advanced applications such as deception and proactive protection of critical infrastructures. However, this same advantage also raises concerns with respect to a system’s exposure, as the detailed digital representation may introduce new cybersecurity risks. With the aim of assisting the growth of this technology, this paper presents an adaptive DT solution that facilitates the configuration of particular components of the digital system, tailoring different application scenarios specifically for protection, deception, and testing purposes. Finally, the proposed architecture is tested under a specific IoT-oriented use case to validate, experiment, and extract conclusions of the proposed solution.
Journal: Future Generation Computer Systems-The International Journal of Escience