
Future Generation Computer Systems-The International Journal of Escience: Latest Publications

PSCD: A privacy-preserving framework for structural constraint mitigation in deep neural networks on encrypted distributed datasets
IF 6.2 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-07-01 Epub Date: 2026-01-20 DOI: 10.1016/j.future.2026.108390
Yuhao Zhang , Weiwei Zhao , Changhui Hu
The proliferation of deep neural networks (DNNs) drives the need for collaborative data processing across distributed nodes in next-generation systems. This mode poses a potential threat to distributed data privacy, necessitating the development of more reliable privacy-preserving machine learning (PPML) solutions. Functional encryption (FE) provides a new paradigm for PPML due to its unique advantages. Unfortunately, privacy requirements in existing FE-based schemes impose a priori constraints on permissible neural architectures, highlighting a fundamental tension with model expressiveness. To close this gap, we design a privacy-preserving DNN framework (PSCD) based on FE, mitigating structural constraints on the model by integrating three independent modules. Specifically, we first design a secure aggregation module (SAM) with FE to ensure the confidentiality of local data uploads. Then, we introduce the FM Sketch to build a query control module (QCM) that limits the number of times ciphertext vectors can be queried by the cloud server. Finally, we develop a privacy-preserving training mechanism (PPTM), which incorporates Dropout to flexibly adjust the network structure and simultaneously enhance the robustness of the model. Formal security analysis proves that PSCD can withstand semi-honest attacks and collusion attacks. Experiments on real-world datasets demonstrate that PSCD achieves at least a 48.5% improvement in operational efficiency and a 38.9% reduction in communication overhead compared to benchmark PPML schemes, while maintaining model accuracy comparable to that of a plaintext DNN.
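The query control module relies on the FM (Flajolet-Martin) sketch to bound how often the cloud server may query ciphertext vectors. As a rough illustration of the underlying primitive only (this is not the paper's PSCD construction; the class, hash choice, and budget check below are illustrative assumptions), an FM sketch estimates the number of distinct items seen, which a server-side guard could compare against a query budget:

```python
import hashlib

def _rho(x: int, bits: int = 32) -> int:
    """Position of the least-significant 1-bit (1-indexed); 0 maps to `bits`."""
    if x == 0:
        return bits
    r = 1
    while x & 1 == 0:
        x >>= 1
        r += 1
    return r

class FMSketch:
    """Flajolet-Martin sketch: estimates the number of distinct items seen
    in O(1) memory, so duplicate queries do not inflate the count."""
    PHI = 0.77351  # standard FM correction factor

    def __init__(self) -> None:
        self.max_rho = 0

    def add(self, item: str) -> None:
        # Hash to a 32-bit value and track the deepest trailing-zero pattern.
        h = int(hashlib.sha256(item.encode()).hexdigest(), 16) & 0xFFFFFFFF
        self.max_rho = max(self.max_rho, _rho(h))

    def estimate(self) -> float:
        return (2 ** self.max_rho) / self.PHI
```

A query controller in the spirit of QCM could then refuse further decryptions once `estimate()` exceeds a preset budget; a single sketch has high variance, so practical deployments average several independent sketches.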
Citations: 0
Knowledge distillation-based Multi-Optimization intrusion detection system
IF 6.2 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-07-01 Epub Date: 2025-12-31 DOI: 10.1016/j.future.2025.108296
Haofan Wang , Farah Kandah
Network attacks have expanded in scope, increased in frequency, and evolved in many ways in recent years. Internet of Things (IoT) devices, due to their limited computational resources, massive deployment, direct exposure to the public Internet, and lack of maintenance, face an even more severe threat landscape. Numerous lightweight methods have been proposed, but they all rely on single-perspective optimizations, making it difficult to achieve an optimal balance between performance and computational resource consumption. In this work, we propose a Knowledge Distillation-based Multi-Optimization Intrusion Detection System (KDMO-IDS) that reduces resource consumption at the feature, sample, and model levels. At the feature level, we compute the Analysis of Variance (ANOVA) F-value for each feature to rank them and determine the optimal subset. At the sample level, we use MiniBatchKMeans with Medoid clustering to compress data under preset ratios. At the model level, we combine knowledge distillation with attention transfer so that a compact student model retains the performance of its teacher, further optimized by block operator fusion, pruning, and early stopping. We conduct extensive ablation studies to validate the contribution of each component. Experiments on the WUSTL-IIoT and X-IIoTID datasets show that our proposed KDMO-IDS delivers superior performance and exhibits strong lightweight characteristics and generalizability compared to existing baseline models, making it well-suited for seamless integration into edge-cloud and distributed computing environments and providing a scalable security solution for next-generation high-performance IoT systems.
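The feature-level step ranks features by their one-way ANOVA F-value. A minimal NumPy sketch of that scoring (equivalent in spirit to scikit-learn's `f_classif`; the function name and toy data are our own illustration, not the paper's code):

```python
import numpy as np

def anova_f_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """One-way ANOVA F-value per feature: ratio of between-class to
    within-class variance. A larger F means the feature separates classes better."""
    classes = np.unique(y)
    n = len(y)
    grand_mean = X.mean(axis=0)
    ssb = np.zeros(X.shape[1])  # between-class sum of squares
    ssw = np.zeros(X.shape[1])  # within-class sum of squares
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        ssb += len(Xc) * (mc - grand_mean) ** 2
        ssw += ((Xc - mc) ** 2).sum(axis=0)
    return (ssb / (len(classes) - 1)) / (ssw / (n - len(classes)))

# Toy data: only feature 0 carries class signal, so it should rank first.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 10))
X[:, 0] += 3.0 * y
scores = anova_f_scores(X, y)
top_features = np.argsort(scores)[::-1]  # feature indices by descending F-value
```

Keeping only the top-k indices of `top_features` gives the reduced feature subset the abstract describes.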
Citations: 0
A scalable and modular open-source stack for computing continuum digital twins
IF 6.2 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-07-01 Epub Date: 2026-02-02 DOI: 10.1016/j.future.2026.108411
Nikos Filinis, Dimitrios Spatharakis, Ioannis Dimolitsas, Eleni Fotopoulou, Constantinos Vassilakis, Anastasios Zafeiropoulos, Symeon Papavassiliou
The exponential rise of intelligent Internet of Things (IoT) devices and the development of Cyber-Physical Systems (CPS) pose new challenges and requirements for modern applications. These include the need for seamless interconnectivity and interoperable interaction between various physical and virtual elements. The enrichment and transformation of IoT technologies to support such interactions is underway, driven by the need for convergence with edge and cloud computing technologies and for managing IoT applications across resources in the computing continuum. This broader sense of connectivity is tightly connected with the development of Digital Twins (DTs), which build on virtual counterparts of IoT devices and CPS. Novel architectural approaches are required to manage complex DT topologies, which collectively form a Digital Twin Network (DTN) acting as a middleware that provides advanced communication, efficient orchestration, and autonomous decision-making capabilities. This manuscript presents an architectural approach and a relevant open-source software stack implementation, called VOStack, for developing DTs. VOStack is open and modular by design, and tackles IoT interoperability and convergence challenges with edge and cloud computing technologies. VOStack is thoroughly evaluated under various deployment schemas and virtualization techniques, and through the provision of an IoT application in a smart-city scenario, demonstrating efficient utilization of resources and high efficiency of its Machine Learning (ML)-driven orchestration mechanisms.
Citations: 0
Long integer NTT execution on UPMEM-PIM for 128-bit secure fully homomorphic encryption
IF 6.2 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-07-01 Epub Date: 2026-01-20 DOI: 10.1016/j.future.2026.108386
Tathagata Barik , Priyam Mehta , Zaira Pindado , Harshita Gupta , Mayank Kabra , Mohammad Sadrosadati , Onur Mutlu , Antonio J. Peña
Fully Homomorphic Encryption (FHE) enables secure computations on encrypted data, hence becoming an appealing technology for privacy-preserving data processing. A core kernel in many cryptographic and FHE workloads is the Number Theoretic Transform (NTT). While NTT involves frequent non-contiguous data accesses, limiting overall performance, processing-in-memory (PIM) has the potential to address this limitation. PIM, performing computations close to the data, reduces the need for extensive data transfers between memory and compute units. However, the performance of current PIM solutions is limited by inherent factors related to the integration of processing capabilities within memory modules.
In this article, we analyze the performance trade-offs of NTT kernel designs along with optimized modular multiplication algorithms on PIM systems based on UPMEM hardware. Our results include significant performance improvements of up to 4.3× over baseline approaches on UPMEM-PIM, while preserving, for the first time in the literature, 128-bit security at high precision.
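For readers unfamiliar with the kernel being accelerated, here is the textbook number-theoretic transform over a small prime field. This is the mathematical definition only, not the long-integer, UPMEM-optimized implementation the article studies; the modulus p = 17 and root 9 are toy choices:

```python
def ntt(a, root, p):
    """Forward NTT: a DFT over Z_p using a primitive len(a)-th root of unity."""
    n = len(a)
    return [sum(a[j] * pow(root, i * j, p) for j in range(n)) % p
            for i in range(n)]

def intt(A, root, p):
    """Inverse NTT: forward transform with the inverse root, scaled by n^-1."""
    n = len(A)
    inv_root = pow(root, p - 2, p)  # Fermat inverse, valid since p is prime
    inv_n = pow(n, p - 2, p)
    return [(x * inv_n) % p for x in ntt(A, inv_root, p)]

# p = 17, n = 8: 9 = 3^2 is a primitive 8th root of unity mod 17 (9^4 = -1).
p, root = 17, 9
a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [3, 4, 0, 0, 0, 0, 0, 0]
# Pointwise multiplication in the NTT domain equals cyclic convolution in time,
# which is why FHE schemes use the NTT for fast polynomial multiplication.
c = intt([(x * y) % p for x, y in zip(ntt(a, root, p), ntt(b, root, p))], root, p)
```

Production kernels replace this O(n²) loop with an O(n log n) butterfly network and replace `%` with Montgomery or Barrett reduction, which is where the modular-multiplication trade-offs discussed above arise.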
Citations: 0
AWTO: A latency-optimized task offloading scheme for LLM-driven agentic workflows on heterogeneous edge
IF 6.2 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-07-01 Epub Date: 2026-02-02 DOI: 10.1016/j.future.2026.108415
Peng Yu , Bo Liu , Shaomin Tang , Dongdong Li , Weiwei Lin
Agentic workflows, driven by Large Language Models (LLMs), present new opportunities for realizing advanced edge intelligence in data-sensitive domains such as finance and healthcare. However, deploying these workflows in private, resource-constrained edge environments poses unique challenges. Unlike public cloud services, these scenarios require computations to be performed locally on dedicated edge clusters to meet strict data compliance and privacy regulations. This restriction, coupled with the limited memory capacity of edge devices relative to the massive size of LLMs, makes dynamic memory management and model loading critical factors. Furthermore, the autoregressive nature of LLMs introduces high dynamic uncertainty in inference latency and memory footprint, fundamentally contradicting the static information assumptions of traditional scheduling methods. To address these challenges, we propose AWTO, a Deep Reinforcement Learning (DRL) offloading scheme designed to minimize the makespan of agentic workflows in isolated edge environments. The core of AWTO is a task-by-task dynamic decision-making mechanism that explicitly handles on-demand model loading and memory contention. We formulate this problem as a Markov Decision Process (MDP) and employ a Proximal Policy Optimization (PPO)-based algorithm. A novel three-module LSTM encoder is designed to capture task dependencies, device heterogeneity, and real-time memory states. Experimental results in heterogeneous environments demonstrate that AWTO reduces the average makespan by 16.99% to 36.36% compared to heuristic baselines. Furthermore, it achieves a 14.00% gain over DRL methods, validating its adaptability to dynamic memory constraints and cache-aware scheduling.
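The scheduling problem the MDP captures (on-demand model loading, cache reuse, device heterogeneity) can be made concrete with a toy simulator. Below is a greedy earliest-finish baseline, not the paper's PPO policy; the 5-second load time, field names, and LRU eviction rule are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class EdgeDevice:
    speed: float          # work units processed per second
    mem_capacity: int     # how many model slots fit in device memory
    loaded: list = field(default_factory=list)  # LRU-ordered cached model ids
    busy_until: float = 0.0

LOAD_TIME = 5.0  # assumed seconds to load a model that is not cached

def offload(task, devices, now=0.0):
    """Greedy task-by-task placement: pick the device with the earliest finish
    time, charging a model-load penalty when the LLM is not already cached."""
    model, work = task
    best, best_finish = None, float("inf")
    for d in devices:
        start = max(now, d.busy_until)
        load = 0.0 if model in d.loaded else LOAD_TIME
        finish = start + load + work / d.speed
        if finish < best_finish:
            best, best_finish = d, finish
    # Commit: refresh the LRU cache (evict oldest if full) and the busy time.
    if model in best.loaded:
        best.loaded.remove(model)
    elif len(best.loaded) >= best.mem_capacity:
        best.loaded.pop(0)
    best.loaded.append(model)
    best.busy_until = best_finish
    return best, best_finish
```

A DRL policy such as AWTO's replaces the greedy argmin with a learned action distribution over devices, which lets it trade an immediate load penalty for better cache reuse later in the workflow.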
Citations: 0
Maximizing the benefits of in-network aggregation with joint job placement and routing control
IF 6.2 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-07-01 Epub Date: 2026-02-03 DOI: 10.1016/j.future.2026.108416
Shouxi Luo , Xiaoyu Yu , Huanlai Xing , Ke Li , Bo Peng
As recent studies have shown, in-network aggregation (INA) excels at relieving the communication bottleneck that gradient uploading faces in parameter server (PS) based data-parallel training. In such systems, INA-aware routing optimization is crucial for unlocking the power of deployed aggregators to improve throughput, since correlated flows can be aggregated into a single flow only if they pass through the same aggregator. However, optimizing routing alone is not enough, since the placement of the PS and training workers (i.e., jobs) determines where traffic enters and leaves the network, limiting the space that routing optimizations can explore.
In this article, we examine how to maximize the benefits of deployed aggregators by jointly optimizing job placement and routing paths. We find that this joint scheduling problem can be converted into INA-aware routing optimization with resource constraints on an augmented topology, so that the abundant existing INA-aware routing-optimization designs can be brought to bear. Building on this insight, we propose ARO+, a case study that achieves theory-guided optimal (or near-optimal) joint optimization for Clos-based clusters by extending ARO, the state-of-the-art INA-aware routing optimization proposal. To reduce the solving time, ARO+ employs a novel model-simplification scheme that exploits the problem structure. Performance studies show that the joint optimization of ARO+ markedly increases the throughput of gradient uploading, and our model simplification can accelerate its solving by up to an order of magnitude.
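The benefit that INA-aware routing tries to capture (correlated gradient flows merging into one once they meet at an aggregator) can be counted on a toy topology. A small sketch, where the path representation and node names are illustrative and not ARO+'s actual formulation:

```python
from collections import defaultdict

def link_loads(paths, aggregators):
    """Flows per link when every flow passing through an aggregator node is
    merged into one shared flow on all remaining hops toward the PS."""
    flows_on = defaultdict(set)
    for fid, path in enumerate(paths):
        flow = ("worker", fid)
        for u, v in zip(path, path[1:]):
            if u in aggregators:
                flow = ("agg", u)  # everything past this aggregator shares one flow
            flows_on[(u, v)].add(flow)
    return {link: len(fs) for link, fs in flows_on.items()}

# Three workers upload gradients to the PS through a tiny Clos fragment.
paths = [
    ["w1", "tor1", "spine", "ps"],
    ["w2", "tor1", "spine", "ps"],
    ["w3", "tor2", "spine", "ps"],
]
with_ina = link_loads(paths, aggregators={"tor1"})
without = link_loads(paths, aggregators=set())
```

With an aggregator at `tor1`, the two flows sharing it merge, so the uplink above `tor1` and part of the spine-to-PS load shrink; the joint placement question is then which worker/PS positions let the most flows share such a node.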
Citations: 0
Towards real-time sensor-based human activity recognition: a re-parameterized multidimensional feature communication fusion framework
IF 6.2 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-07-01 Epub Date: 2026-01-01 DOI: 10.1016/j.future.2025.108364
Hao Zheng, Hongji Xu, Yupeng Duan, Yonghui Yu, Fei Gao, Yu Qiao
Sensor-based human activity recognition (HAR) has become a research hotspot due to its far-reaching applications, including health monitoring, smart homes, and sports tracking. Deep learning, particularly convolutional neural networks (CNNs) with automatic feature extraction capabilities, has demonstrated outstanding recognition accuracy. However, some current deep learning-based HAR methods require substantial parameters and computation resources to pursue high accuracy, resulting in expensive resource and time costs. To handle this challenge, we propose a lightweight one-dimensional re-parameterization multidimensional feature communication fusion (ORepMFCF) framework for HAR, achieving an impressive trade-off between recognition accuracy and speed. The one-dimensional re-parameterization VGG (ORV) block is developed to decouple the trained multi-branch structure into a single path, improving inference speed without compromising accuracy. The time-dependent information channel shuffle (TDICS) module is proposed to strengthen feature extraction and communication ability. The channel attention feature fusion (CAFF) module is presented to improve multidimensional feature fusion. ORepMFCF achieves accuracies of 98.60% on WISDM, 89.38% on UniMiB-SHAR, 86.43% on OPPORTUNITY, and 96.26% on MobiACT. It requires fewer parameters and incurs less computational overhead while achieving better recognition performance, significantly surpassing recent state-of-the-art HAR networks. The real-world inference time of ORepMFCF is evaluated on a Raspberry Pi and a smartphone, demonstrating its effectiveness and practicality.
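The structural re-parameterization idea behind the ORV block (train a multi-branch convolution, then algebraically fold the branches into a single kernel for inference) can be shown in one dimension with NumPy. The two 3-tap branches plus an identity branch below are a simplified stand-in for the actual ORV design, which also folds batch-norm statistics into the fused kernel:

```python
import numpy as np

def conv1d(x, w):
    """'Same' 1-D correlation with zero padding, for a length-3 kernel."""
    xp = np.pad(x, 1)
    return np.array([xp[i:i + 3] @ w for i in range(len(x))])

rng = np.random.default_rng(0)
x = rng.normal(size=16)
w_a = rng.normal(size=3)               # first trained 3-tap branch
w_b = rng.normal(size=3)               # second trained 3-tap branch
identity = np.array([0.0, 1.0, 0.0])   # identity branch as a 3-tap kernel

multi_branch = conv1d(x, w_a) + conv1d(x, w_b) + x  # training-time structure
fused = conv1d(x, w_a + w_b + identity)             # single-path inference
```

Because convolution is linear in its kernel, summing the branch kernels reproduces the multi-branch output exactly, so the single-path network is faster at inference with no accuracy change.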
"Towards real-time sensor-based human activity recognition: a re-parameterized multidimensional feature communication fusion framework" — Hao Zheng, Hongji Xu, Yupeng Duan, Yonghui Yu, Fei Gao, Yu Qiao. DOI: 10.1016/j.future.2025.108364 (Future Generation Computer Systems, vol. 180, Article 108364, Pub Date: 2026-07-01).
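The ORV block's trick of collapsing a trained multi-branch structure into a single inference path relies on the linearity of convolution. The following is a minimal sketch of that re-parameterization idea (not the paper's implementation; kernel values are invented): three training-time branches — a 3-tap conv, a 1-tap conv, and an identity shortcut — merge into one kernel that produces identical outputs.

```python
import random

def conv1d(x, k):
    # 'same'-style 1-D convolution with zero padding (odd kernel width)
    pad = len(k) // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(xp[i + j] * k[j] for j in range(len(k))) for i in range(len(x))]

# Training-time branches of a RepVGG-style block
k3  = [0.2, 0.5, -0.1]   # 3-tap conv branch
k1  = [0.0, 0.7,  0.0]   # 1-tap conv branch, zero-padded to width 3
kid = [0.0, 1.0,  0.0]   # identity shortcut expressed as a width-3 kernel

rng = random.Random(0)
x = [rng.uniform(-1, 1) for _ in range(16)]
multi_branch = [a + b + c for a, b, c in zip(conv1d(x, k3), conv1d(x, k1), x)]

# Re-parameterization: convolution is linear in the kernel, so the three
# branches collapse into a single kernel -- one inference path, same output
k_merged = [a + b + c for a, b, c in zip(k3, k1, kid)]
single_path = conv1d(x, k_merged)

assert all(abs(m - s) < 1e-9 for m, s in zip(multi_branch, single_path))
```

At inference time only `k_merged` is stored and applied, which is why the decoupled single path is faster without any accuracy loss.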
Citations: 0
Dynamic task transmission control and improved greedy strategy for vehicular edge computing
IF 6.2 2区 计算机科学 Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-07-01 Epub Date: 2026-01-07 DOI: 10.1016/j.future.2026.108369
Sheng Cai , Jianmao Xiao , Yuanlong Cao , Qinghang Gao , Zhiyong Feng , Shuiguang Deng
With the rapid development of on-board applications, edge computing is now widely used in the Internet of Vehicles, enabling vehicles with limited resources to offload tasks to the edge for execution via computation offloading. However, current research methods are often hard to adapt to dynamic scenarios because of model training costs and vehicle mobility, and they also lack consideration of load balancing in high-load situations. To improve users' quality of experience and balance the load of edge servers simultaneously, this paper proposes an improved greedy-strategy method for computation offloading. First, to mitigate potential communication overload during peak hours, this study analyzes the relationship between transmission scheduling and execution queues, and investigates a dynamic task transmission control method. Second, explicit modeling of round-trip communication reliability in mobile environments is provided to extend the vehicle interconnection model. Subsequently, by analyzing the structure of the optimal solution for total latency optimization, offloaded tasks are classified by priority. A multi-perspective analysis of task offloading is then conducted, and a greedy strategy is adopted to ensure both the quality of user experience and load balancing at the edge. Finally, comparative experiments on real-world datasets validate the efficiency of the proposed method and model under high-mobility and high-load scenarios.
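A minimal sketch of the kind of latency- and load-aware greedy assignment the abstract describes (this is not the paper's algorithm; the task priorities, cost model, and parameter values below are invented for illustration). Each task is placed on the server minimizing an estimated completion time of transmission delay plus queued work divided by capacity, with heavier tasks assigned first:

```python
def greedy_offload(tasks, servers):
    """tasks: list of (cpu_cycles, per-server transmission delays);
    servers: list of CPU capacities. Returns (assignment, final loads)."""
    load = [0.0] * len(servers)
    plan = []
    # Priority: heavier tasks first, so large jobs claim fast servers early
    order = sorted(range(len(tasks)), key=lambda i: -tasks[i][0])
    for i in order:
        cycles, tx = tasks[i]
        # Greedy choice: minimize transmission delay + estimated queueing/
        # execution time; dividing queued cycles by capacity balances load
        best = min(range(len(servers)),
                   key=lambda s: tx[s] + (load[s] + cycles) / servers[s])
        load[best] += cycles
        plan.append((i, best))
    return plan, load

tasks = [(8.0, [0.1, 0.3]), (4.0, [0.2, 0.1]), (6.0, [0.3, 0.1])]
servers = [10.0, 5.0]   # hypothetical edge-server capacities
plan, load = greedy_offload(tasks, servers)
```

Because queued work enters each server's cost estimate, a heavily loaded fast server eventually loses tasks to a lightly loaded slower one, which is the load-balancing effect the paper targets.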
Citations: 0
A coclustering and computational intelligence-based approach for internet-of-things services composition
IF 6.2 2区 计算机科学 Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-07-01 Epub Date: 2026-01-19 DOI: 10.1016/j.future.2026.108381
Nawel Atmani , Mohamed Essaid Khanouche , Ahror Belaid , Abdelghani Chibani
The Internet of Things (IoT) paradigm aims to interconnect heterogeneous devices, called smart objects, and seamlessly offer a multitude of services tailored to user requirements. With the extremely rapid growth in the number of connected objects, the IoT services composition process becomes an NP-hard challenge due to the sharp increase in the number of services offering similar functionalities but differing in their Quality of Service (QoS) parameter values. Various approaches have been proposed in the literature to obtain compositions with suboptimal QoS in a reasonable computation time. However, when the number of services and QoS parameters increases, the performance of these approaches is limited in terms of composition time and/or the QoS utility of the composition. To address these limitations, a coclustering-based approach for QoS-constrained services composition (CoQSC) is proposed to reduce the composition space and improve both the composition time and the composition utility. Unlike existing services composition algorithms, where the composition space is reduced only in terms of the number of candidate services, the CoQSC approach exploits a coclustering method to reduce both the number of candidate services and the number of QoS parameters considered in the composition process. This reduction allows the composition process to find suboptimal compositions in a reduced computation time, separately using eight of the most representative and recent computational intelligence (CI) techniques in the literature. The formulation of the CoQSC approach is complemented by a complexity analysis. Simulation scenarios show that the CoQSC approach significantly improves the QoS utility of composition and substantially decreases the composition time compared to recent and representative state-of-the-art composition approaches, making it suitable for large-scale IoT service environments.
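The core space-reduction idea — prune candidate services whose QoS profiles are near-duplicates before composing — can be sketched as follows. This is a simplification, not the paper's CoQSC algorithm: coarse quantization of QoS vectors stands in for coclustering, the weighted-sum utility and all service/task names are invented, and only one service per abstract task is selected greedily.

```python
def utility(qos, weights):
    # QoS values assumed normalized to [0, 1], higher is better
    return sum(w * q for w, q in zip(weights, qos))

def reduce_candidates(services, weights, step=0.25):
    """Group services whose QoS vectors fall in the same quantization cell
    and keep only the best-utility representative of each group."""
    best = {}
    for name, qos in services:
        key = tuple(round(q / step) for q in qos)   # coarse QoS "cluster"
        if key not in best or utility(qos, weights) > utility(best[key][1], weights):
            best[key] = (name, qos)
    return list(best.values())

def compose(tasks, weights):
    # tasks: {abstract_task: [(service_name, qos_vector), ...]}
    return {t: max(reduce_candidates(cands, weights),
                   key=lambda s: utility(s[1], weights))[0]
            for t, cands in tasks.items()}

tasks = {
    "sense": [("s1", (0.9, 0.2)), ("s2", (0.8, 0.3)), ("s3", (0.3, 0.9))],
    "plan":  [("p1", (0.5, 0.5)), ("p2", (0.6, 0.4))],
}
weights = (0.7, 0.3)
selection = compose(tasks, weights)
```

Here `p1` and `p2` land in the same quantization cell, so only the better of the two survives the reduction — the CI search in the paper then explores this smaller space instead of all raw candidates.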
Citations: 0
Quantum-resistant blockchain architecture for secure vehicular networks: A ML-KEM-enabled approach with PoA and PoP consensus
IF 6.2 2区 计算机科学 Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-07-01 Epub Date: 2026-01-22 DOI: 10.1016/j.future.2026.108391
Muhammad Asim , Wu Junsheng , Li Weigang , Lin Zhijun , Zhang Peng , He Hao , Wei Dong , Ghulam Mohi-ud-Din
The increasing interconnectivity within modern transportation ecosystems, a cornerstone of Intelligent Transportation Systems (ITS), creates critical vulnerabilities, demanding stronger security measures to prevent unauthorized access to vehicles and private data. Existing blockchain implementations for Vehicular Ad Hoc Networks (VANETs) are fundamentally flawed, exhibiting inefficiency with traditional consensus mechanisms, vulnerability to quantum attacks, or often both. To overcome these critical limitations, this study introduces a novel Quantum-Resistant Blockchain Architecture. The core objectives are to achieve highly efficient vehicular data storage, ensure robust confidentiality through post-quantum cryptography, and automate secure transactions. The proposed methodology employs a dual-blockchain structure: a Registration Blockchain (RBC) using Proof-of-Authority (PoA) for secure identity management, and a Message Blockchain (MBC) using Proof-of-Position (PoP) for low-latency message dissemination. A key innovation is the integration of smart contracts with the NIST-approved Module Lattice-Based Key Encapsulation Mechanism (ML-KEM) to automate and secure all processes. The framework is rigorously evaluated using a realistic 5G-VANET Multi-access Edge Computing (MEC) dataset, which includes key parameters like vehicle ID, speed, and location. The results are compelling, demonstrating an average block processing time of 0.0326 s and a transactional throughput of 30.64 TPS, significantly outperforming RSA and AES benchmarks. This research’s primary contribution is a comprehensive framework that substantially improves data security and scalability while future-proofing VANETs against the imminent and evolving threat of quantum computing.
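The Registration Blockchain's PoA gatekeeping can be illustrated with a minimal, stdlib-only sketch. This is not the paper's architecture: it uses an HMAC as a stand-in for validator signatures, classical SHA-256 rather than ML-KEM (post-quantum primitives need a dedicated library), and all key names and payload fields are invented. The point it shows is the two checks PoA imposes — only authorized validators may seal a block, and every block is hash-linked to its predecessor.

```python
import hashlib
import hmac
import json

# Hypothetical authority key store: only these validators may seal blocks
AUTH_KEYS = {"rsu-1": b"secret-key-rsu-1"}

def seal(prev_hash, payload, validator):
    # Canonical serialization so the hash and MAC are reproducible
    body = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    tag = hmac.new(AUTH_KEYS[validator], body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "validator": validator, "tag": tag,
            "hash": hashlib.sha256((body + tag).encode()).hexdigest()}

def append(chain, payload, validator):
    if validator not in AUTH_KEYS:
        raise PermissionError("not an authorized PoA validator")
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = seal(prev, payload, validator)
    # Re-derive and verify the validator's MAC before accepting the block
    expect = hmac.new(AUTH_KEYS[validator], block["body"].encode(),
                      hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expect, block["tag"]):
        raise ValueError("invalid validator tag")
    chain.append(block)
    return block

rbc = []  # registration blockchain (RBC analogue)
append(rbc, {"vid": "car-42", "event": "register"}, "rsu-1")
append(rbc, {"vid": "car-43", "event": "register"}, "rsu-1")
```

A production design would replace the shared-secret MAC with per-validator post-quantum signatures and wrap session keys with ML-KEM encapsulation, but the append-time verification structure stays the same.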
Citations: 0
Journal
Future Generation Computer Systems-The International Journal of Escience