
Ad Hoc Networks: Latest Publications

qIoV: A quantum-driven approach for environmental monitoring and rapid response systems using internet of vehicles
IF 4.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-02-02 DOI: 10.1016/j.adhoc.2026.104158
Ankur Nahar, Koustav Kumar Mondal, Debasis Das, Rajkumar Buyya
This paper addresses the critical demand for advanced rapid response mechanisms in managing a wide array of environmental hazards, including urban pipeline leaks, industrial gas discharges, methane emissions from landfills, chlorine leaks from water treatment plants, and residential carbon monoxide releases. Conventional sensing and alert systems often struggle with the timely analysis of high-dimensional sensor data and suffer delays as data volume increases. We propose a novel framework, qIoV, which integrates quantum computing with the Internet of Vehicles (IoVs) to leverage the computational efficiency, parallelism, and entanglement properties inherent in quantum mechanics. The qIoV framework utilizes vehicle-mounted environmental sensors for highly accurate air quality assessments, where quantum principles enhance both sensitivity and precision. A core innovation is the Quantum Mesh Network Fabric (QMF), which dynamically adapts the quantum network topology to vehicular movement, maintaining quantum state integrity amid environmental and vehicular disruptions, thereby ensuring robust data transmission. Furthermore, we implement a variational quantum classifier (VQC) with advanced entanglement techniques, significantly reducing latency in hazard alerts and facilitating rapid communication with emergency response teams and the public. Our experimental evaluations using the IBM OpenQASM 3 platform with a 127-qubit system achieved over 90% precision, recall, and F1-score in pair plot analysis, alongside an 83% increase in toxic gas detection speed compared to conventional methods. Theoretical analysis further substantiates the efficiency of quantum rotation, teleportation protocols, and the fidelity of quantum entanglement, highlighting the potential of quantum computing in environmental hazard management.
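The variational quantum classifier mentioned in the abstract is, at its core, a parameterized circuit whose rotation angles are tuned by a classical optimizer. The sketch below is not the paper's 127-qubit implementation; it simulates a toy two-qubit version in plain NumPy, with angle encoding of two sensor features, one entangling CNOT, and a trainable rotation layer, just to make the classification mechanism concrete. All names and values are illustrative.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): a 2-qubit variational
# classifier with RY feature encoding, a CNOT entangling layer, and a
# trainable RY layer, simulated directly as state-vector algebra.

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def vqc_probability(features, weights):
    """Probability of measuring qubit 0 in state |1>, used as the class score."""
    state = np.zeros(4)
    state[0] = 1.0                                              # start in |00>
    state = np.kron(ry(features[0]), ry(features[1])) @ state   # angle encoding
    state = CNOT @ state                                        # entanglement
    state = np.kron(ry(weights[0]), ry(weights[1])) @ state     # trainable layer
    probs = np.abs(state) ** 2
    return probs[2] + probs[3]                                  # qubit 0 == |1>

# Toy usage: score a normalized sensor reading (e.g., gas concentration, wind).
print(vqc_probability(features=[0.8, 1.9], weights=[0.3, -0.7]))
```

Training would simply adjust `weights` to push this probability toward the correct hazard label; the paper's circuit does the same with far more qubits and entangling layers.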
Citations: 0
HetTraffic: Multi-link traffic prediction and allocation for 6G heterogeneous networks
IF 4.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-31 DOI: 10.1016/j.adhoc.2026.104153
Yali Lv, Jian Huang, Jingpu Duan, Yaping Sun, Xiong Li
The rapid evolution of wireless communication necessitates advanced solutions beyond current 5G capabilities to realize the ambitious vision of 6G. The forthcoming 6G era will witness an unprecedented scale of device connectivity, challenging conventional resource allocation paradigms with its inherent heterogeneity and dynamic nature. A key issue involves intelligently and dynamically assigning diverse user traffic to highly heterogeneous links, while still satisfying Quality of Service (QoS) requirements. Moreover, resource management strategies that rely solely on reactive real-time measurements often lead to suboptimal performance. To overcome these limitations, this paper proposes HetTraffic, a novel comprehensive framework for joint traffic prediction and allocation in 6G heterogeneous networks. HetTraffic first introduces a novel link-level traffic prediction method leveraging a hybrid Graph Attention Network (GAT) and Long Short-Term Memory (LSTM) architecture. This approach effectively captures both the complex spatial dependencies from user mobility and the temporal fluctuations within traffic data. Building upon these predictions, we develop a multi-agent reinforcement learning-based allocation strategy utilizing the Multi-Agent Proximal Policy Optimization (MAPPO) algorithm. This is designed for efficient, decentralized resource optimization across heterogeneous links, proactively accounting for real-time conditions, QoS demands, and predicted traffic. Comprehensive experiments conducted on a dedicated 6G heterogeneous network testbed, utilizing a curated link-level traffic dataset, demonstrate the significant advantages and superior performance of our proposed traffic prediction and allocation methods compared to existing state-of-the-art approaches.
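As a rough illustration of the hybrid spatial-temporal predictor (not the authors' exact architecture), the following PyTorch sketch applies a single-head additive graph-attention step over a dense adjacency matrix at each time step, then runs an LSTM along the time axis to predict the next-step load per node. Layer sizes and the output head are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of the GAT -> LSTM idea (hypothetical layer sizes): per time
# step, nodes attend to their neighbors; the resulting node embeddings are
# then fed to an LSTM over time to forecast per-link traffic.

class TinyGATLSTM(nn.Module):
    def __init__(self, in_dim=4, hid_dim=32):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)
        self.attn = nn.Linear(2 * hid_dim, 1)      # additive attention score
        self.lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, 1)          # predicted traffic per node

    def forward(self, x, adj):
        # x: (T, N, in_dim) node features over time, adj: (N, N) 0/1 adjacency
        T, N, _ = x.shape
        h = self.proj(x)                                              # (T, N, H)
        hi = h.unsqueeze(2).expand(T, N, N, -1)
        hj = h.unsqueeze(1).expand(T, N, N, -1)
        scores = self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1)   # (T, N, N)
        scores = scores.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(scores, dim=-1)
        h = torch.bmm(alpha, h)                       # neighbor aggregation
        out, _ = self.lstm(h.transpose(0, 1).contiguous())  # (N, T, H)
        return self.head(out[:, -1])                  # (N, 1) next-step load

model = TinyGATLSTM()
pred = model(torch.randn(12, 6, 4), torch.ones(6, 6))   # 12 steps, 6 nodes
print(pred.shape)                                       # torch.Size([6, 1])
```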
Citations: 0
Performance of MPTCP over emulated LEO satellite networks with ECMP routing
IF 4.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-30 DOI: 10.1016/j.adhoc.2026.104161
Juan Pedro Muñoz-Gea, Josemaria Malgosa-Sanahuja, Pilar Manzanares-Lopez
Low Earth Orbit (LEO) satellite networks are gaining prominence in 6G and large-scale IoT infrastructures, where traffic patterns range from low-rate telemetry to bandwidth-intensive applications such as high-resolution sensing, edge-assisted computation, and real-time vehicular services. Next-generation LEO constellations are designed to support high-capacity connectivity and heterogeneous IoT/6G services, some of which demand sustained high throughput and strict reliability guarantees. Multipath TCP (MPTCP) offers a transport-layer solution by enabling simultaneous use of multiple paths through concurrent subflows. This paper evaluates MPTCP version 1 in a LEO environment using bLEO, a new emulation tool for LEO networks specifically developed by the authors for this work. bLEO is an eBPF-based emulator capable of handling heavily loaded, large-scale constellations while providing efficient real-time control of link delay and state dynamics. We demonstrate MPTCP integration in Linux, including default scheduler behavior and subflow configuration. Additionally, we leverage Equal-Cost Multi-Path (ECMP) routing via OSPF using FRRouting to distribute MPTCP subflows across multiple network paths. The experimental results reveal a fundamental trade-off between robustness and scalability in the Linux MPTCP scheduler: although it maintains session continuity under handovers and failures, it underutilizes available paths unless load conditions force progressive subflow activation, reflecting a conservative design that prioritizes reordering avoidance and performance stability over full multipath utilization. The proposed platform serves as a flexible foundation for evaluating transport-layer protocols in dynamic satellite scenarios.
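For readers reproducing the transport-layer setup, the snippet below shows only the minimal client-side step of opening an MPTCP socket on Linux (kernel 5.6 or later). It assumes MPTCP is enabled via sysctl and that additional subflow endpoints and ECMP routing are configured outside the program, e.g. with ip mptcp and FRRouting as the abstract describes; the address and port are placeholders.

```python
import socket

# Minimal sketch of opening an MPTCP connection on Linux (kernel >= 5.6).
# Assumes the host has MPTCP enabled (net.mptcp.enabled=1) and that extra
# subflow endpoints were configured out of band with `ip mptcp endpoint add`.
# The server address below is an example (TEST-NET-1), not a real endpoint.

IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)  # exposed by Python 3.10+

with socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP) as s:
    s.connect(("192.0.2.10", 5001))
    s.sendall(b"probe payload")
    print(s.recv(1024))
```

Even with multiple endpoints configured, the in-kernel path manager and scheduler decide when extra subflows are actually opened and used, which is exactly the conservative behavior the evaluation in this paper examines.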
Citations: 0
Cross-layer joint optimization for semantic communication-driven MEC systems via deep reinforcement learning
IF 4.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-29 DOI: 10.1016/j.adhoc.2026.104159
Meiyao Wen, Linyu Huang, Qian Ning
The integration of semantic communication (SemCom) with mobile edge computing (MEC) has opened new avenues to improve task execution efficiency in intelligent networks. This paper proposes a cross-layer joint optimization framework for SemCom-driven MEC systems, aiming to minimize the weighted sum of task completion time and user energy consumption. Specifically, the framework jointly optimizes the semantic extraction factor at the application layer, task offloading decisions at the control layer, and communication and computational resource allocation at the network and physical layers. To address the non-convex and mixed-integer nature of the problem, a Deep Deterministic Policy Gradient (DDPG)-based algorithm was employed to efficiently search for solutions. The simulation results validate the effectiveness of the proposed approach and demonstrate that the integration of SemCom into MEC significantly improves the system performance. The findings offer practical insights for system engineers to design efficient MEC systems, reducing transmission overhead and energy consumption, especially in latency-sensitive applications such as autonomous driving and industrial Internet of Things.
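To make the optimization objective concrete, here is a toy cost model (our own simplification, not the paper's exact formulation): a single task is either executed locally or offloaded after semantic compression, and the score is the weighted sum of completion time and user-side energy. The constants, including the switched-capacitance coefficient, are assumed values.

```python
# Illustrative cost model only: the DDPG agent's reward would be the negative
# of a weighted time/energy sum like this one for each offloading decision.

def task_cost(bits, cycles_per_bit, f_local, f_edge, rate, p_tx,
              semantic_factor=0.5, offload=True, w_time=0.6, w_energy=0.4):
    eff_bits = bits * semantic_factor              # semantic extraction shrinks payload
    cycles = eff_bits * cycles_per_bit
    if offload:
        t = eff_bits / rate + cycles / f_edge      # upload + edge compute time
        e = p_tx * (eff_bits / rate)               # user energy: radio only
    else:
        kappa = 1e-27                              # switched capacitance (assumed)
        t = cycles / f_local
        e = kappa * cycles * f_local ** 2          # classic local-CPU energy model
    return w_time * t + w_energy * e

# Example: 2 Mbit task, 500 cycles/bit, 1 GHz local CPU vs 10 GHz edge, 20 Mbit/s link
print(task_cost(2e6, 500, 1e9, 1e10, 2e7, p_tx=0.2, offload=True))
print(task_cost(2e6, 500, 1e9, 1e10, 2e7, p_tx=0.2, offload=False))
```

The DDPG policy would output the semantic factor, the offloading decision, and the resource shares jointly, rather than evaluating the two branches by hand as done here.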
Citations: 0
Offloading and computational resources allocation of tasks with dependency in MEC environment based on deep reinforcement learning
IF 4.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-27 DOI: 10.1016/j.adhoc.2026.104136
Yifan Liu, Erdong Xue, Wenlei Chai, Yi Liu, Fan Feng
Mobile edge computing (MEC) has emerged as a key technology for handling computation-intensive tasks generated by multiple terminal devices. However, due to task dependencies within applications, offloading decisions must consider not only the current task but also the remaining tasks, significantly increasing the complexity of task management. To address this challenge, we propose a priority-sensitive type (PST) scheme for joint tasks offloading and computational resources allocation, with the aim of minimizing the overall execution urgency of applications. A mixed integer optimization model is formulated, where task dependencies are represented by a directed acyclic graph (DAG), and a novel definition of task execution urgency is introduced. To solve the tasks offloading and access point (AP) selection problem, we adopt a multi-agent deep reinforcement learning (MADRL) framework, leveraging the proximal policy optimization (PPO) algorithm based on the actor-critic architecture. In addition, a greedy-based algorithm is designed to allocate computational resources of edge servers by considering task dependencies and refining offloading decisions when necessary. The simulation results demonstrate that the proposed PPO-PST approach significantly outperforms existing methods in terms of long-term execution efficiency and resource utilization across various application scenarios, highlighting its practicality and effectiveness.
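The dependency constraint can be illustrated with a short sketch: given the application's DAG and per-task execution times, each task's earliest finish time is propagated in topological order, since a task cannot start before all of its predecessors complete. The graph and timings below are made up for illustration; the paper's urgency metric and offloading decisions build on this kind of precedence information.

```python
from collections import defaultdict, deque

# Minimal sketch: Kahn-style topological propagation of earliest finish times
# over a task DAG (task names and execution times are illustrative).

def earliest_finish(edges, exec_time):
    succ, indeg = defaultdict(list), defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    finish = {t: exec_time[t] for t in exec_time if indeg[t] == 0}
    queue = deque(finish)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            finish[v] = max(finish.get(v, 0), finish[u] + exec_time[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return finish

edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]   # diamond-shaped DAG
exec_time = {"A": 2, "B": 3, "C": 1, "D": 2}
print(earliest_finish(edges, exec_time))   # {'A': 2, 'B': 5, 'C': 3, 'D': 7}
```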
Citations: 0
QPSO-driven deep hybrid network for robust CSI localization in disaster scenarios
IF 4.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-27 DOI: 10.1016/j.adhoc.2026.104157
Bidyarani Langpoklakpam, Lithungo K Murry
Rapid and accurate victim localization is essential for efficient emergency response and rescue operations, as natural disasters continue to occur with increasing frequency. Channel State Information (CSI) plays a pivotal role in positioning in complex environments because it can provide detailed, real-time propagation characteristics. This paper proposes a hybrid deep learning framework for CSI localization, where residual learning (ResNet) and Swin Transformer are jointly combined to capture both local and global CSI characteristics. The extracted features are mapped to spatial coordinates using a Backpropagation Neural Network regression model, in which key hyperparameters are optimized using a Quantum-inspired Particle Swarm Optimization (QPSO) strategy to enhance convergence stability and localization accuracy. The proposed system is evaluated through cross-environment testing across three real-world scenarios exhibiting diverse multipath propagation, obstructions, and interference patterns, emulating realistic post-disaster wireless conditions. Along with RMSE and MAE, Node Localization Efficiency (NLE) is employed to assess effective node coverage, and comparative results against existing methods highlight the superiority of the proposed approach. Across three complex real-world scenarios, the proposed method achieves RMSE values of 0.5292 m, 0.6084 m, and 0.7231 m, while achieving up to 37.5 % improvement in NLE over state-of-the-art approaches.
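The quantum-inspired PSO step can be sketched with the standard QPSO update, in which each particle is drawn around a per-particle attractor with a spread proportional to its distance from the mean-best position. The two tuned quantities and the stand-in objective below are our own placeholders for the BPNN hyperparameters and validation loss, not the paper's configuration.

```python
import numpy as np

# Minimal QPSO sketch tuning two hypothetical hyperparameters (learning rate,
# hidden width) by minimizing a stand-in loss; the real system would evaluate
# the BPNN regressor's localization error instead of `loss`.

rng = np.random.default_rng(0)

def loss(x):                                        # placeholder objective
    return (x[0] - 0.01) ** 2 * 1e4 + (x[1] - 64) ** 2 / 1e3

def qpso(n_particles=20, dims=2, iters=50, beta=0.75,
         low=(1e-4, 8), high=(0.1, 256)):
    low, high = np.array(low), np.array(high)
    x = rng.uniform(low, high, size=(n_particles, dims))
    pbest = x.copy()
    pbest_val = np.array([loss(p) for p in pbest])
    for _ in range(iters):
        gbest = pbest[pbest_val.argmin()]
        mbest = pbest.mean(axis=0)                              # mean best position
        phi = rng.uniform(size=(n_particles, dims))
        u = rng.uniform(1e-12, 1.0, size=(n_particles, dims))
        sign = rng.choice([-1.0, 1.0], size=(n_particles, dims))
        attractor = phi * pbest + (1 - phi) * gbest
        x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        x = np.clip(x, low, high)
        vals = np.array([loss(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
    return pbest[pbest_val.argmin()], pbest_val.min()

print(qpso())   # should move toward (0.01, 64)
```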
Citations: 0
A transformer-based method for radio-frequency fingerprinting of IoT devices
IF 4.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-24 DOI: 10.1016/j.adhoc.2026.104155
Carlos Herrera-Loera, Carolina Del-Valle-Soto, Leonardo J. Valdivia, Miguel Bazdresch, Carlos Mex-Perera
Radio Frequency Fingerprint Identification (RFFI) is a technique used to classify wireless devices by examining the unique signal distortions that arise from hardware imperfections inherent to each device. This method allows a wireless receiver to identify one or more transmitters accurately. Previous studies have presented RFFI results with wireless modulations such as LoRa, ZigBee, and Bluetooth Low Energy (BLE). This paper presents a method for RFFI in Internet of Things (IoT) devices using Gaussian Frequency Shift Keying (GFSK) modulation. The proposed RFFI method is based on an encoder module of a Vision Transformer (ViT); the results are compared with those obtained using a convolutional neural network (CNN). We present an analysis of the method’s accuracy in several propagation scenarios. We also analyze the effect of various parameters on the method’s accuracy, including training dataset size, training vector length, the portion of the transmitted packet to train on, and the number of epochs for training.
Experimental findings indicate that the transformer-based classifier performs slightly better than a CNN (up to 5% better accuracy) in non-line-of-sight (NLOS) propagation conditions. More significantly, the transformer-based classifier requires only half the training epochs compared to the CNN.
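A minimal sketch of the classifier idea, assuming raw I/Q bursts as input (this is not the authors' exact encoder): the signal is cut into fixed-length patches, linearly embedded, prefixed with a class token, and passed through a standard Transformer encoder whose class-token output feeds a device-classification head. Patch length, model width, and the number of devices are arbitrary; positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn

# Minimal ViT-style encoder for RF fingerprinting (dimensions are illustrative).

class IQTransformer(nn.Module):
    def __init__(self, patch_len=32, d_model=64, n_heads=4, n_layers=3, n_devices=8):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(2 * patch_len, d_model)       # 2 = (I, Q) channels
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_devices)

    def forward(self, iq):                                   # iq: (B, 2, L)
        B, _, L = iq.shape
        patches = iq.reshape(B, 2, L // self.patch_len, self.patch_len)
        patches = patches.permute(0, 2, 1, 3).reshape(B, -1, 2 * self.patch_len)
        tokens = self.embed(patches)
        tokens = torch.cat([self.cls.expand(B, -1, -1), tokens], dim=1)
        return self.head(self.encoder(tokens)[:, 0])         # class-token logits

logits = IQTransformer()(torch.randn(4, 2, 1024))            # 4 bursts of 1024 samples
print(logits.shape)                                          # torch.Size([4, 8])
```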
Citations: 0
Joint optimization of resource allocation and position deployment in UAV swarm emergency communication networks under interference
IF 4.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-21 DOI: 10.1016/j.adhoc.2026.104152
Fulai Liu, Yajie Gao, Ruiyan Du
Unmanned aerial vehicle (UAV) swarms serve as airborne mobile base stations in post-disaster emergency communications but are highly susceptible to both internal interference and external malicious attacks. In addition, the increasing number of UAVs leads to high-dimensional inputs, thereby slowing the convergence of anti-interference algorithms. To address these challenges, this paper proposes an attention-enhanced multi-agent proximal policy optimization (AEMAPPO) framework that jointly optimizes resource allocation and position deployment. A constructed RNN–Bahdanau module is integrated into MAPPO to replace traditional linear feature extraction with attention-based interference feature learning. This enables the decoder to focus on the most relevant components within the high-dimensional interference sequence, alleviating the dimensionality-explosion problem and improving convergence speed. Simulation results demonstrate that, compared with benchmark methods, AEMAPPO significantly enhances training efficiency, accelerates policy convergence, and achieves superior anti-interference performance.
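The attention-based feature extractor can be sketched as a GRU encoder pooled with Bahdanau-style additive attention: the final hidden state queries all time steps, and the weighted sum becomes the fixed-size interference feature fed to the policy network. Dimensions below are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

# Minimal sketch of an RNN + Bahdanau-attention pooling layer for the
# high-dimensional interference sequence (sizes are assumptions).

class BahdanauPool(nn.Module):
    def __init__(self, in_dim=16, hid=64):
        super().__init__()
        self.gru = nn.GRU(in_dim, hid, batch_first=True)
        self.w_h = nn.Linear(hid, hid, bias=False)
        self.w_s = nn.Linear(hid, hid, bias=False)
        self.v = nn.Linear(hid, 1, bias=False)

    def forward(self, seq):                       # seq: (B, T, in_dim)
        states, last = self.gru(seq)              # (B, T, H), (1, B, H)
        query = last.transpose(0, 1)              # (B, 1, H)
        energy = self.v(torch.tanh(self.w_h(states) + self.w_s(query)))  # (B, T, 1)
        alpha = torch.softmax(energy, dim=1)      # attention over time steps
        return (alpha * states).sum(dim=1)        # (B, H) context vector

ctx = BahdanauPool()(torch.randn(8, 50, 16))      # 8 agents, 50-step interference trace
print(ctx.shape)                                  # torch.Size([8, 64])
```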
Citations: 0
An integrated tracking technique for underwater navigation using acoustic and IMU measurements
IF 4.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-21 DOI: 10.1016/j.adhoc.2026.104151
Mainul Islam Chowdhury, Quoc Viet Phung, Iftekhar Ahmed, Daryoush Habibi
Accurate navigation is essential for underwater vehicles like AUVs, which often operate in deep or remote areas. However, complex ocean dynamics, cumulative inertial-measurement unit (IMU) drift, and diverse noise sources often result in erratic and unreliable position estimates. To overcome these challenges, we propose a method that combines underwater acoustic signals with onboard motion sensor data to improve underwater position tracking. The proposed system uses a long baseline (LBL) acoustic array of surface buoys to capture the Time-difference-of-Arrival (TDoA) of a multi-pulse beacon. We extract arrival times using a superimposed-envelope-spectrum (SES) detector, which exploits the beacon’s periodic structure to stay reliable even in heavy noise. These acoustic measurements are fused with six-degree-of-freedom IMU data using a particle filter (PF). The filter suppresses IMU drift and reveals how long dead-reckoning remains reliable before an acoustic update becomes essential. Simulation results demonstrated that our PF-TDoA fusion method achieved up to a 40% reduction in mean localization error compared to traditional fusion filters and optimization methods. In the experiment, we compared the simulated IMU prediction with real-world acoustic measurements, and the resulting fused position estimate remained within 3 m of the Global Positioning System (GPS)-reported trajectory, demonstrating robust performance under operational conditions.
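A compressed sketch of one fusion cycle follows (geometry, noise levels, and buoy positions are invented for illustration): particles are dead-reckoned with the IMU velocity, reweighted by a Gaussian likelihood of the measured TDoA between two buoys, and resampled to form the position estimate.

```python
import numpy as np

# Minimal 2-D particle-filter cycle: IMU-driven prediction, TDoA-based update,
# resampling. All numbers below are illustrative, not from the paper.

rng = np.random.default_rng(1)
C = 1500.0                                   # nominal sound speed, m/s
BUOY_A, BUOY_B = np.array([0.0, 0.0]), np.array([400.0, 0.0])

def tdoa(pos):                               # expected arrival-time difference
    return (np.linalg.norm(pos - BUOY_A, axis=-1)
            - np.linalg.norm(pos - BUOY_B, axis=-1)) / C

particles = rng.normal([150.0, 80.0], 20.0, size=(2000, 2))
weights = np.full(len(particles), 1.0 / len(particles))

imu_velocity, dt = np.array([1.2, -0.4]), 1.0          # IMU-derived velocity, m/s
measured_tdoa = tdoa(np.array([152.0, 79.0])) + rng.normal(0, 1e-4)

# 1) predict: dead-reckon each particle with the IMU velocity plus process noise
particles += imu_velocity * dt + rng.normal(0, 0.5, particles.shape)
# 2) update: Gaussian TDoA likelihood (sigma_t is an assumed timing noise)
sigma_t = 1e-3
weights *= np.exp(-0.5 * ((tdoa(particles) - measured_tdoa) / sigma_t) ** 2)
weights /= weights.sum()
# 3) resample and report the posterior-mean position estimate
idx = rng.choice(len(particles), size=len(particles), p=weights)
particles = particles[idx]
print(particles.mean(axis=0))
```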
Citations: 0
LSGRS: A geolocation and reputation-aware dynamic dual-layer sharding scheme for scalable vehicular blockchain networks
IF 4.8 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-19 DOI: 10.1016/j.adhoc.2026.104150
Zhiqiang Du, Qingyu Jiao, Xiaopeng Zhang, Muhong Huang, Siqi Zheng, Wendong Zhang, Yanfang Fu, Alwyn Hoffman
In vehicular networks, the massive influx of connected vehicles and high-frequency data transmissions expose scalability bottlenecks in existing architectures. Sharded blockchains enable parallel processing via network partitioning. However, conventional sharding schemes struggle to cope with the high mobility of vehicles and the dynamic nature of node states—particularly in terms of adapting shard strategies.
To address these limitations, this paper proposes a dynamic dual-layer sharding mechanism, termed LSGRS, specifically tailored for highly mobile vehicular networks. LSGRS incorporates a dynamic network sharding mechanism that optimizes node selection based on reputation scores, physical proximity, and predicted vehicle trajectories. This approach reduces intra-shard communication latency and mitigates the risk of shard compromise by malicious nodes. A dual-layer hierarchical architecture is designed to separate mining and packaging tasks across two distinct layers, effectively alleviating the performance bottlenecks. In addition, the proposed Dynamic Node Join and Exit Protocol (DNJEP) ensures real-time adaptation to node failures or adversarial behaviors without disrupting ongoing services. Finally, the proposed scheme is evaluated through experiments in a simulated urban environment built with Veins and SUMO. The results demonstrate that LSGRS outperforms baseline approaches in terms of communication overhead, transactions per second (TPS), and consensus latency.
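As a toy illustration of the node-selection idea (the weights, the straight-line trajectory model, and the scoring function are our assumptions, not the paper's protocol): candidate vehicles are ranked for a shard by combining their reputation with their predicted proximity to the shard's geographic anchor at the end of the epoch, and the top-k form the committee.

```python
import math

# Toy sketch of reputation- and geolocation-aware shard committee selection.
# Weights, thresholds, and the constant-velocity trajectory model are assumed.

def select_shard_members(candidates, anchor, epoch_s=10.0, k=3,
                         w_rep=0.6, w_geo=0.4):
    scored = []
    for v in candidates:
        # predicted position at the end of the epoch (constant-velocity model)
        predicted = (v["pos"][0] + v["vel"][0] * epoch_s,
                     v["pos"][1] + v["vel"][1] * epoch_s)
        geo = 1.0 / (1.0 + math.dist(predicted, anchor) / 500.0)
        scored.append((w_rep * v["rep"] + w_geo * geo, v["id"]))
    return [vid for _, vid in sorted(scored, reverse=True)[:k]]

candidates = [
    {"id": "V1", "rep": 0.9, "pos": (100.0, 0.0), "vel": (10.0, 0.0)},
    {"id": "V2", "rep": 0.4, "pos": (120.0, 10.0), "vel": (12.0, 0.0)},
    {"id": "V3", "rep": 0.8, "pos": (900.0, 50.0), "vel": (-20.0, 0.0)},
    {"id": "V4", "rep": 0.7, "pos": (150.0, -20.0), "vel": (9.0, 1.0)},
]
print(select_shard_members(candidates, anchor=(250.0, 0.0)))
```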
Citations: 0