
Latest Publications in Computer Networks

Vertex-independent spanning trees in data center network BCDC
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-30 | DOI: 10.1016/j.comnet.2025.111981
Jiakang Ma, Baolei Cheng, Yan Wang, Jianxi Fan, Junkai Zhu
The performance of data center networks largely determines cloud computing efficiency. BCDC is a high-performance data center network whose logical graph is exactly the line graph of the n-dimensional crossed cube (CQn). However, there are few studies on its vertex-independent spanning trees (VISTs), and constructing VISTs rooted at an arbitrary vertex of BCDC has remained an open question. In this paper, an algorithm is proposed to construct such VISTs. First, a parallel algorithm is adopted to construct n − 1 trees in CQn. These trees are then transformed into 2n − 2 mutually independent trees in BCDC. Subsequently, by hanging vertices on these trees, 2n − 2 VISTs rooted at an arbitrary vertex of BCDC are obtained. Finally, simulations using Python's Matplotlib and NumPy packages show that the discrepancy between the average path length and the network diameter remains within 0.5, and the communication success rate stays above 60% even under a 30% vertex failure rate, verifying the network's high efficiency and strong security in data transmission.
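The two reported metrics lend themselves to a small stand-alone experiment. Below is a minimal Python sketch, assuming networkx is available and using a hypercube purely as a stand-in topology (BCDC/CQn generators are not in standard libraries), that estimates the gap between average path length and diameter, and delivery success under random vertex failures.

```python
# Minimal sketch (not the paper's algorithm): estimate the two reported
# metrics -- average path length vs. diameter, and delivery success under
# random vertex failures -- on a stand-in topology. networkx's hypercube
# is used here only as a placeholder for BCDC / CQ_n.
import random
import networkx as nx

def evaluate(graph, root, failure_rate=0.3, trials=100):
    nodes = [v for v in graph.nodes if v != root]
    diameter = nx.diameter(graph)
    lengths = nx.single_source_shortest_path_length(graph, root)
    avg_len = sum(lengths[v] for v in nodes) / len(nodes)
    successes = 0.0
    for _ in range(trials):
        failed = set(random.sample(nodes, int(failure_rate * len(nodes))))
        alive = graph.subgraph(set(graph.nodes) - failed)
        reachable = nx.node_connected_component(alive, root)
        survivors = [v for v in nodes if v not in failed]
        successes += sum(v in reachable for v in survivors) / len(survivors)
    return avg_len - diameter, successes / trials

random.seed(0)
g = nx.hypercube_graph(4)                    # placeholder for the BCDC topology
gap, success = evaluate(g, root=(0, 0, 0, 0))
print(f"avg-path-minus-diameter: {gap:.2f}, success rate: {success:.2%}")
```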
Citations: 0
Closed-Form Analytics of Multicell Massive MIMO System Using M-MMSE and TPE Techniques in Correlated Environment
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-29 | DOI: 10.1016/j.comnet.2025.111965
Harleen Kaur, Ankush Kansal
This work computes the average ergodic user rate of a multicell massive Multiple-Input Multiple-Output (mMIMO) system based on the Multicell Minimum Mean Squared Error (M-MMSE) and Truncated Polynomial Expansion (TPE) techniques. By applying Random Matrix Theory (RMT) and large-system analysis, a deterministic expression for the system's Signal-to-Interference-plus-Noise Ratio (SINR) under the M-MMSE scheme in uplink and downlink modes is computed, leading to the system's average user rate. The M-MMSE scheme involves Gram matrix inversion, which increases the system's latency and complexity. This problem is solved by approximating the matrix inverse with TPE, which involves only simple operations that can be parallelized. Moreover, the complexity of the TPE technique depends only on the TPE order rather than on the system's dimensions. Based on RMT, the deterministic equivalents required for the SINRs of the TPE scheme in uplink and downlink modes are derived. These deterministic equivalents are then optimized to compute the system's average user rate, matching the performance of the M-MMSE technique at a low TPE order. In Section 6, the derived average user rate is validated by varying the system parameters. The comparison between the M-MMSE and TPE schemes shows that the TPE scheme achieves the required performance at TPE order J = 3. The theoretical results confirm the accuracy of the derived deterministic equivalents.
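The core TPE trick, replacing an explicit Gram-matrix inverse with a truncated polynomial that needs only matrix products, can be illustrated in a few lines of NumPy. This is a generic Neumann-series sketch under assumed dimensions, not the paper's SINR derivation; the scaling factor alpha below is one standard choice that guarantees convergence for Hermitian positive-definite matrices.

```python
# Minimal sketch of the TPE idea: approximate the inverse of a Hermitian
# positive-definite Gram matrix by a truncated polynomial (Neumann) series,
# avoiding explicit inversion. J is the TPE order; the abstract reports
# that J = 3 suffices to track M-MMSE performance.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
G = H.conj().T @ H + np.eye(4)           # Gram matrix plus regularization

def tpe_inverse(A, J):
    lam = np.linalg.eigvalsh(A)          # real eigenvalues, ascending
    alpha = 2.0 / (lam[0] + lam[-1])     # makes rho(I - alpha*A) < 1
    R = np.eye(A.shape[0]) - alpha * A
    term = np.eye(A.shape[0], dtype=A.dtype)
    acc = np.eye(A.shape[0], dtype=A.dtype)
    for _ in range(J):
        term = term @ R
        acc = acc + term
    return alpha * acc                   # alpha * sum_{k=0..J} (I - alpha*A)^k

for J in (1, 3, 7):
    err = np.linalg.norm(tpe_inverse(G, J) - np.linalg.inv(G))
    print(f"TPE order J={J}: approximation error {err:.3e}")
```

The error shrinks geometrically with J, which is why a small fixed order can already match the exact inverse closely while keeping complexity independent of repeated inversions.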
Citations: 0
Collaborative multi-task offloading in multi-edge system for AI-generated content service
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-29 | DOI: 10.1016/j.comnet.2025.111979
Zhiyuan Li, Jie Sun
As artificial intelligence-generated content (AIGC) services become increasingly prevalent in edge networks, the demand for rapid and efficient processing in latency-sensitive applications continues to grow. Traditional task offloading strategies often struggle to coordinate heterogeneous resources, such as GPU and TPU clusters, resulting in imbalanced load distribution and underutilization of specialized accelerators. To overcome these limitations, we propose the adaptive multi-edge load balancing optimization (AMBO) algorithm, designed to optimize collaborative task scheduling among edge servers. AMBO utilizes an online reinforcement learning approach, decomposing the task offloading process into edge server selection and load balancing functions, which enables intelligent scheduling across nodes with varying computational capacities. Furthermore, by integrating the dueling Deep Q-Network (DQN) framework, AMBO enhances decision-making accuracy and stability in dynamic edge environments. Extensive experimental results demonstrate that AMBO significantly improves task offloading efficiency, reducing task completion time by 79.04% and achieving a task completion rate of 99.89%. These results highlight the algorithm’s strong adaptability and effectiveness in heterogeneous edge computing scenarios, making it well-suited for supporting the next generation of latency-sensitive AIGC services.
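As a reference point for the dueling DQN component named in the abstract, here is a minimal PyTorch sketch of the dueling value/advantage decomposition over a flat state vector; the state layout, action space, and any training loop are assumptions, not AMBO's actual design.

```python
# Minimal sketch of a dueling DQN head: a shared trunk feeds separate
# state-value and advantage streams, recombined into Q-values.
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, state):
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps the decomposition identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

# Hypothetical use: 10 state features (server loads, task size, ...) and
# one discrete action per candidate edge server.
net = DuelingDQN(state_dim=10, n_actions=4)
q = net(torch.randn(2, 10))             # Q-values for a batch of two states
print(q.shape, q.argmax(dim=-1))        # greedy edge-server choice per state
```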
Citations: 0
Intelligent task management via dynamic multi-region division in LEO satellite networks
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-28 | DOI: 10.1016/j.comnet.2025.111976
Zixuan Song, Zhishu Shen, Xiaoyu Zheng, Qiushi Zheng, Zheng Lei, Jiong Jin
As a key complement to terrestrial networks and a fundamental component of future 6G systems, Low Earth Orbit (LEO) satellite networks are expected to provide high-quality communication services when integrated with ground-based infrastructure, thereby attracting significant research interest. However, the limited satellite onboard resources and the uneven distribution of computational workloads often result in congestion along inter-satellite links (ISLs) that degrades task processing efficiency. Effectively managing the dynamic and large-scale topology of LEO networks to ensure balanced task distribution remains a critical challenge. To this end, we propose a dynamic multi-region division framework for intelligent task management in LEO satellite networks. This framework optimizes both intra- and inter-region routing to minimize task delay while balancing the utilization of computational and communication resources. Based on this framework, we propose a dynamic multi-region division algorithm based on the Genetic Algorithm (GA), which adaptively adjusts the size of each region based on the workload status of individual satellites. Additionally, we incorporate an adaptive routing algorithm and a task splitting and offloading scheme based on Multi-Agent Deep Deterministic Policy Gradient (MA-DDPG) to effectively accommodate the arriving tasks. Simulation results show that the proposed framework outperforms existing methods by improving the task completion rate by up to 5.78%, reducing the average task delay by up to 330.5 ms, and lowering energy consumption per task by up to 0.165 J, demonstrating its effectiveness and scalability for large-scale LEO satellite networks.
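The GA-based region division can be pictured with a small sketch: each chromosome assigns satellites to regions, and fitness rewards a flat per-region workload. All workloads and GA hyperparameters below are illustrative assumptions, not values from the paper.

```python
# Minimal GA sketch: evolve satellite-to-region assignments so that the
# per-region workload spread (std. deviation) is minimized.
import numpy as np

rng = np.random.default_rng(1)
workload = rng.uniform(1.0, 10.0, size=60)   # per-satellite load (assumed)
N_REGIONS, POP, GENS = 4, 40, 200

def fitness(assign):
    loads = np.bincount(assign, weights=workload, minlength=N_REGIONS)
    return -loads.std()                      # flatter load spread is better

pop = rng.integers(0, N_REGIONS, size=(POP, workload.size))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]        # truncation selection
    cut = rng.integers(1, workload.size, size=POP // 2)
    kids = np.array([np.concatenate((parents[i][:c], parents[-1 - i][c:]))
                     for i, c in enumerate(cut)])        # one-point crossover
    mut = rng.random(kids.shape) < 0.02                  # light mutation
    kids[mut] = rng.integers(0, N_REGIONS, size=mut.sum())
    pop = np.vstack((parents, kids))

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(np.bincount(best, weights=workload, minlength=N_REGIONS))
```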
Citations: 0
Towards a robust transport network with self-adaptive network digital twin
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-27 | DOI: 10.1016/j.comnet.2025.111967
Cláudio Modesto, João Borges, Cleverson Nahum, Lucas Matni, Cristiano Bonato Both, Kleber Cardoso, Glauco Gonçalves, Ilan Correa, Silvia Lins, Andrey Silva, Aldebaro Klautau
The ability of the Network digital twin (NDT) to remain aware of changes in its physical counterpart, known as the physical twin (PTwin), is a fundamental condition for timely synchronization, also referred to as twinning. For a transport network, a key requirement is therefore to handle unexpected traffic variability and to adapt dynamically so that the associated virtual model, known as the virtual twin (VTwin), maintains optimal performance. In this context, we propose a self-adaptive implementation of a novel NDT architecture designed to provide accurate delay predictions even under fluctuating traffic conditions. This architecture addresses an essential challenge that is underexplored in the literature: making data-driven NDT platforms resilient to traffic variability and keeping the VTwin synchronized with its physical counterpart. The contributions of this article center on the operational phase of the NDT lifecycle, where telemetry modules monitor incoming traffic and concept-drift detection techniques guide retraining decisions that update and redeploy the VTwin when necessary. We validate our architecture with a network management use case, across various emulated network topologies and diverse traffic patterns, demonstrating its effectiveness in preserving acceptable performance and predicting quality of service (QoS) metrics such as delay and jitter under unexpected traffic variation. Across all tested topologies, using the normalized mean square error as the evaluation metric, our architecture achieves, after a traffic concept drift, performance improvements in per-flow delay and jitter prediction of at least 64% and 21%, respectively, compared to a configuration without NDT synchronization.
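The operational loop (telemetry, drift detection, retraining) can be sketched with a generic detector. The Page-Hinkley test below is one common choice for concept-drift detection on a prediction-error stream; the paper does not necessarily use it, and all thresholds and the synthetic error stream are assumptions.

```python
# Minimal sketch: a Page-Hinkley drift detector watches the VTwin's
# prediction-error stream; a detected drift triggers retrain/redeploy.
import numpy as np

class PageHinkley:
    def __init__(self, delta=0.05, threshold=5.0):
        self.delta, self.threshold = delta, threshold
        self.mean, self.cum, self.min_cum, self.n = 0.0, 0.0, 0.0, 0

    def update(self, error):
        self.n += 1
        self.mean += (error - self.mean) / self.n      # running mean
        self.cum += error - self.mean - self.delta     # cumulative deviation
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.threshold  # True -> drift

rng = np.random.default_rng(2)
errors = np.concatenate([rng.normal(0.1, 0.05, 500),   # stable traffic
                         rng.normal(1.0, 0.20, 100)])  # drifted traffic
detector = PageHinkley()
for t, e in enumerate(errors):
    if detector.update(e):
        print(f"drift at sample {t}: retrain and redeploy the VTwin")
        break
```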
Citations: 0
HAT: Leveraging hierarchical attention and temporal modeling for API-based malware detection
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-27 | DOI: 10.1016/j.comnet.2025.111971
Zhengyu Zhu, Shan Liao, Lei Zhang, Liang Liu
While runtime parameters have been incorporated to enhance API-based malware detection, existing approaches still fall short in fully capturing the structural and temporal characteristics of API call sequences, thereby limiting their generalization capability. In this paper, we propose HAT, a novel detection method that jointly models API sequences from both structural and temporal perspectives. HAT leverages a hierarchical attention mechanism to learn the varying importance of API names and their parameters, and integrates two complementary temporal modules to uncover execution patterns of malware that are underexplored in prior work. Extensive experiments on multiple datasets demonstrate that HAT consistently outperforms existing methods. Compared to approaches relying only on API names, HAT improves the F1-score by 5.50% to 30.87%. Compared to parameter-augmented approaches, it achieves superior detection and generalization, with F1-score improvements of 4.10% to 7.07%, benefiting from its unified modeling of structural and temporal aspects.
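A minimal PyTorch sketch of the hierarchical attention idea follows: one attention pool turns each call's parameter embeddings into a call vector, and a second pools the call sequence into a sample vector for classification. The vocabulary, dimensions, and shared embedding table are placeholders, not HAT's actual configuration, and the temporal modules are omitted.

```python
# Minimal sketch of two-level (hierarchical) attention over API calls:
# parameters -> call vector, then calls -> sample vector -> classifier.
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                      # x: (..., items, dim)
        w = torch.softmax(self.score(x), dim=-2)
        return (w * x).sum(dim=-2)             # attention-weighted sum

class HierarchicalAttn(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)  # shared name/param table
        self.param_pool = AttnPool(dim)        # parameters -> call vector
        self.call_pool = AttnPool(dim)         # calls -> sample vector
        self.head = nn.Linear(dim, 2)          # benign vs. malware logits

    def forward(self, names, params):
        # names: (batch, calls); params: (batch, calls, n_params)
        call_vec = self.embed(names) + self.param_pool(self.embed(params))
        return self.head(self.call_pool(call_vec))

model = HierarchicalAttn()
logits = model(torch.randint(0, 1000, (4, 20)),
               torch.randint(0, 1000, (4, 20, 5)))
print(logits.shape)                            # (4, 2)
```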
Citations: 0
Energy-efficient online knowledge distillation for mobile video inference
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-26 | DOI: 10.1016/j.comnet.2025.111962
Guangfeng Guo, Junxing Zhang, Baowei Liu
Wearable devices can assist users experiencing cognitive decline through context-aware scene interpretation. To do so, they must function in real time with sufficient functionality, performance, and usability. However, high-accuracy, low-delay scene interpretation relies on Deep Neural Network (DNN) inference over continuous video streams, which poses significant challenges for wearable devices because of their tight energy budget and unpredictable delay impact. In this paper, we propose a novel framework, EEOKD (Energy-Efficient Online Knowledge Distillation). The framework specializes in a high-accuracy, low-cost object detection model that automatically adapts to the target video, utilizes minimal bandwidth, and tolerates variations in network delay. First, we formalize the online knowledge distillation problem and introduce a metric, based on concept drift theory, for choosing the timing of online training. Second, we propose efficient asynchronous distributed algorithms that leverage the loss gradient to alleviate the impact of delay changes. Third, we propose a novel online knowledge distillation scheme that incorporates freshness-based importance sampling and batch training to enhance the student model's generalization ability while minimizing the number of training samples and reducing the frequency of weight updates. The method enhances energy efficiency by accelerating model convergence and maintains good detection performance even when network delays change considerably. Finally, we implement a system prototype and evaluate its performance and energy efficiency. Experimental results demonstrate that our EEOKD framework achieves a 13% increase in energy efficiency, approximately 60% lower network bandwidth usage, and an average 4% improvement in detection accuracy compared to existing methods.
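The freshness-based importance sampling mentioned in the abstract can be sketched in a few lines: each buffered sample is weighted by an exponentially decaying function of its age, so recent frames dominate each training batch. The decay rate and buffer layout are assumptions for illustration.

```python
# Minimal sketch: sample a distillation batch with probability proportional
# to an exponential "freshness" weight over sample age.
import numpy as np

rng = np.random.default_rng(3)

def sample_batch(buffer_times, now, batch_size, decay=0.01):
    age = now - np.asarray(buffer_times, dtype=float)
    weights = np.exp(-decay * age)            # newer samples weigh more
    probs = weights / weights.sum()
    return rng.choice(len(buffer_times), size=batch_size,
                      replace=False, p=probs)

times = np.arange(0, 1000, 10)                # arrival times of buffered frames
idx = sample_batch(times, now=1000, batch_size=8)
print(sorted(times[idx]))                     # mostly recent frames
```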
Citations: 0
An efficient and reliable mechanism for Wormhole detection in RPL based IoT networks
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-26 | DOI: 10.1016/j.comnet.2025.111968
Jawad Hassan, Muhammad Yousaf Ali Raza, Adnan Sohail, Muhammad Asim, Zeeshan Pervez
The Internet of Things (IoT) relies heavily on the Routing Protocol for Low-Power and Lossy Networks (RPL) to support large-scale, resource-constrained deployments. However, RPL faces major research challenges, including susceptibility to routing attacks, limited support for mutual authentication, and dynamic topology variations. In addition, traditional heavyweight cryptographic mechanisms are inefficient on constrained devices and, although they provide secure communication, remain ineffective against insider routing attacks. These weaknesses allow adversaries to exploit routing control messages, leading to attacks such as Wormhole, Rank, and DAO Inconsistency. Among these, Wormhole attacks are particularly severe because colluding nodes create deceptive low-latency tunnels, misleading neighboring nodes and disrupting the overall routing topology. Motivated by these challenges, this paper presents Efficient and Reliable Wormhole detection for IoT (ERW-IoT), a lightweight path validation mechanism that ensures routing integrity with minimal overhead. Simulation results show that ERW-IoT improves the average packet delivery ratio by 5.5%, reduces energy consumption by 0.986%, optimizes memory utilization by nearly 1%, and achieves a 100% detection rate, demonstrating its practicality and effectiveness in securing RPL-based IoT networks.
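For intuition only, a delay-based neighbour sanity check in the spirit of wormhole detectors is sketched below; ERW-IoT's actual path-validation mechanism is the paper's own contribution and is not reproduced here. The idea: a one-hop neighbour whose round-trip time far exceeds the RTT baseline of genuine neighbours is suspect, since tunnelling through a colluding pair adds forwarding and propagation delay. All numbers are illustrative assumptions.

```python
# Minimal sketch: flag one-hop neighbours whose RTT is an outlier relative
# to the median neighbour RTT -- a crude proxy for a wormhole tunnel.
import statistics

def flag_wormhole_neighbours(rtt_by_neighbour, tolerance=3.0):
    baseline = statistics.median(rtt_by_neighbour.values())
    return [n for n, rtt in rtt_by_neighbour.items()
            if rtt > tolerance * baseline]

# Hypothetical probe measurements (seconds): w1 tunnels via a colluder.
rtts = {"n1": 2.1e-3, "n2": 1.9e-3, "n3": 2.3e-3, "w1": 9.5e-3}
print(flag_wormhole_neighbours(rtts))        # ['w1'] -- suspected tunnel
```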
Citations: 0
HybridGuard: Enhancing minority-class intrusion detection in dew-enabled edge-of-things networks
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-25 | DOI: 10.1016/j.comnet.2025.111966
Binayak Kar, Ujjwal Sahu, Ciza Thomas, Jyoti Prakash Sahoo
Securing Dew-enabled edge-of-things (EoT) networks against sophisticated intrusions is both critical and challenging. This paper presents HybridGuard, a state-of-the-art framework that combines machine learning and deep learning to raise the bar for intrusion detection. HybridGuard addresses data imbalance by performing mutual-information-based feature selection, ensuring that the most important features are always considered and improving detection performance, especially for minority-class attacks. The proposed framework leverages Wasserstein Conditional Generative Adversarial Networks with gradient penalty (WCGAN-GP) to alleviate class imbalance, hence enhancing detection precision. The framework integrates a two-phase architecture named “DualNetShield” that introduces advanced network traffic analysis and anomaly detection techniques, enhancing the granular identification of threats within complex EoT environments. Tested on the UNSW-NB15, CIC-IDS-2017, and IOTID20 datasets, HybridGuard demonstrates robust performance over a wide variety of attack scenarios, outperforming existing solutions in adapting to evolving cybersecurity threats. This innovative approach establishes HybridGuard as a powerful tool for safeguarding EoT networks against modern intrusions.
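The mutual-information feature-selection step can be illustrated with scikit-learn on synthetic stand-in data; the WCGAN-GP balancing and DualNetShield stages are not reproduced here, and the data, feature count, and top-k cutoff are assumptions.

```python
# Minimal sketch: rank candidate flow features by mutual information with
# the class label and keep the most informative ones.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 20))             # 20 candidate flow features
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)  # label depends on features 3, 7

mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:5]                 # keep the 5 most informative
print("selected feature indices:", top)        # indices 3 and 7 rank high
```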
Citations: 0
Joint spectrum allocation and power control for D2D communication and sensing in 6G networks using DRL-based hyper-heuristics
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-25 | DOI: 10.1016/j.comnet.2025.111969
Gabriel Pimenta de Freitas Cardoso, Paulo Henrique Portela De Carvalho, Paulo Roberto de Lira Gondim
The ongoing evolution of mobile communication systems, particularly toward the sixth generation (6G), has opened new frontiers in the integration of communication and sensing technologies. In this context, Industry 4.0 demands efficient and intelligent solutions for supporting a growing number of interconnected devices while ensuring low latency and high spectral efficiency.
This study addresses the complex problem of joint resource allocation in systems that integrate primary communications, device-to-device (D2D) communication, and sensing, with a special focus on power control and spectrum sharing. It proposes a novel hyper-heuristic (HH) strategy powered by Deep Reinforcement Learning (DRL) that dynamically allocates resources and optimizes spectral usage in a 6G-enabled environment. Unlike traditional heuristic-based approaches that rely on fixed rules, the DRL-based HH learns from interactions with the environment and selects appropriate low-level heuristics (LLHs) for managing interference, meeting performance constraints, and improving D2D and sensing operations. A realistic simulation scenario inspired by industrial environments was modeled to evaluate the strategy's effectiveness.
The results show that the method can effectively balance the competing demands of different system components, dynamically adapt to environmental changes, and maintain compliance with detection and transmission constraints. By extending existing models to include D2D communication, channel uncertainties, and spectrum reallocation over time, the study contributes a scalable and intelligent solution for future wireless systems in complex industrial settings.
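The hyper-heuristic loop, a high-level policy repeatedly choosing among low-level heuristics, can be sketched with a simple epsilon-greedy bandit standing in for the paper's DRL agent. The LLH names and the reward model below are invented placeholders, not the paper's heuristics.

```python
# Minimal sketch: an epsilon-greedy selector learns which low-level
# heuristic (LLH) yields the largest objective improvement on average.
import random

LLHS = ["boost_d2d_power", "reassign_subchannel", "mute_interferer",
        "swap_sensing_slot"]                      # hypothetical LLH names

q = {h: 0.0 for h in LLHS}                        # estimated mean reward
counts = {h: 0 for h in LLHS}

def apply_llh(h):
    # Placeholder environment: the objective improvement that applying
    # heuristic h would produce in a simulator (assumed means).
    means = {"boost_d2d_power": 0.2, "reassign_subchannel": 0.5,
             "mute_interferer": 0.1, "swap_sensing_slot": 0.3}
    return random.gauss(means[h], 0.1)

random.seed(5)
for step in range(500):
    eps = max(0.05, 1.0 - step / 300)             # decaying exploration
    h = random.choice(LLHS) if random.random() < eps else max(q, key=q.get)
    r = apply_llh(h)
    counts[h] += 1
    q[h] += (r - q[h]) / counts[h]                # incremental mean update

print(max(q, key=q.get))                          # best-performing LLH
```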
Citations: 0