
Latest Publications in Computer Networks

Demand aggregation-based transmission in remote sensing satellite networks
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-31 | DOI: 10.1016/j.comnet.2026.112066
Jing Chen, Xiaoqiang Di, Yuming Jiang, Hui Qi, Jinyao Liu, Xu Yan
As remote sensing satellite networks develop, directly linking user terminals to satellites to access data is becoming a key trend. To meet growing user demands while managing limited transmission resources, this paper proposes a Demand Aggregation-based Network Utility Maximization Transmission Scheme (DANUMTS). It uses the NDN architecture and demand aggregation based on spatio-temporal attributes of remote sensing data to prevent redundant data transmission and resource waste. The scheme also designs a demand-link matching matrix for demand selection at each hop and establishes a cooperative rate control model between terminals and networks. By applying the Lagrangian dual method, the model is divided into two subproblems to simplify the optimization process and enable real-time decision-making. Simulation results demonstrate that DANUMTS outperforms existing methods in terms of demand completion time, data rate, network throughput, and the number of completed demands, with more significant improvements when demand aggregation opportunities arise.
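The dual decomposition described here follows the standard network utility maximization (NUM) pattern: terminals solve local rate subproblems while the network updates link prices. The sketch below illustrates that pattern only; the log-utility objective, single bottleneck link, and step size are illustrative assumptions rather than details from the paper.

```python
# Minimal network-utility-maximization sketch via Lagrangian dual decomposition.
# Assumptions (not from the paper): log-utility terminals w_i*log(x_i), a single
# bottleneck link of capacity C, and a fixed subgradient step size.

import numpy as np

def num_dual(weights, capacity, step=0.01, iters=2000):
    lam = 1.0                                   # dual price of the shared link
    rates = np.zeros_like(weights, dtype=float)
    for _ in range(iters):
        # Terminal subproblem: each source maximizes w_i*log(x_i) - lam*x_i.
        rates = weights / lam
        # Network subproblem: price update by projected subgradient ascent.
        lam = max(1e-6, lam + step * (rates.sum() - capacity))
    return rates, lam

if __name__ == "__main__":
    w = np.array([1.0, 2.0, 1.5])               # per-demand utility weights
    x, price = num_dual(w, capacity=10.0)
    print("allocated rates:", np.round(x, 3), "sum:", round(x.sum(), 3))
```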
Citations: 0
Adaptive ring-synchronized hierarchical routing for energy-efficient and congestion-aware data dissemination in mobile wireless sensor networks
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-31 | DOI: 10.1016/j.comnet.2026.112077
MingXing Lu, Zhao Wang, Hongbo Fan
A vital part of the Internet of Things (IoT) ecosystem, Mobile Wireless Sensor Networks (MWSNs) allow for intelligent monitoring in dynamic settings including smart cities, healthcare, and disaster management. Quality of Service (QoS) is harmed by the difficulties brought about by the mobility of sensor and sink nodes, such as unequal energy consumption, congestion close to mobile sinks, and unstable routing. This research suggests the Bio-Inspired Dynamic Swarm Routing (BDSR) protocol, an energy-adaptive and self-organizing routing architecture made for large-scale MWSNs, as a solution to these problems. To accomplish congestion-aware and energy-balanced communication, BDSR combines dynamic energy clustering, swarm-driven ring adaptation, and predictive pheromone learning. Based on local pheromone gradients, queue usage, and residual energy, the protocol automatically modifies cluster formation and routing weights. Compared to state-of-the-art techniques like SMEOR, AECR, MSHRP, and Hybrid GS-MBO, extensive NS-2 simulations demonstrate that BDSR increases throughput by 47%, lowers latency and end-to-end delay by 52%, and prolongs network lifetime by 38%. For high-mobility IoT applications that need real-time, energy-efficient, and congestion-tolerant data dissemination, the results validate the scalability and resilience of BDSR.
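The combination of pheromone gradients, queue usage, and residual energy suggests an ant-colony-style next-hop score. The snippet below is a hedged illustration of such a weighting; the field names, exponents, and roulette-wheel selection are assumptions and do not reproduce BDSR itself.

```python
# Illustrative next-hop scoring in the spirit of pheromone-, queue-, and
# energy-aware routing. Field names, weights, and the probabilistic selection
# rule are assumptions for this sketch, not taken from BDSR.

import random

def next_hop(neighbors, alpha=1.0, beta=1.0, gamma=1.0):
    """neighbors: list of dicts with 'id', 'pheromone', 'residual_energy',
    'queue_utilization' (all normalized to [0, 1])."""
    scores = []
    for n in neighbors:
        # Favor strong pheromone trails and residual energy, penalize congestion.
        score = (n["pheromone"] ** alpha) * (n["residual_energy"] ** beta) \
                * ((1.0 - n["queue_utilization"]) ** gamma)
        scores.append(score)
    total = sum(scores) or 1.0
    # Roulette-wheel selection proportional to the combined score.
    r, acc = random.uniform(0, total), 0.0
    for n, s in zip(neighbors, scores):
        acc += s
        if r <= acc:
            return n["id"]
    return neighbors[-1]["id"]

if __name__ == "__main__":
    nbrs = [
        {"id": "A", "pheromone": 0.8, "residual_energy": 0.6, "queue_utilization": 0.2},
        {"id": "B", "pheromone": 0.5, "residual_energy": 0.9, "queue_utilization": 0.7},
    ]
    print("selected next hop:", next_hop(nbrs))
```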
Citations: 0
Optimizing handover decisions with skipping mechanisms in 5G mmWave UDNs using reinforcement learning
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-31 | DOI: 10.1016/j.comnet.2026.112081
Abate Selamawit Chane, Harun Ur Rashid, Kamrul Hasan, Awoke Loret Abiy, Seong Ho Jeong
The rapid evolution of 5G and emerging technologies is reshaping cellular network architectures. In order to support the growing demands for these technologies, many network designs now incorporate Ultra-Dense Networks (UDNs), particularly in the millimeter wave (mmWave) operations, where dense base station layouts are utilized to overcome propagation challenges and improve capacity. However, such dense deployments significantly complicate mobility management by triggering more frequent handovers, leading to increased signaling overhead and frequent service disruption, as many of these handovers are redundant or offer minimal benefit. To minimize interruptions caused by frequent handovers (HOs), effective handover decision strategies are critical. Several existing schemes have been developed for low to medium mobility scenarios and typically rely on static decision policies, which fail to account for the dynamic nature of the network. Others apply reinforcement learning techniques, yet their evaluations are often restricted to limited mobility settings and lack validation under high-speed conditions. To address these limitations, we propose a handover decision framework based on deep reinforcement learning (DRL) to intelligently suppress unnecessary handovers in mmWave UDNs. The framework leverages the Advantage Actor-Critic (A2C) algorithm, which is well-suited for learning optimal policies in dynamic network environments. A handover skipping strategy is incorporated to improve mobility robustness. Performance is evaluated using handover rate and throughput as key metrics. Experimental results demonstrate that the proposed scheme effectively learns optimal handover behavior through extensive training and outperforms several benchmark approaches from prior studies. As user speed increases, the proposed approach exhibits the most stable handover performance, with only a 28.74% increase in handover rate and outperforms the baselines, which show increases ranging from 60.7% to 91.6%. It also demonstrates strong resilience to mobility-induced degradation, with just a 10% drop in throughput, significantly lower than the 21.3% to 57.1% drops observed in the baseline schemes. In high-speed scenarios, the integration of dynamic handover skipping further improves the algorithm’s performance, yielding an 82.1% increase in cumulative reward and a 39% improvement in throughput.
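A2C learns a policy (actor) and a state-value estimate (critic) and updates both from the temporal-difference advantage. The minimal sketch below shows that update for a binary hand-over/skip decision with linear function approximation; the state features, toy reward, and hyper-parameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal advantage actor-critic (A2C) update for a binary handover decision
# ("hand over" vs. "skip") with linear function approximation. All features,
# rewards, and constants below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS = 4, 2          # e.g. [RSRP serving, RSRP target, speed, dwell estimate]
theta = np.zeros((N_ACTIONS, STATE_DIM))   # actor (softmax policy) weights
w = np.zeros(STATE_DIM)                     # critic (state value) weights
GAMMA, LR_ACTOR, LR_CRITIC = 0.95, 0.01, 0.05

def policy(s):
    logits = theta @ s
    p = np.exp(logits - logits.max())
    return p / p.sum()

def a2c_step(s, a, r, s_next):
    """One online A2C update from a single transition (s, a, r, s_next)."""
    global theta, w
    advantage = r + GAMMA * (w @ s_next) - (w @ s)   # TD error used as advantage
    w += LR_CRITIC * advantage * s                   # critic: TD(0) update
    p = policy(s)
    grad_log = -np.outer(p, s)                       # d log pi(a|s) / d theta
    grad_log[a] += s
    theta += LR_ACTOR * advantage * grad_log         # actor: policy-gradient step

if __name__ == "__main__":
    for _ in range(1000):                            # toy interaction loop
        s = rng.random(STATE_DIM)
        a = rng.choice(N_ACTIONS, p=policy(s))
        r = 1.0 if a == 1 else -0.2                  # toy reward favoring skipping
        a2c_step(s, a, r, rng.random(STATE_DIM))
    print("P(skip) in a random state:", round(policy(rng.random(STATE_DIM))[1], 3))
```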
Citations: 0
Non-functional certification of edge-computing satellite systems
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-31 | DOI: 10.1016/j.comnet.2026.112036
Filippo Berto, Marco Anisetti, Qiyang Zhang, Shangguang Wang, Claudio A. Ardagna
Satellite telecommunication networks are playing an increasingly pivotal role in modern communication infrastructures, owing to their expansive coverage, high reliability, and growing capabilities in computing, storage, and bandwidth. In response to evolving market demands, mobile network operators are progressively integrating satellite systems with edge-cloud computing platforms to deliver advanced networking functionalities within a unified architecture. This integration places strong demands on the non-functional assessment (e.g., reliability, availability, and resource efficiency) of satellite-based edge nodes, introducing unprecedented challenges due to their unique operational constraints. In this paper, we propose a lightweight certification framework tailored for satellite computing systems, designed to assess and validate the non-functional posture of satellite edge networks. Our approach explicitly addresses the distinctive characteristics of satellite environments, including intermittent connectivity and constrained resource availability. We validate the proposed scheme through a realistic testbed implementation, modeling a 5G-enabled satellite edge node based on the Tiansuan satellite constellation, an experimental platform jointly developed by Beijing University of Posts and Telecommunications, Spacety, and Peking University.
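As a rough illustration of what a lightweight non-functional check might look like, the sketch below evaluates probe evidence (availability and 95th-percentile latency) against certification targets; the metric names and thresholds are assumptions for the example, not the framework proposed in the paper.

```python
# A deliberately small sketch of threshold-based non-functional evidence
# evaluation for an edge node. Metric names and targets are illustrative
# assumptions, not the paper's certification scheme.

from statistics import mean

def evaluate_evidence(samples, targets):
    """samples: list of dicts with observed probe metrics; targets: required bounds."""
    availability = mean(s["up"] for s in samples)          # fraction of probes answered
    latencies = sorted(s["latency_ms"] for s in samples)
    p95 = latencies[max(0, int(0.95 * len(latencies)) - 1)]
    checks = {
        "availability": availability >= targets["min_availability"],
        "latency_p95": p95 <= targets["max_latency_ms"],
    }
    return {"metrics": {"availability": availability, "latency_p95_ms": p95},
            "certified": all(checks.values()), "checks": checks}

if __name__ == "__main__":
    probes = [{"up": 1, "latency_ms": 40 + i % 25} for i in range(100)]
    print(evaluate_evidence(probes, {"min_availability": 0.99, "max_latency_ms": 80}))
```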
Citations: 0
Graph-based fast-flux domain detection using graph neural networks
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-30 | DOI: 10.1016/j.comnet.2026.112075
Wei Xiong, Yang Wang, Haiyang Jiang, Hongtao Guan
Fast-flux domains are frequently exploited by cybercriminals to perform various attacks, making their detection crucial for maintaining network security. Traditional detection methods rely on manually defined statistical indicators to characterize the spatial distribution of a domain’s associated hosts, including the resolved hosts and authoritative name servers. However, given the increasingly decentralized nature of internet services, these statistical indicators may fail to capture the feature completely, resulting in inaccurate detection. To address this limitation, our proposed method leverages a graph structure to not only provide a more comprehensive representation of the existing feature but also incorporate a supplementary feature considering the spatial distribution between a domain’s client and the resolved hosts assigned to it. At the same time, we customize a graph sampling method to avoid significant increase in detection time caused by excessive graph size. To determine whether the constructed graph represents a fast-flux or benign domain, twelve types of Graph Neural Network (GNN) models, formed by pairwise combinations of three graph convolution methods and four graph pooling methods, are examined. Evaluation datasets are constructed from both public sources and real-world data, demonstrating that the GAT-SAG model performs optimally among the twelve GNN models and significantly outperforms state-of-the-art statistics-based models in terms of accuracy, with only a tolerable increase in time consumption.
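A GAT-with-self-attention-graph-pooling classifier can be sketched with PyTorch Geometric by stacking graph attention layers with SAG pooling and a graph-level readout. The example below is such a sketch under assumed layer sizes and features; the paper's exact GAT-SAG architecture and domain-graph construction may differ.

```python
# Sketch of a GAT + self-attention graph pooling (SAG) classifier over a
# domain-host graph, assuming PyTorch Geometric is installed. Layer sizes,
# pooling ratio, and the two-class output are assumptions for the example.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, SAGPooling, global_mean_pool

class GATSAG(torch.nn.Module):
    def __init__(self, in_dim, hidden=64, heads=4, num_classes=2):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden, heads=heads)
        self.pool = SAGPooling(hidden * heads, ratio=0.5)
        self.gat2 = GATConv(hidden * heads, hidden, heads=1)
        self.lin = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.elu(self.gat1(x, edge_index))
        # Keep the most informative half of the nodes (self-attention pooling).
        x, edge_index, _, batch, _, _ = self.pool(x, edge_index, batch=batch)
        x = F.elu(self.gat2(x, edge_index))
        x = global_mean_pool(x, batch)          # one embedding per domain graph
        return self.lin(x)

if __name__ == "__main__":
    x = torch.randn(10, 16)                      # 10 host/name-server nodes, 16 features each
    edge_index = torch.randint(0, 10, (2, 30))   # random domain-host edges
    batch = torch.zeros(10, dtype=torch.long)    # a single graph
    logits = GATSAG(in_dim=16)(x, edge_index, batch)
    print("class logits:", logits.detach().numpy())
```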
Citations: 0
A botnet detection method for encrypted DNS traffic based on multi-branch knowledge distillation
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-28 | DOI: 10.1016/j.comnet.2026.112060
Zhipeng Qin, Hanbing Yan, Xiangyu Li, Peng Wang
With advancements in encrypted network communication technologies, botnets increasingly use encrypted DNS traffic to spread covertly and execute attacks. Botnet traffic exhibits diverse and complex behaviors, and detecting botnets within encrypted DNS traffic poses challenges, such as high concealment, low detection efficiency, and difficulties in feature matching. To address these issues, this paper proposes a botnet detection method for encrypted DNS traffic based on multi-branch knowledge distillation. This method utilizes an adaptive feature extraction algorithm to capture encrypted DNS traffic features, applies spatial clustering based on traffic characteristics for multi-classification of botnets, and adopts a multi-level knowledge distillation strategy to develop several specialized botnet detection models. These models operate in parallel, enhancing detection efficiency and accuracy. Experimental results demonstrate that this approach significantly reduces computational complexity while maintaining high precision, improving detection efficiency and real-time capabilities.
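Knowledge distillation trains a compact student to match a teacher's temperature-softened output distribution in addition to the hard labels. The loss below is the standard single-teacher form, shown only to make the mechanism concrete; the temperature and weighting are assumptions, and the paper's multi-branch, multi-level strategy is more elaborate.

```python
# Standard single-teacher knowledge-distillation loss in PyTorch: soft KL term
# against the teacher plus hard cross-entropy against the labels. Temperature
# and alpha are illustrative assumptions.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: KL between temperature-softened distributions, scaled by T^2.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard-target term: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

if __name__ == "__main__":
    student = torch.randn(8, 5, requires_grad=True)   # 8 flows, 5 botnet classes
    teacher = torch.randn(8, 5)
    labels = torch.randint(0, 5, (8,))
    loss = distillation_loss(student, teacher, labels)
    loss.backward()
    print("distillation loss:", float(loss))
```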
Citations: 0
An explainable transformer-based model for phishing email detection: A large language model approach
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-28 | DOI: 10.1016/j.comnet.2026.112061
Mohammad Amaz Uddin, Md Mahiuddin, Iqbal H. Sarker
Phishing email is a serious cyber threat that tries to deceive users by sending false emails with the intention of stealing confidential information or causing financial harm. Attackers, often posing as trustworthy entities, exploit technological advancements and sophistication to make the detection and prevention of phishing more challenging. Despite extensive academic research, phishing detection remains an ongoing and formidable challenge in the cybersecurity landscape. In this research paper, we present a fine-tuned transformer-based masked language model, RoBERTa (Robustly Optimized BERT Pretraining Approach), for phishing email detection. In the detection process, we employ a phishing email dataset and apply the preprocessing techniques to clean and address the class imbalance issues, thereby enhancing model performance. The results of the experiment demonstrate that our fine-tuned model outperforms traditional machine learning models with an accuracy of 98.45%. To ensure model transparency and user trust, we propose a hybrid explanation approach, LITA (LIME-Transformer Attribution), which integrates the potential of Local Interpretable Model-Agnostic Explanations (LIME) and Transformers Interpret methods. The proposed method provides more consistent and user-friendly insights, mitigating local attribution inconsistencies between the two explanation approaches. Moreover, the study highlights the model’s ability to generate its predictions by presenting positive and negative contribution scores using LIME, Transformers Interpret, and LITA.
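Scoring an email with a fine-tuned RoBERTa classifier and explaining the prediction with LIME can be sketched as below, assuming the transformers and lime packages and a hypothetical local checkpoint path ./roberta-phishing; the paper's hybrid LITA attribution and Transformers Interpret component are not reproduced here.

```python
# Hedged sketch: classify an email with a fine-tuned RoBERTa model and explain
# the prediction with LIME. "./roberta-phishing" is a hypothetical checkpoint
# path assumed for the example.

import torch
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification
from lime.lime_text import LimeTextExplainer

MODEL_DIR = "./roberta-phishing"                      # assumed fine-tuned checkpoint
tokenizer = RobertaTokenizerFast.from_pretrained(MODEL_DIR)
model = RobertaForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

def predict_proba(texts):
    """Return class probabilities for a list of email texts (the API LIME expects)."""
    enc = tokenizer(list(texts), truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=1).numpy()

if __name__ == "__main__":
    email = "Your account is locked. Verify your password at http://example.test now."
    explainer = LimeTextExplainer(class_names=["legitimate", "phishing"])
    exp = explainer.explain_instance(email, predict_proba, num_features=8)
    print("P(phishing):", predict_proba([email])[0][1])
    print(exp.as_list())                              # per-token contribution scores
```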
Citations: 0
Threshold-based eavesdropper detection for partial intercept-resend attack in noisy BB84 quantum key distribution
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-26 | DOI: 10.1016/j.comnet.2026.112058
Francesco Fiorini, Rosario G. Garroppo, Michele Pagano
Quantum Key Distribution (QKD) protocols are critical for ensuring secure communication against the threats posed by post-quantum technologies. Among these, the BB84 protocol remains the most widely studied and implemented QKD scheme, providing a foundation for secure communication based on the principles of quantum mechanics. This paper investigates the BB84 protocol under a partial intercept-resend attack in a realistic scenario that accounts for system noise. In this context, existing attack detection methods rely on estimating the quantum bit error rate (QBER) in the portion of key bits exchanged over the classical channel to identify the attack. The proposed approach introduces a novel scheme in which the two communicating parties agree on the maximum fraction of shared key bits that can be correctly intercepted by the attacker. This parameter can be configured according to the security requirements of the application. The paper first presents the theoretical model for computing this parameter, which is subsequently used to develop a threshold-based detection method. Unlike other detection methods for intercept-resend attacks, the proposed scheme is independent of the interception density and relies solely on the system noise and the application’s security requirements. Finally, an enhanced version of the Python Quantum Solver library is implemented to test the proposed method using the Qiskit framework. Simulation results demonstrate the high accuracy and very low false negative rate of the proposed method, with a slight degradation in performance observed when the actual interception rate approaches the threshold defined by the security requirements.
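A threshold test on the estimated QBER can be illustrated with a small Monte-Carlo simulation of BB84 under channel noise and a partial intercept-resend attack. The noise level, interception density, and decision threshold below are assumptions for the sketch, not the values derived in the paper.

```python
# Monte-Carlo sketch of QBER-based detection for a partial intercept-resend
# attack on BB84 with channel noise. All parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def simulate_qber(n_bits=20000, noise=0.02, intercept_fraction=0.25):
    bits = rng.integers(0, 2, n_bits)
    bases_a = rng.integers(0, 2, n_bits)              # Alice's random bases
    bases_b = rng.integers(0, 2, n_bits)              # Bob's random bases
    received = bits.copy()

    # Eve intercepts a fraction of qubits in a random basis and resends them.
    eve_mask = rng.random(n_bits) < intercept_fraction
    eve_bases = rng.integers(0, 2, n_bits)
    wrong_basis = eve_mask & (eve_bases != bases_a)
    # A wrong-basis measurement randomizes the resent bit half of the time.
    received[wrong_basis] ^= rng.integers(0, 2, wrong_basis.sum())

    # Channel noise flips bits independently.
    received ^= (rng.random(n_bits) < noise).astype(received.dtype)

    sifted = bases_a == bases_b                        # keep matching-basis positions
    return float(np.mean(bits[sifted] != received[sifted]))

if __name__ == "__main__":
    threshold = 0.05                                   # tolerated QBER given system noise
    qber = simulate_qber()
    print(f"estimated QBER = {qber:.4f} -> attack detected: {qber > threshold}")
```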
Citations: 0
A comprehensive approach for the onboarding, orchestration, and validation of network applications
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-25 | DOI: 10.1016/j.comnet.2026.112057
Rafael Direito, Kostis Trantzas, Jorge Gallego-Madrid, Ana Hermosilla, Diogo Gomes, Christos Tranoris, Rui L.A. Aguiar, Antonio Skarmeta, Spyros Denazis
The advent of 5G and Beyond 5G networks has propelled the development of innovative applications and services that harness network programmability, data from management and control interfaces, and the capabilities of network slicing. However, ensuring these applications function as intended and effectively utilize 5G/B5G capabilities remains a challenge, mainly due to their reliance on complex interactions with control plane Network Functions. This work addresses this issue by proposing a novel architecture to enhance the onboarding, orchestration, and validation of 5G/B5G-capable applications and services, while enabling the creation of application-tailored network slices. By integrating DevOps principles into the NFV ecosystem, the proposed architecture automates workflows for deployment, testing, and validation, while adhering to standardized onboarding models and continuous integration practices. Furthermore, we also address the realization of such architecture into a platform that supports extensive testing across multiple dimensions, including 5G readiness, security, performance, scalability, and availability. Besides introducing such a platform, this work also demonstrates its feasibility through the orchestration and validation of an automotive application that manages virtual On-Board Units within a 5G-enabled environment. The obtained results underscore the effectiveness of the proposed architecture, as well as the performance and scalability of the platform that materializes it. By integrating DevOps principles, our work aids in reducing deployment complexity, automating testing and validation, and enhancing the reliability of next-generation Network Applications, therefore accelerating their time-to-market.
Citations: 0
A survey of learning-based intrusion detection systems for in-vehicle networks
IF 4.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-23 | DOI: 10.1016/j.comnet.2026.112031
Muzun Althunayyan, Amir Javed, Omer Rana
Connected and Autonomous Vehicles (CAVs) have advanced modern transportation by improving the efficiency, safety, and convenience of mobility through automation and connectivity, yet they remain vulnerable to cybersecurity threats, particularly through the insecure Controller Area Network (CAN) bus. Cyberattacks can have devastating consequences in connected vehicles, including the loss of control over critical systems, necessitating robust security solutions. In-vehicle Intrusion Detection Systems (IDSs) offer a promising approach by detecting malicious activities in real time. This survey provides a comprehensive review of state-of-the-art research on learning-based in-vehicle IDSs, focusing on Machine Learning (ML), Deep Learning (DL), and Federated Learning (FL) approaches. Based on the reviewed studies, we critically examine existing IDS approaches, categorising them by the types of attacks they detect (known, unknown, and combined known-unknown attacks) while identifying their limitations. We also review the evaluation metrics used in research, emphasising the need to consider multiple criteria to meet the requirements of safety-critical systems. Additionally, we analyse FL-based IDSs and highlight their limitations. By doing so, this survey helps identify effective security measures, address existing limitations, and guide future research toward more resilient and adaptive protection mechanisms, ensuring the safety and reliability of CAVs.
Citations: 0