
Latest publications in Computer Networks

RAGN: Detecting unknown malicious network traffic using a robust adaptive graph neural network
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-03-10 | DOI: 10.1016/j.comnet.2025.111184
Ernest Akpaku, Jinfu Chen, Mukhtar Ahmed, Francis Kwadzo Agbenyegah, William Leslie Brown-Acquaye
As network environments evolve, detecting unknown malicious network traffic becomes increasingly challenging due to the dynamic and sophisticated nature of modern cyberattacks. Graph Attention Networks (GATs) have shown promise in modeling complex network interactions but remain vulnerable to adversarial attacks that exploit weaknesses in the graph structure. In this work, we propose the Robust Adaptive Graph Neural Network (RAGN), an enhanced GAT-based framework that introduces adaptive attention mechanisms to improve detection accuracy and robustness against adversarial manipulations in network traffic graphs. RAGN iteratively adjusts the graph structure and feature space to suppress adversarial perturbations by assigning lower attention scores to unreliable edges and refining feature representations based on the feature smoothness regularization principle. To assess the robustness of the proposed RAGN model and compare it with baseline models, we introduce an effective dynamic graph attack method known as Semantic-Preserving Adversarial Node Injection (SPAN) and benchmark it against state-of-the-art graph attack methods, including DICE, DGA, and RWCS. SPAN incrementally injects small batches of malicious nodes, refining their edges and features to target both the structural and temporal aspects of dynamic graphs. It preserves semantic integrity and generates effective yet imperceptible perturbations, providing a rigorous test of the resilience of graph neural networks. Experiments on four datasets show that RAGN is robust against adversarial and zero-day attacks and resilient against targeted malicious node injection in dynamic network environments. Its misclassification rate increases only marginally (by less than 1.2%) even under significant dynamic changes.
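The attention-damping idea described above can be illustrated with a minimal sketch: a GAT-style attention score is reduced by a penalty that grows with feature dissimilarity between an edge's endpoints, so edges joining dissimilar (potentially adversarial) nodes receive lower weight. The penalty weight, tensor shapes, and scoring function below are illustrative assumptions, not the authors' implementation.

```python
# Minimal NumPy sketch of attention that penalizes "unreliable" edges by
# feature dissimilarity (a stand-in for the feature-smoothness idea).
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 8, 4
X = rng.normal(size=(n, d_in))               # node features
A = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(A, 1.0)                     # keep self-loops

W = rng.normal(size=(d_in, d_out))
a = rng.normal(size=(2 * d_out,))
lam = 0.5                                    # weight of the smoothness penalty

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

H = X @ W
# GAT-style logits minus a penalty that grows with feature dissimilarity,
# so edges between dissimilar nodes get less attention after the softmax.
logits = np.full((n, n), -np.inf)
for i in range(n):
    for j in range(n):
        if A[i, j]:
            score = leaky_relu(a @ np.concatenate([H[i], H[j]]))
            penalty = lam * np.sum((X[i] - X[j]) ** 2)
            logits[i, j] = score - penalty

att = np.exp(logits - logits.max(axis=1, keepdims=True))
att = att / att.sum(axis=1, keepdims=True)   # row-wise softmax over neighbors
H_new = att @ H                              # attention-weighted aggregation
print(H_new.shape)
```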
Citations: 0
Decentralized traffic detection utilizing blockchain-federated learning with quality-driven aggregation
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-03-10 | DOI: 10.1016/j.comnet.2025.111179
Wei Liu, Wentao Cui, Bin Wang, Heng Pan, Wei She, Zhao Tian
Federated Learning (FL) has been widely applied in network traffic detection to address issues such as insufficient data, data imbalance, and limited data sources. However, FL still has some drawbacks, including excessive load on the central server, vulnerability to attacks, and the potential presence of malicious or low-quality local models during aggregation. In this paper, we propose a novel approach for encrypted traffic classification to promote reliable data sharing and improve classification accuracy. First, we design a four-layer framework for secure traffic classification, based on FL and blockchain to replace the central server. In this framework, each client dynamically switches between the trainer and validator roles, either training or validating the local model, with the validator ultimately uploading the global model to the blockchain. Furthermore, to address the issue of potentially malicious and low-quality models in aggregation, we propose a new Quality-Driven Validator-Trainer Aggregation (QDVTA) algorithm. The algorithm selectively filters out malicious and low-quality models in each round of aggregation, improving the robustness of the framework while minimizing the loss in model accuracy. Experiments were conducted on the ISCXVPN2016, ISCXTor2016, and CICIoT2022 datasets. Compared to existing methods, the proposed approach achieves accuracy rates of 89.19%, 89.50%, and 94.42% in the presence of malicious nodes, demonstrating its effectiveness over state-of-the-art methods.
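The quality-driven filtering step can be sketched as follows: validators score each submitted local model, models below a threshold are excluded, and the survivors are averaged with accuracy-based weights. The threshold, scoring rule, weighting, and flat parameter vectors are assumptions for illustration, not the paper's QDVTA algorithm.

```python
# Illustrative sketch of quality-driven aggregation: drop local models whose
# validation score falls below a threshold, then average the survivors.
import numpy as np

def quality_driven_aggregate(local_models, val_scores, threshold=0.6):
    """local_models: list of 1-D parameter vectors; val_scores: validator accuracies."""
    kept = [(m, s) for m, s in zip(local_models, val_scores) if s >= threshold]
    if not kept:                           # fall back to keeping everything
        kept = list(zip(local_models, val_scores))
    weights = np.array([s for _, s in kept])
    weights = weights / weights.sum()      # accuracy-weighted averaging
    stacked = np.stack([m for m, _ in kept])
    return weights @ stacked               # aggregated global model

rng = np.random.default_rng(1)
models = [rng.normal(size=10) for _ in range(5)]
scores = [0.91, 0.88, 0.35, 0.90, 0.20]    # two low-quality/malicious updates
global_model = quality_driven_aggregate(models, scores)
print(global_model.shape)
```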
{"title":"Decentralized traffic detection utilizing blockchain-federated learning with quality-driven aggregation","authors":"Wei Liu ,&nbsp;Wentao Cui ,&nbsp;Bin Wang ,&nbsp;Heng Pan ,&nbsp;Wei She ,&nbsp;Zhao Tian","doi":"10.1016/j.comnet.2025.111179","DOIUrl":"10.1016/j.comnet.2025.111179","url":null,"abstract":"<div><div>Federated Learning (FL) has been widely applied in network traffic detection to address issues such as insufficient data, data imbalance, and limited data sources. However, FL still has some drawbacks, including excessive load on the central server, vulnerability to attacks, and the potential presence of malicious or low-quality local models during aggregation. In this paper, we propose a novel approach for encrypted traffic classification to promote reliable data sharing and improve classification accuracy. First, we design a four-layer framework for secure traffic classification, based on FL and blockchain to replace the central server. In this framework, each client dynamically switches between the trainer and the validator, either training or validating the local model, with the validator ultimately uploading the global model to the blockchain. Furthermore, to address the issues of potential malicious and low-quality model in aggregation, we propose a new Quality-Driven Validator-Trainer Aggregation (QDVTA) algorithm. The algorithm selectively filters out malicious and low-quality models in each round of aggregation, improving the robustness of the framework while minimizing the loss in model accuracy. Experiments were conducted on the ISCXVPN2016, ISCXTor2016, and CICIoT2022 datasets. Compared to existing methods, the proposed approach achieves accuracy rates of 89.19%, 89.50%, and 94.42% in the presence of malicious nodes, demonstrating its effectiveness over state-of-the-art methods.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"262 ","pages":"Article 111179"},"PeriodicalIF":4.4,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143620579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Social network botnet attack mitigation model for cloud
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-03-08 | DOI: 10.1016/j.comnet.2025.111160
Hooman Alavizadeh, Ahmad Salehi S., A.S.M. Kayes, Wenny Rahayu, Tharam Dillon
Online Social Network (OSN) botnet attacks pose a growing threat to the cloud environment and reduce service availability and reliability for users by launching distributed denial of service (DDoS) attacks on crucial servers in the cloud. These attacks involve the deployment of sophisticated botnets that exploit the interconnected nature of social networks to identify targets, exploit vulnerabilities, and launch attacks. The prevalence and impact of these botnet-driven attacks have recently been studied. Although detecting these botnet attacks remains challenging, it is crucial to gain a comprehensive understanding of, and to evaluate, the best defense strategies against them. This evaluation can be further utilized to formulate effective defense plans that mitigate the impact of such botnet attacks. In this paper, we first investigate the properties of the OSN botnet attack stages that eventually lead to launching DDoS attacks toward a cloud system. Then, we formalize a defensive model using a sequential game model to analyze the attacker's and the defender's best equilibrium strategies for the proposed botnet attack scenario. Moreover, we formulate optimal strategies for the defender against various attack strategies. Our experiments reveal the best defense strategies for maintaining cloud functionality under various attack rates. Finally, we discuss possible countermeasures for these OSN botnet threats.
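The sequential (leader-follower) structure of such a game can be illustrated with a toy backward-induction sketch: the defender commits to a strategy, the attacker best-responds, and the defender keeps the strategy that maximizes its payoff given that response. The strategy sets and payoff matrices below are made-up values, not the paper's model.

```python
# Toy backward-induction sketch of a two-stage defender-attacker game.
import numpy as np

defender_strats = ["rate-limit", "scale-out", "block-OSN-bots"]
attacker_strats = ["low-rate DDoS", "burst DDoS"]

# Hypothetical payoffs: rows are defender strategies, columns attacker strategies.
defender_payoff = np.array([[ 2.0, -1.0],
                            [ 1.0,  0.5],
                            [ 3.0,  1.5]])
attacker_payoff = np.array([[ 1.0,  2.0],
                            [ 0.5,  1.5],
                            [-1.0, -0.5]])

best = None
for d in range(len(defender_strats)):
    a_star = int(np.argmax(attacker_payoff[d]))   # attacker best-responds
    u_d = defender_payoff[d, a_star]
    if best is None or u_d > best[0]:
        best = (u_d, defender_strats[d], attacker_strats[a_star])

print(f"defender plays '{best[1]}', anticipating '{best[2]}', utility {best[0]}")
```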
{"title":"Social network botnet attack mitigation model for cloud","authors":"Hooman Alavizadeh,&nbsp;Ahmad Salehi S.,&nbsp;A.S.M. Kayes,&nbsp;Wenny Rahayu,&nbsp;Tharam Dillon","doi":"10.1016/j.comnet.2025.111160","DOIUrl":"10.1016/j.comnet.2025.111160","url":null,"abstract":"<div><div>Online Social Network (OSN) botnet attacks pose a growing threat to the cloud environment and reduce the services’ availability and reliability for users by launching distributed denial of service (DDoS) attacks on crucial servers in the cloud. These attacks involve the deployment of sophisticated botnets that exploit the interconnected nature of social networks to identify targets, exploit vulnerabilities, and launch attacks. The prevalence and impact of these botnet-driven attacks have recently been studied. Although the detection of these botnet attacks is still a challenging process, it remains crucial to gain a comprehensive understanding of and evaluate the best defense strategies against botnet attacks. This evaluation can be further utilized to formulate effective defense plans to mitigate the impact of such botnet attacks. In this paper, we first investigate the properties of OSN botnet attack stages that eventually lead to launching DDoS attacks toward a cloud system. Then, we formalize a defensive model using a sequential game model to analyze both the attacker’s and defenders’ best equilibrium strategies for the proposed botnet attack scenario. Moreover, we formulate optimal strategies for the defender against various attack strategies. Our experiments reveal the best defense strategies against various attack rates to maintain cloud functionality. Finally, we discuss possible countermeasures for these OSN botnet threats.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"262 ","pages":"Article 111160"},"PeriodicalIF":4.4,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143620580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Yardstick-Stackelberg pricing-based incentive mechanism for Federated Learning in Edge Computing
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-03-08 | DOI: 10.1016/j.comnet.2025.111186
Qianhui Yu, Hai Xue, Celimuge Wu, Ya Liu, Wunan Guo
Federated Learning (FL) enables collaborative model training across multiple participants without sharing original data, making it a valuable tool for preserving privacy in the Mobile Edge Computing (MEC) environment. However, due to users' varying levels of motivation and commitment, it is challenging to incentivize effective participation in FL. To address this, we propose a pricing-based incentive mechanism that enhances FL efficiency and energy sustainability in MEC. Specifically, we first formulate the incentive mechanism based on the yardstick pricing rule. Subsequently, we determine the optimal hyperparameters of the utility function with the aim of maximizing model accuracy. Additionally, we formulate a Stackelberg game to derive optimal reward strategies, balancing users' transmission power allocation and the server's reward distribution. Simulation results show that our proposed scheme outperforms existing schemes, achieving over 98.2% accuracy, a 0.7% server utility enhancement, and a 14.6% server loss decrease compared with static incentives. Moreover, as the number of users varies, the proposed scheme yields faster growth in both server and user utilities than the advanced schemes, demonstrating better scalability and adaptability.
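The leader-follower interaction can be sketched numerically: the server sweeps a reward level, each user best-responds with a transmit power, and the server keeps the reward that maximizes its own utility. The utility functions and constants below are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of a Stackelberg pricing loop: leader sweeps rewards,
# followers best-respond, leader keeps the most profitable reward.
import numpy as np

costs = np.array([0.8, 1.0, 1.3])           # per-user energy cost coefficients (assumed)

def user_best_power(reward, c, p_grid=np.linspace(0.01, 2.0, 200)):
    # Follower utility: concave gain from contributing power minus energy cost.
    utility = reward * np.log1p(p_grid) - c * p_grid
    return p_grid[np.argmax(utility)]

def server_utility(reward, powers, alpha=5.0):
    # Leader values total contribution and pays reward per unit of power.
    return alpha * np.log1p(powers.sum()) - reward * powers.sum()

best_reward, best_u = None, -np.inf
for reward in np.linspace(0.1, 3.0, 30):
    powers = np.array([user_best_power(reward, c) for c in costs])
    u = server_utility(reward, powers)
    if u > best_u:
        best_reward, best_u = reward, u

print(f"approx. Stackelberg reward: {best_reward:.2f}, server utility: {best_u:.2f}")
```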
{"title":"Yardstick-Stackelberg pricing-based incentive mechanism for Federated Learning in Edge Computing","authors":"Qianhui Yu ,&nbsp;Hai Xue ,&nbsp;Celimuge Wu ,&nbsp;Ya Liu ,&nbsp;Wunan Guo","doi":"10.1016/j.comnet.2025.111186","DOIUrl":"10.1016/j.comnet.2025.111186","url":null,"abstract":"<div><div>Federated Learning (FL) enables collaborative model training across multiple participants without sharing original data, making it a valuable tool for preserving privacy in Mobile Edge Computing (MEC) environment. However, due to users’ varying levels of motivation and commitment, it is challenging to incentivize effective participation in FL. To address this, we propose a pricing-based incentive mechanism that enhances FL efficiency and energy sustainability in MEC. To be specific, we firstly develop the formula of incentive mechanism based on the yardstick pricing rule. Subsequently, we determine the optimal hyperparameters of the utility function aiming to maximize model accuracy. Additionally, we formulate a Stackelberg game to derive optimal reward strategies, balancing users’ transmission power allocation and the server’s reward distribution. Simulation results show that our proposed scheme outperforms other existing schemes with over 98.2% accuracy, 0.7% server utility enhancement, and 14.6% server loss decrease compared with static incentives. Moreover, our proposed scheme contributes to faster growth in both server and users utilities when compared with the advanced schemes by varying user numbers, which demonstrates its better scalability and adaptability.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"262 ","pages":"Article 111186"},"PeriodicalIF":4.4,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143620581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dynamic reconfiguration of wireless sensor networks: A survey
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-03-07 | DOI: 10.1016/j.comnet.2025.111176
Salma Najjar, Michael David, William Derigent, Ahmed Zouinkhi
To control the lifetime and optimize the energy consumption of wireless sensor networks (WSNs), while ensuring that application needs are met, it is essential to dynamically reconfigure the network by adjusting the parameters of its nodes. A great deal of research has been devoted to proposing methods for reconfiguring WSNs. However, there are no comprehensive studies analyzing the different reconfiguration parameters, architectures or strategies. For this reason, this paper proposes the first systematic literature review on reconfiguration mechanisms for self-organized wireless sensor networks (SOWSNs) and those controlled by an external controller, following the principles of software-defined wireless sensor networks (SDWSNs), with a particular focus on energy optimization. The main objective of this work is to explore reconfiguration strategies in depth from three different aspects: the parameters adjusted to optimize energy (What?), the architectures used for decision making (Who?), and the moments of reconfiguration implementation (When?). In addition, this study identifies unexploited areas in this field and suggests that hybrid and predictive approaches could be a promising way of overcoming these gaps.
{"title":"Dynamic reconfiguration of wireless sensor networks: A survey","authors":"Salma Najjar ,&nbsp;Michael David ,&nbsp;William Derigent ,&nbsp;Ahmed Zouinkhi","doi":"10.1016/j.comnet.2025.111176","DOIUrl":"10.1016/j.comnet.2025.111176","url":null,"abstract":"<div><div>To control the lifetime and optimize the energy consumption of wireless sensor networks (WSNs), while ensuring that application needs are met, it is essential to dynamically reconfigure the network by adjusting the parameters of its nodes. A great deal of research has been devoted to proposing methods for reconfiguring WSNs. However, there are no comprehensive studies analyzing the different reconfiguration parameters, architectures or strategies. For this reason, this paper proposes the first systematic literature review on reconfiguration mechanisms for self-organized wireless sensor networks (SOWSNs) and those controlled by an external controller, following the principles of software-defined wireless sensor networks (SDWSNs), with a particular focus on energy optimization. The main objective of this work is to explore reconfiguration strategies in depth from three different aspects: the parameters adjusted to optimize energy (What?), the architectures used for decision making (Who?), and the moments of reconfiguration implementation (When?). In addition, this study identifies unexploited areas in this field and suggests that hybrid and predictive approaches could be a promising way of overcoming these gaps.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"262 ","pages":"Article 111176"},"PeriodicalIF":4.4,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143620583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MAID: Mobility-aware information dissemination in mobile IoT using temporal point processes
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-03-07 | DOI: 10.1016/j.comnet.2025.111173
Yongqing Cai, Dianjie Lu, Jing Chen, Guijuan Zhang
The mobile Internet of Things (IoT) integrates mobile communication technology with IoT to connect various physical devices (e.g., sensors, smart devices, vehicles, and home appliances) for data collection, processing, and distribution. However, current research on mobile IoT information dissemination overlooks the stochastic nature of device mobility, leading to inaccurate predictions of dissemination scale. To address this, we propose a mobility-aware information dissemination model (MAID) using the temporal point process (TPP) to investigate the stochastic dynamics introduced by device mobility. First, we develop a TPP-based model to describe random events, such as movement, linking, unlinking, and information dissemination. We propose a mobility-aware intensity prediction method to calculate event intensities within the TPP framework. Finally, we predict the scale of information dissemination on the basis of the calculated intensity and develop an event-driven simulation system to model network structure changes and information dissemination within the mobile IoT. The simulation results indicate that device mobility accelerates network structure changes, thereby increasing the scope and scale of information dissemination. This dynamic has a two-sided effect on dissemination efficiency, depending on the initial network sparsity. Extensive experiments on synthetic datasets show that our method improves the accuracy of dissemination scale prediction by over 92% compared to four baseline methods.
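One way to picture a TPP-based dissemination model is a Hawkes-style intensity in which past contact events temporarily raise the rate of dissemination events, with the effect decaying over time. The baseline rate, excitation, and decay parameters below are illustrative assumptions rather than MAID's intensity prediction method.

```python
# Sketch of a temporal-point-process intensity: past contact events excite the
# instantaneous rate of dissemination events, and the excitation decays over time.
import numpy as np

def hawkes_intensity(t, event_times, mu=0.2, alpha=0.6, beta=1.5):
    """lambda(t) = mu + sum over t_i < t of alpha * exp(-beta * (t - t_i))."""
    past = np.asarray([ti for ti in event_times if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

contact_events = [0.5, 1.1, 1.3, 2.8]   # times when devices came into range (assumed)
for t in [1.0, 1.5, 3.0, 5.0]:
    print(f"t={t:.1f}  intensity={hawkes_intensity(t, contact_events):.3f}")
```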
{"title":"MAID: Mobility-aware information dissemination in mobile IoT using temporal point processes","authors":"Yongqing Cai,&nbsp;Dianjie Lu,&nbsp;Jing Chen,&nbsp;Guijuan Zhang","doi":"10.1016/j.comnet.2025.111173","DOIUrl":"10.1016/j.comnet.2025.111173","url":null,"abstract":"<div><div>The mobile Internet of Things (IoT) integrates mobile communication technology with IoT to connect various physical devices (e.g., sensors, smart devices, vehicles, and home appliances) for data collection, processing, and distribution. However, current research on mobile IoT information dissemination overlooks the stochastic nature of device mobility, leading to inaccurate predictions of dissemination scale. To address this, we propose a mobility-aware information dissemination model (MAID) using the temporal point process (TPP) to investigate the stochastic dynamics introduced by device mobility. First, we develop a TPP-based model to describe random events, such as movement, linking, unlinking, and information dissemination. We propose a mobility-aware intensity prediction method to calculate event intensities within the TPP framework. Finally, we predict the scale of information dissemination on the basis of the calculated intensity and develop an event-driven simulation system to model network structure changes and information dissemination within the mobile IoT. The simulation results indicate that device mobility accelerates network structure changes, thereby increasing the scope and scale of information dissemination. This dynamic has a two-sided effect on dissemination efficiency, depending on the initial network sparsity. Extensive experiments on synthetic datasets show that our method improves the accuracy of dissemination scale prediction by over 92% compared to four baseline methods.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"262 ","pages":"Article 111173"},"PeriodicalIF":4.4,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143609794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A trust and bundling-based task allocation scheme to enhance completion rate and data quality for mobile crowdsensing
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-03-06 | DOI: 10.1016/j.comnet.2025.111189
Yunchuan Kang, Houbing Herbert Song, Tian Wang, Shaobo Zhang, Mianxiong Dong, Anfeng Liu
In Mobile CrowdSensing (MCS), task bundling has shown promise in improving task completion rate by pairing unpopular tasks with popular ones. However, existing methods often assume truthful data from workers, an assumption misaligned with real-world MCS scenarios. Workers tend to submit low-quality or false data to maximize their rewards, particularly given the Information Elicitation Without Verification (IEWV) problem, which hinders the detection of dishonest behavior. To address this, we propose a Trust and Bundling-based Task Allocation (TBTA) scheme to enhance task completion rates and data quality at a low cost. The TBTA scheme includes three main strategies: (1) a trusted worker identification algorithm that evaluates workers' trust degrees by considering the IEWV challenge, allowing for the selection of reliable workers and thus ensuring higher data quality; (2) a task bundling method using the Non-dominated Sorting Genetic Algorithm II to bundle unpopular tasks with popular ones strategically, maximizing platform utility and completion rates; and (3) an optimal allocation algorithm that assigns trusted workers to tasks best suited to their capabilities, thus improving data reliability and minimizing costs. Experimental results demonstrate that compared to the state-of-the-art methods, the TBTA scheme achieves a 15.54% improvement in task completion rate, and a 1.83% reduction in worker travel distance.
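The bundling idea, attaching unpopular tasks to popular ones so they can be advertised together, can be sketched with a simple greedy pairing by distance. The paper itself uses NSGA-II for this step; the popularity threshold and distance-based pairing rule below are assumptions.

```python
# Greedy stand-in for task bundling: attach each unpopular task to the nearest
# popular task so the pair can be offered to workers as one bundle.
import numpy as np

rng = np.random.default_rng(2)
locations = rng.random((8, 2))                  # task locations in a unit square
popularity = rng.integers(0, 10, size=8)        # e.g., number of interested workers

popular = [int(i) for i in range(8) if popularity[i] >= 5]
unpopular = [int(i) for i in range(8) if popularity[i] < 5]

bundles = {p: [p] for p in popular}
for u in unpopular:
    if not popular:
        break
    d = [np.linalg.norm(locations[u] - locations[p]) for p in popular]
    host = popular[int(np.argmin(d))]           # pair with the nearest popular task
    bundles[host].append(u)

print(bundles)
```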
{"title":"A trust and bundling-based task allocation scheme to enhance completion rate and data quality for mobile crowdsensing","authors":"Yunchuan Kang ,&nbsp;Houbing Herbert Song ,&nbsp;Tian Wang ,&nbsp;Shaobo Zhang ,&nbsp;Mianxiong Dong ,&nbsp;Anfeng Liu","doi":"10.1016/j.comnet.2025.111189","DOIUrl":"10.1016/j.comnet.2025.111189","url":null,"abstract":"<div><div>In Mobile CrowdSensing (MCS), task bundling has shown promise in improving task completion rate by pairing unpopular tasks with popular ones. However, existing methods often assume truthful data from workers, an assumption misaligned with real-world MCS scenarios. Workers tend to submit low-quality or false data to maximize their rewards, particularly given the Information Elicitation Without Verification (IEWV) problem, which hinders the detection of dishonest behavior. To address this, we propose a Trust and Bundling-based Task Allocation (TBTA) scheme to enhance task completion rates and data quality at a low cost. The TBTA scheme includes three main strategies: (1) a trusted worker identification algorithm that evaluates workers' trust degrees by considering the IEWV challenge, allowing for the selection of reliable workers and thus ensuring higher data quality; (2) a task bundling method using the Non-dominated Sorting Genetic Algorithm II to bundle unpopular tasks with popular ones strategically, maximizing platform utility and completion rates; and (3) an optimal allocation algorithm that assigns trusted workers to tasks best suited to their capabilities, thus improving data reliability and minimizing costs. Experimental results demonstrate that compared to the state-of-the-art methods, the TBTA scheme achieves a 15.54 % improvement in task completion rate, and a 1.83 % reduction in worker travel distance.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"262 ","pages":"Article 111189"},"PeriodicalIF":4.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143593480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hidden AS link prediction based on random forest feature selection and GWO-XGBoost model
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-03-05 | DOI: 10.1016/j.comnet.2025.111164
Zekang Wang, Fuxiang Yuan, Ruixiang Li, Meng Zhang, Xiangyang Luo
Internet AS-level topology measurement is crucial for improving network stability and security. The presence of hidden AS links poses a challenge for accurately measuring the AS-level topology. Link prediction serves as a primary technical approach for discovering hidden AS links. However, the effectiveness of existing methods is sensitive to the choice of features and model hyperparameters, leaving room for improvement in prediction performance. In this paper, a hidden AS link prediction method based on random forest feature selection and a GWO-XGBoost model is proposed. First, BGP data is preprocessed to eliminate erroneous information from AS paths, and suitable AS triplets for training the prediction model are constructed. Then, the traffic volume and ratio at the first and last nodes of these triplets are analyzed to extract four new features. These are combined with features extracted by typical methods to form an initial prediction feature set. Additionally, the random forest algorithm is used to select initial features, remove redundant features, and construct an optimal feature subset. Finally, the initial prediction model XGBoost is trained using the optimal feature subset, while the Grey Wolf Optimizer (GWO) algorithm is employed to search for optimal hyperparameters, thus constructing a fusion model, GWO-XGBoost, that achieves hidden AS link prediction. Extensive experiments are conducted on an AS-level topology with 81,998 nodes and 401,925 links collected from the RouteViews and RIPE RIS projects. The results show that the proposed method has significant advantages over the typical prediction methods TopoScope and LOC-TopoScope. The prediction accuracy increases by 5.30% and 3.96%, respectively, and the number of discovered hidden AS links increases by 23.76% and 6.08%, respectively.
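The two-stage pipeline can be sketched with standard libraries: rank features by random-forest importance, keep the top-k, and tune a gradient-boosted classifier on the reduced feature set. A small random hyperparameter search stands in for the Grey Wolf Optimizer here; k, the search ranges, and the synthetic data are assumptions.

```python
# Sketch of random-forest feature selection followed by a tuned XGBoost model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: random-forest importance ranking, keep the top 10 features.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top_k = np.argsort(rf.feature_importances_)[::-1][:10]

# Stage 2: XGBoost with a crude random search over a few hyperparameters
# (a simplified stand-in for GWO-based hyperparameter optimization).
rng = np.random.default_rng(0)
best_acc, best_params = 0.0, None
for _ in range(10):
    params = {"max_depth": int(rng.integers(3, 9)),
              "learning_rate": float(rng.uniform(0.02, 0.3)),
              "n_estimators": int(rng.integers(100, 400))}
    model = XGBClassifier(**params, eval_metric="logloss")
    model.fit(X_tr[:, top_k], y_tr)
    acc = accuracy_score(y_te, model.predict(X_te[:, top_k]))
    if acc > best_acc:
        best_acc, best_params = acc, params

print(best_params, round(best_acc, 3))
```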
{"title":"Hidden AS link prediction based on random forest feature selection and GWO-XGBoost model","authors":"Zekang Wang,&nbsp;Fuxiang Yuan,&nbsp;Ruixiang Li,&nbsp;Meng Zhang,&nbsp;Xiangyang Luo","doi":"10.1016/j.comnet.2025.111164","DOIUrl":"10.1016/j.comnet.2025.111164","url":null,"abstract":"<div><div>Internet AS-level topology measurement is crucial for improving network stability and security. The presence of hidden AS links poses a challenge for accurately measuring the AS-level topology. Link prediction serves as a primary technical approach for discovering hidden AS links. However, the effectiveness of existing methods is susceptible to features and model hyperparameters, necessitating improvements in prediction performance. In this paper, a hidden AS link prediction method based on random forest feature selection and a GWO-XGBoost model is proposed. First, BGP data is preprocessed to eliminate erroneous information from AS paths, and suitable AS triplets for training the prediction model are constructed. Then, the traffic volume and ratio at the first and last nodes of these triplets are analyzed to extract four new features. These are combined with features extracted by typical methods to form an initial prediction feature set. Additionally, the random forest algorithm is used to select initial features, remove redundant features, and construct an optimal feature subset. Finally, the initial prediction model XGBoost is trained using the optimal feature subset, while the Grey Wolf Optimizer (GWO) algorithm is employed to search for optimal hyperparameters, thus constructing a fusion model GWO-XGBoost that achieves hidden AS link prediction. Extensive experiments are conducted on the AS-level topology with 81,998 nodes and 401,925 links collected from RouteViews and RIPE RIS projects. The results show that the proposed method has significant advantages over the typical prediction methods <em>TopoScope</em> and <em>LOC-TopoScope</em>. The prediction accuracy increases by 5.30% and 3.96%, respectively, and the number of discovered hidden AS links increases by 23.76% and 6.08%, respectively.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"262 ","pages":"Article 111164"},"PeriodicalIF":4.4,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143601654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Joint optimization of VNF deployment and UAV trajectory planning in Multi-UAV-enabled mobile edge networks
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-03-05 | DOI: 10.1016/j.comnet.2025.111163
Junbin Liang, Qiao He
Multi-Unmanned Aerial Vehicle (UAV)-enabled mobile edge networks have emerged as a promising networking paradigm that uses multiple UAVs with limited communication and computation capacities as edge servers, traversing planned trajectories to visit designated ground users (GUs) and provide network services in areas with partial or no network coverage, e.g., disaster areas. Based on network virtualization technology, network services can be flexibly provisioned as virtual network functions (VNFs) deployed on the UAVs. However, given a set of UAVs with initial locations and a set of VNF requests from GUs at different locations, it is challenging to decide which capacity-limited UAV should carry which VNFs to serve which requests, and then to plan a trajectory for each UAV to visit its target GUs and complete its serving task, with the goal of minimizing both the energy consumption of the UAVs and the cost of accepting requests, where the latter comprises the instantiation cost of deploying VNFs and the computing cost of processing GU requests in those VNFs. Since VNF deployment and UAV trajectory planning are tightly coupled, this paper focuses on jointly optimizing the two operations. We first formulate the problem as a nonconvex mixed-integer nonlinear program. Then, we propose a hierarchical hybrid deep reinforcement learning algorithm that jointly optimizes discrete and continuous actions to solve it. Finally, we evaluate the performance of the proposed algorithm, and the simulation results demonstrate its effectiveness.
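The hybrid discrete-continuous action structure can be sketched as a policy with a categorical head (which UAV hosts a requested VNF) and a Gaussian head (the next waypoint). The state dimension, layer sizes, and action dimensions below are assumptions, not the paper's architecture.

```python
# Sketch of a hybrid-action policy for joint VNF placement and trajectory planning.
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    def __init__(self, state_dim=16, n_uavs=4, waypoint_dim=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.placement_head = nn.Linear(64, n_uavs)      # logits over candidate UAVs
        self.waypoint_mu = nn.Linear(64, waypoint_dim)   # mean of the waypoint offset
        self.waypoint_logstd = nn.Parameter(torch.zeros(waypoint_dim))

    def forward(self, state):
        h = self.backbone(state)
        placement_dist = torch.distributions.Categorical(logits=self.placement_head(h))
        waypoint_dist = torch.distributions.Normal(self.waypoint_mu(h),
                                                   self.waypoint_logstd.exp())
        return placement_dist, waypoint_dist

policy = HybridPolicy()
state = torch.randn(1, 16)
placement_dist, waypoint_dist = policy(state)
uav_choice = placement_dist.sample()             # discrete: which UAV carries the VNF
waypoint = waypoint_dist.sample()                # continuous: next trajectory action
log_prob = placement_dist.log_prob(uav_choice) + waypoint_dist.log_prob(waypoint).sum()
print(uav_choice.item(), waypoint.squeeze().tolist(), log_prob.item())
```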
{"title":"Joint optimization of VNF deployment and UAV trajectory planning in Multi-UAV-enabled mobile edge networks","authors":"Junbin Liang,&nbsp;Qiao He","doi":"10.1016/j.comnet.2025.111163","DOIUrl":"10.1016/j.comnet.2025.111163","url":null,"abstract":"<div><div>Multi-Unmanned Aerial Vehicle (UAV)-enabled mobile edge networks have emerged as a promising networking paradigm that uses multiple UAVs with limited communication and computation capacities as edge servers to traverse along planned trajectories to visit designated ground users (GUs) for providing network services in partial or no network coverage areas, e.g., disaster areas. Based on network virtualization technology, network services can be flexibly provisioned as virtual network functions (VNFs) deployed at the UAVs. However, given a set of UAVs with initial locations and a set of VNF requests from different GUs on different locations, how to deploy the on-demand VNFs on the limited-capacities UAVs with consideration that which UAV should carry which VNFs to serve which requests, and then plan trajectories for each UAV to visit their target GUs to complete its serving task, aiming to minimize both the energy consumption of the UAVs and the cost of UAVs accepting requests, is a challenging problem, where the cost UAVs accepting requests is composed of the instantiation cost of deploying VNFs and the computing cost of processing GU requests in the VNFs. In this paper, since the VNF deployment and the UAV trajectory planning have coupling effect, we focus on joint optimization of the two operations. We firstly formulate it as a nonconvex mixed integer non-linear programming problem. Then, we propose a hierarchical hybrid deep reinforcement learning algorithm based on jointly optimizing discrete and continuous action to solve the problem. Finally, we evaluate the performance of the proposed algorithm and the simulation results demonstrate its effectiveness.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"262 ","pages":"Article 111163"},"PeriodicalIF":4.4,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design and evaluation of an Autonomous Cyber Defence agent using DRL and an augmented LLM
IF 4.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-03-05 | DOI: 10.1016/j.comnet.2025.111162
Johannes Loevenich, Erik Adler, Tobias Hürten, Roberto Rigolin F. Lopes
In this paper, we design and evaluate an Autonomous Cyber Defence (ACD) agent to monitor and act within critical network segments connected to untrusted infrastructure hosting active adversaries. We assume that modern network segments use software-defined controllers with the means to host ACD agents and other cybersecurity tools that implement hybrid AI models. Our agent uses a hybrid AI architecture that integrates deep reinforcement learning (DRL), augmented Large Language Models (LLMs), and rule-based systems. This architecture can be implemented in software-defined network controllers, enabling automated defensive actions such as monitoring, analysis, decoy deployment, service removal, and recovery. A core contribution of our work is the construction of three cybersecurity knowledge graphs that organise and map data from network logs, open source Cyber Threat Intelligence (CTI) reports, and vulnerability frameworks. These graphs enable automatic mapping of Common Vulnerabilities and Exposures (CVEs) to offensive tactics and techniques defined in the MITRE ATT&CK framework using Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer (GPT) models. Our experimental evaluation of the knowledge graphs shows that BERT-based models perform better, with precision (83.02%), recall (75.92%), and macro F1 scores (58.70%) significantly outperforming GPT models. The ACD agent was evaluated in a Cyber Operations Research (ACO) gym against eleven DRL models, including Proximal Policy Optimisation (PPO), Hierarchical PPO, and ensembles under two different attacker strategies. The results show that our ACD agent outperformed baseline implementations, with its DRL models effectively mitigating attacks and recovering compromised systems. In addition, we implemented and evaluated a chatbot using Retrieval-Augmented Generation (RAG) and a prompting agent augmented with the CTI reports represented in the cybersecurity knowledge graphs. The chatbot achieved high scores on generation metrics such as relevance (0.85), faithfulness (0.83), and semantic similarity (0.88), as well as retrieval metrics such as contextual precision (0.91). The experimental results suggest that the integration of hybrid AI systems with knowledge graphs can enable the automation and improve the precision of cyber defence operations, and also provide a robust interface for cybersecurity experts to interpret and respond to advanced cybersecurity threats.
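The retrieval step of such a RAG-style assistant can be sketched with a simple TF-IDF retriever over CTI snippets that pastes the top matches into the prompt sent to an LLM. TF-IDF stands in for a neural embedder, and the snippets and prompt template are invented for illustration.

```python
# Sketch of retrieval-augmented prompting: rank CTI snippets against a question
# and build a context-augmented prompt for an LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cti_snippets = [
    "CVE-2021-44228 (Log4Shell) enables remote code execution via JNDI lookups.",
    "T1190 Exploit Public-Facing Application is a common initial access technique.",
    "Deploying decoy services can reveal lateral movement inside a segment.",
]
question = "Which ATT&CK technique matches exploitation of a public-facing service?"

vec = TfidfVectorizer().fit(cti_snippets + [question])
sims = cosine_similarity(vec.transform([question]), vec.transform(cti_snippets))[0]
top = sims.argsort()[::-1][:2]                  # keep the two best matches

context = "\n".join(cti_snippets[i] for i in top)
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer concisely."
print(prompt)
```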
{"title":"Design and evaluation of an Autonomous Cyber Defence agent using DRL and an augmented LLM","authors":"Johannes Loevenich ,&nbsp;Erik Adler ,&nbsp;Tobias Hürten ,&nbsp;Roberto Rigolin F. Lopes","doi":"10.1016/j.comnet.2025.111162","DOIUrl":"10.1016/j.comnet.2025.111162","url":null,"abstract":"<div><div>In this paper, we design and evaluate an Autonomous Cyber Defence (ACD) agent to monitor and act within critical network segments connected to untrusted infrastructure hosting active adversaries. We assume that modern network segments use software-defined controllers with the means to host ACD agents and other cybersecurity tools that implement hybrid AI models. Our agent uses a hybrid AI architecture that integrates deep reinforcement learning (DRL), augmented Large Language Models (LLMs), and rule-based systems. This architecture can be implemented in software-defined network controllers, enabling automated defensive actions such as monitoring, analysis, decoy deployment, service removal, and recovery. A core contribution of our work is the construction of three cybersecurity knowledge graphs that organise and map data from network logs, open source Cyber Threat Intelligence (CTI) reports, and vulnerability frameworks. These graphs enable automatic mapping of Common Vulnerabilities and Exposures (CVEs) to offensive tactics and techniques defined in the MITRE ATT&amp;CK framework using Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer (GPT) models. Our experimental evaluation of the knowledge graphs shows that BERT-based models perform better, with precision (83.02%), recall (75.92%), and macro F1 scores (58.70%) significantly outperforming GPT models. The ACD agent was evaluated in a Cyber Operations Research (ACO) gym against eleven DRL models, including Proximal Policy Optimisation (PPO), Hierarchical PPO, and ensembles under two different attacker strategies. The results show that our ACD agent outperformed baseline implementations, with its DRL models effectively mitigating attacks and recovering compromised systems. In addition, we implemented and evaluated a chatbot using Retrieval-Augmented Generation (RAG) and a prompting agent augmented with the CTI reports represented in the cybersecurity knowledge graphs. The chatbot achieved high scores on generation metrics such as relevance (0.85), faithfulness (0.83), and semantic similarity (0.88), as well as retrieval metrics such as contextual precision (0.91). The experimental results suggest that the integration of hybrid AI systems with knowledge graphs can enable the automation and improve the precision of cyber defence operations, and also provide a robust interface for cybersecurity experts to interpret and respond to advanced cybersecurity threats.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"262 ","pages":"Article 111162"},"PeriodicalIF":4.4,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143562749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0