
Latest Publications in Computer Communications

Deeply fused flow and topology features for botnet detection based on a pretrained GCN
IF 4.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-27 · DOI: 10.1016/j.comcom.2025.108084
Xiaoyuan Meng, Bo Lang, Yuhao Yan, Yanxi Liu
The characteristics of botnets are mainly reflected in their network behaviors and in the intercommunication relationships among their bots. Existing botnet detection methods typically use only one kind of feature, i.e., either flow features or topological features; relying on one type overlooks the information carried by the other and limits model performance. In this paper, for the first time, we propose a botnet detection model that uses a graph convolutional network (GCN) to deeply fuse flow features and topological features. We construct communication graphs from network traffic and represent node attributes with flow features. Because the extreme sample imbalance in existing public traffic datasets makes training a GCN directly impractical, we propose a pretrained GCN framework that uses a public, balanced, artificial communication graph dataset to pretrain the GCN; the output of the GCN's last hidden layer, which contains the fused flow and topology information, is then fed into an Extra Trees classifier. Furthermore, our model can effectively detect command-and-control (C2) and peer-to-peer (P2P) botnets by simply adjusting the number of layers in the GCN. Experimental results on public datasets demonstrate that our approach outperforms current state-of-the-art botnet detection models. In addition, our model also performs well in real-world botnet detection scenarios.
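A minimal sketch of the pipeline the abstract describes: a GCN over a communication graph whose node attributes are flow features, with the last hidden layer's fused embedding handed to an Extra Trees classifier. This is an illustrative reconstruction, not the authors' code; the library choices (PyTorch Geometric, scikit-learn) and all dimensions are assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from sklearn.ensemble import ExtraTreesClassifier

class BotnetGCN(torch.nn.Module):
    def __init__(self, num_flow_feats, hidden=64, num_classes=2, num_layers=2):
        super().__init__()
        dims = [num_flow_feats] + [hidden] * num_layers
        # More layers widen the receptive field (e.g., for P2P vs. C2 topologies).
        self.convs = torch.nn.ModuleList(
            GCNConv(dims[i], dims[i + 1]) for i in range(num_layers))
        self.head = torch.nn.Linear(hidden, num_classes)  # used during pretraining

    def embed(self, x, edge_index):
        # Each GCNConv mixes a node's flow features with its neighbours',
        # fusing flow and topology information.
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
        return x  # last hidden layer: the fused embedding

    def forward(self, x, edge_index):
        return self.head(self.embed(x, edge_index))

def detect(pretrained, x, edge_index, train_mask, y_train):
    # After pretraining on a balanced artificial graph dataset, classify the
    # fused embeddings with Extra Trees instead of the GCN's own head.
    with torch.no_grad():
        z = pretrained.embed(x, edge_index).cpu().numpy()
    clf = ExtraTreesClassifier(n_estimators=200).fit(z[train_mask], y_train)
    return clf.predict(z)
```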
Citations: 0
Multi-agent deep reinforcement learning-based partial offloading and resource allocation in vehicular edge computing networks
IF 4.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-24 · DOI: 10.1016/j.comcom.2025.108081
Jianbin Xue, Luyao Wang, Qingda Yu, Peipei Mao
The advancement of intelligent transportation systems and the increase in vehicle density have led to a need for more efficient computation offloading in vehicular edge computing networks (VECNs). However, traditional approaches are unable to meet the service demand of each vehicle due to limited resources and system overload. Therefore, in this paper, we aim to minimize the long-term computation overhead (including delay and energy consumption) of vehicles. First, we propose combining the computational resources of local vehicles, idle vehicles, and roadside units (RSUs) to formulate a computation offloading strategy and resource allocation scheme based on multi-agent deep reinforcement learning (MADRL), which optimizes the dual offloading decisions for both total and residual tasks as well as the system resource allocation for each vehicle. Furthermore, owing to the high mobility of vehicles, we propose a task migration strategy (TMS) algorithm based on communication distance and vehicle speed to avoid failed delivery of computation results when a vehicle moves out of the communication range of its serving RSU. Finally, we formulate the vehicles' computation offloading problem as a Markov game and design a Partial Offloading and Resource Allocation algorithm based on the collaborative Multi-Agent Twin Delayed Deep Deterministic Policy Gradient (PORA-MATD3). PORA-MATD3 optimizes the offloading decisions and resource allocation of each vehicle through centralized training and distributed execution. Simulation results demonstrate that PORA-MATD3 significantly reduces the computational overhead of each vehicle compared with other baseline algorithms in VECN scenarios.
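As an illustration of the TMS trigger described above, here is a minimal sketch (not the paper's algorithm) that decides, from communication distance and vehicle speed, whether a result must be migrated before the vehicle leaves the serving RSU's coverage; all names and numbers are illustrative assumptions.

```python
import math

def must_migrate(vehicle_pos, vehicle_speed, rsu_pos, rsu_range,
                 remaining_compute_s, result_tx_s):
    """True if the result cannot be delivered before the vehicle exits coverage."""
    # How much farther the vehicle can travel while staying in coverage
    # (assumes the worst case of driving straight away from the RSU).
    dist_to_edge = rsu_range - math.dist(vehicle_pos, rsu_pos)
    if dist_to_edge <= 0:
        return True  # already outside the RSU's range
    time_in_coverage = dist_to_edge / max(vehicle_speed, 1e-9)
    # Migrate when remaining computation plus result transmission outlasts
    # the residual dwell time in the cell.
    return remaining_compute_s + result_tx_s > time_in_coverage

# Example: a 20 m/s vehicle 120 m from a 300 m-range RSU, 5 s of work left.
print(must_migrate((120.0, 0.0), 20.0, (0.0, 0.0), 300.0, 5.0, 1.0))  # False
```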
Citations: 0
On the right choice of data from popular datasets for Internet traffic classification
IF 4.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-22 · DOI: 10.1016/j.comcom.2025.108068
Jacek Krupski, Marcin Iwanowski, Waldemar Graniszewski
Machine learning (ML) models used to analyze Internet traffic, like models in all other fields of ML, need to be fed training datasets. Many such sets consist of labeled samples of collected traffic from harmful and benign traffic classes, captured from actual traffic. Since traffic recording tools capture all transmitted data, the samples contain much information related to the recording process that is irrelevant to the actual traffic class. Moreover, they are not fully anonymized. The data therefore need to be preprocessed before modeling, which should always be addressed in related studies but often is not. In this paper, we focus on how the efficiency of threat detection ML models depends on selecting the appropriate data samples from the training sets during preprocessing. We analyze three popular datasets: USTC-TFC2016, VPN-nonVPN, and TOR-nonTOR, which are widely used in traffic classification, security, and privacy-enhancing technologies research. We show that some choices of data fields, although they maximize the model's measured efficiency, do not yield similar outcomes on traffic data other than the learning set. The reason is that, in these cases, the models are biased by incidental correlations present in the training datasets, introduced by auxiliary data related to the traffic capturing and transmission process. Such correlations are present in popular datasets but may never appear in operational traffic. Consequently, models trained on such datasets without any preprocessing and anonymization will never reach, on real traffic, the accuracy levels observed on the training data. Our paper introduces five consecutive levels of traffic data anonymization and shows that only the highest level yields correct learning results. We validate the results using decision trees, random forests, and Extra Trees models. Having found the part of the header data that may safely be used, we then study the length of the remaining traffic data to find the minimal length that preserves good detection accuracy.
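To make the preprocessing argument concrete, the sketch below masks capture-specific header fields (MAC and IP addresses) in a raw Ethernet II + IPv4 frame before the bytes reach a model, so incidental correlations from the recording setup cannot be learned. It is a simplified illustration, not the paper's five-level scheme; the offsets and the truncation length are assumptions.

```python
def anonymize_frame(frame: bytes, keep_len: int = 784) -> bytes:
    """Mask recording-specific fields and normalize the sample length."""
    b = bytearray(frame)
    b[0:12] = bytes(12)                            # destination + source MAC
    if len(b) >= 34 and b[12:14] == b"\x08\x00":   # EtherType 0x0800 = IPv4
        b[26:34] = bytes(8)                        # source + destination IP
    # Truncate/pad to a fixed length so samples are directly comparable.
    b = b[:keep_len].ljust(keep_len, b"\x00")
    return bytes(b)
```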
Citations: 0
Evaluating Conditional handover for 5G networks with dynamic obstacles
IF 4.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-21 · DOI: 10.1016/j.comcom.2025.108067
Souvik Deb, Megh Rathod, Rishi Balamurugan, Shankar K. Ghosh, Rajeev Kumar Singh, Samriddha Sanyal
To enhance seamless connectivity in millimetre-wave New Radio networks, Conditional handover has evolved as a promising solution. Unlike A3 handover, where handover execution is certain once the handover command is received from the serving access network, in Conditional handover the execution is conditional on Reference Signal Received Power (RSRP) measurements from the current and target access networks, as well as on handover parameters such as the preparation and execution offsets. The presence of dynamic obstacles may block the signal from the serving and (or) target access networks, violating the conditions for handover preparation/execution. Moreover, signal blockage by dynamic obstacles may cause radio link failure, which may in turn cause handover failure. Analytic evaluation of Conditional handover in the presence of dynamic obstacles is quite limited in the existing literature. In this work, the performance of Conditional handover is analysed in terms of handover latency, handover packet loss, and handover failure probability. A Markov model accounting for the effect of dynamic obstacles, handover parameters (e.g., execution offset, preparation offset, time-to-preparation and time-to-execution), user velocity, and channel fading characteristics is proposed to characterize handover failure. Results obtained from the proposed analytic model are validated against simulation results. Our study reveals that the optimal configuration of handover parameters is itself conditional on the presence of dynamic obstacles, user velocity, and fading characteristics. This study will help mobile operators configure handover parameters for New Radio systems where dynamic obstacles are present.
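The trigger logic under study can be summarized in a few lines. The sketch below (an illustrative reconstruction, not the authors' model) scans per-sample RSRP traces and reports when the preparation and execution conditions are satisfied; offsets are in dB, and the time-to-trigger values are in samples.

```python
def cho_events(rsrp_serving, rsrp_target, o_prep, o_exec, t_prep, t_exec):
    """Return (prep_index, exec_index); either may be None if never triggered.

    Offsets are in dB; t_prep / t_exec are time-to-trigger in samples."""
    prep_at = exec_at = None
    prep_run = exec_run = 0
    for i, (s, t) in enumerate(zip(rsrp_serving, rsrp_target)):
        # The condition must hold continuously for the whole time-to-trigger.
        prep_run = prep_run + 1 if t > s + o_prep else 0
        exec_run = exec_run + 1 if t > s + o_exec else 0
        if prep_at is None and prep_run >= t_prep:
            prep_at = i  # target cell prepared
        # Execution requires an already-prepared target cell.
        if prep_at is not None and exec_at is None and exec_run >= t_exec:
            exec_at = i
            break
    return prep_at, exec_at

# A blockage dip in the target RSRP (illustrative trace) delays execution:
serving = [-100] * 12
target = [-96, -96, -95, -94, -110, -110, -93, -92, -92, -91, -91, -90]
print(cho_events(serving, target, o_prep=2, o_exec=5, t_prep=2, t_exec=3))  # (1, 8)
```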
Citations: 0
A multi-agent enhanced DDPG method for federated learning resource allocation in IoT
IF 4.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-21 · DOI: 10.1016/j.comcom.2025.108066
Yue Sun, Hui Xia, Chuxiao Su, Rui Zhang, Jieru Wang, Kunkun Jia
In the Internet of Things (IoT), federated learning (FL) is a distributed machine learning method that significantly improves model performance by using local device data for collaborative training. However, applying FL in the IoT also presents new challenges: the significant differences in computing and communication capabilities among IoT devices, together with limited resources, make efficient resource allocation crucial. This paper proposes a multi-agent enhanced deep deterministic policy gradient method (MAEDDPG) based on deep reinforcement learning to obtain the optimal resource allocation strategy. First, MAEDDPG introduces long short-term memory networks to address the partial observability problem in multi-agent settings. Second, noisy networks are employed during training to enhance exploration and prevent the model from getting stuck in local optima. Finally, an enhanced double-critic network is designed to reduce the error in value function estimation. MAEDDPG effectively obtains the optimal resource allocation strategy, coordinating the computing and communication resources of the IoT devices and thereby balancing FL training time against IoT device energy consumption. Experimental results show that the proposed MAEDDPG method outperforms the state-of-the-art method in the IoT, reducing the average system cost by 12.4%.
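As a concrete illustration of the enhanced double-critic idea, the PyTorch sketch below computes a TD target from the minimum of two target critics, which is what curbs value overestimation; it is a generic clipped double-Q fragment under assumed actor/critic callables, not the paper's full MAEDDPG.

```python
import torch

def double_critic_target(critic1_t, critic2_t, actor_t,
                         next_obs, reward, done, gamma=0.99):
    """TD target y = r + gamma * (1 - done) * min(Q1', Q2')(s', a')."""
    with torch.no_grad():
        next_act = actor_t(next_obs)          # target policy's next action
        q1 = critic1_t(next_obs, next_act)
        q2 = critic2_t(next_obs, next_act)
        # Taking the element-wise minimum of the two critics curbs the
        # value-overestimation bias of a single critic.
        return reward + gamma * (1.0 - done) * torch.min(q1, q2)
```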
Citations: 0
Enhancing healthcare infrastructure resilience through agent-based simulation methods
IF 4.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-20 · DOI: 10.1016/j.comcom.2025.108070
David Carramiñana, Ana M. Bernardos, Juan A. Besada, José R. Casar
Critical infrastructures face demanding challenges due to natural and human-generated threats, such as pandemics, workforce shortages, or cyber-attacks, which can severely compromise service quality. To improve system resilience, decision-makers need intelligent tools for quick and efficient resource allocation. This article explores an agent-based simulation model that aims to capture part of the complexity of critical infrastructure systems, particularly the interdependencies between healthcare systems and information and telecommunication systems. Such a model enables a simulation-based optimization approach in which the exposure of critical systems to risk is evaluated while the mitigating effects of multiple tactical and strategic decision alternatives for enhancing resilience are compared. The proposed model is designed to be parameterizable, so it can be adapted to risk scenarios of differing severity, and it facilitates the compilation of relevant performance indicators for monitoring at both the agent and system levels. To validate the agent-based model, a literature-supported methodology has been used to perform cross-validation and sensitivity analysis and to test the model's usefulness through a use case. The use case analyzes the impact of a concurrent pandemic and cyber-attack on a hospital and compares different resilience-enhancing countermeasures using contingency tables. Overall, the use case illustrates the feasibility and versatility of the proposed approach.
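The following toy Python sketch illustrates the general pattern of such an agent-based model: parameterizable risk scenarios perturb capacity, and system-level indicators are compiled per tick. It uses a deliberately simplified hospital with illustrative rates and is not the article's model.

```python
import random

def simulate(ticks=200, beds=50, pandemic=False, cyber_attack=False, seed=1):
    """One scenario run; returns system-level indicators."""
    rng = random.Random(seed)
    arrival_rate = 4 if pandemic else 2               # mean patients per tick
    capacity = beds // 2 if cyber_attack else beds    # IT outage halves throughput
    occupied = treated = rejected = 0
    for _ in range(ticks):
        occupied = max(0, occupied - rng.randint(0, 3))  # discharges
        for _ in range(rng.randint(0, 2 * arrival_rate)):
            if occupied < capacity:
                occupied += 1
                treated += 1
            else:
                rejected += 1                          # service-quality loss
    return {"treated": treated, "rejected": rejected}

# Contingency-table-style comparison across scenarios:
for pandemic, attack in [(False, False), (True, False), (True, True)]:
    print((pandemic, attack), simulate(pandemic=pandemic, cyber_attack=attack))
```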
Citations: 0
DPS-IIoT: Non-interactive zero-knowledge proof-inspired access control towards information-centric Industrial Internet of Things
IF 4.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-20 · DOI: 10.1016/j.comcom.2025.108065
Dun Li, Noel Crespi, Roberto Minerva, Wei Liang, Kuan-Ching Li, Joanna Kołodziej
The advancements in 5G/6G communication technologies have enabled the rapid development and expanded application of the Industrial Internet of Things (IIoT). However, the limitations of traditional host-centric networks are becoming increasingly evident, especially in meeting the IIoT's growing demands for higher data speeds, enhanced privacy protection, and improved resilience to disruptions. In this work, we present the ZK-CP-ABE algorithm, a novel framework designed to improve the security and efficiency of content distribution within the IIoT. By integrating a non-interactive zero-knowledge proof (ZKP) protocol for user authentication and data validation into existing Ciphertext-Policy Attribute-Based Encryption (CP-ABE), the ZK-CP-ABE algorithm substantially improves privacy protection while managing bandwidth usage efficiently. Furthermore, we propose the Distributed Publish-Subscribe Industrial Internet of Things (DPS-IIoT) system, which uses Hyperledger Fabric blockchain technology to deploy access policies and protect the integrity of the ZKPs against tampering and cyber-attacks, thus enhancing the security and reliability of IIoT networks. Extensive experiments validate the effectiveness of our approach: the proposed ZK-CP-ABE algorithm significantly reduces bandwidth consumption while maintaining robust security against unauthorized access, and together the ZK-CP-ABE algorithm and DPS-IIoT system significantly enhance bandwidth efficiency and overall throughput in IIoT environments.
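To illustrate the non-interactive ZKP building block, the sketch below implements a Schnorr proof of knowledge made non-interactive with the Fiat–Shamir heuristic: the prover shows knowledge of a secret exponent without revealing it. This is a generic illustration, not the paper's ZK-CP-ABE construction, and the toy group parameters are unsafe for real use.

```python
import hashlib, secrets

P = 2**127 - 1   # a known Mersenne prime; a toy modulus, NOT production-safe
G = 3            # assumed generator for illustration
Q = P - 1

def h(*parts):
    """Fiat–Shamir challenge: hash the public transcript to an exponent."""
    return int.from_bytes(
        hashlib.sha256("|".join(map(str, parts)).encode()).digest(), "big") % Q

def prove(secret_x):
    """Prove knowledge of x such that y = G^x mod P, without revealing x."""
    y = pow(G, secret_x, P)
    r = secrets.randbelow(Q)
    commit = pow(G, r, P)
    c = h(G, y, commit)            # challenge derived by hashing, no verifier round-trip
    s = (r + c * secret_x) % Q
    return y, commit, s

def verify(y, commit, s):
    c = h(G, y, commit)
    return pow(G, s, P) == (commit * pow(y, c, P)) % P  # G^s == commit * y^c

y, commit, s = prove(secrets.randbelow(Q))
print(verify(y, commit, s))  # True
```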
Citations: 0
Just a little human intelligence feedback! Unsupervised learning assisted supervised learning data poisoning based backdoor removal
IF 4.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-18 · DOI: 10.1016/j.comcom.2025.108052
Ting Luo, Huaibing Peng, Anmin Fu, Wei Yang, Lihui Pang, Said F. Al-Sarawi, Derek Abbott, Yansong Gao
Backdoor attacks on deep learning (DL) models are recognized as one of the most alarming security threats, particularly in security-critical applications. A primary source of backdoor introduction is data outsourcing, such as when data is aggregated from third parties or end Internet of Things (IoT) devices, which are susceptible to various attacks. Significant defensive efforts have been made to counteract backdoor attacks, but the majority are ineffective against either evolving trigger types or backdoor types. This study proposes a poisoned data detection method, termed LABOR (unsupervised Learning Assisted supervised learning data poisoning based BackdOor Removal), that incorporates a little human intelligence feedback. LABOR is specifically devised to counter backdoors induced by dirty-label data poisoning on the most common classification tasks. The key insight is that regardless of the underlying trigger type (e.g., patch or imperceptible triggers) and intended backdoor type (e.g., universal or partial backdoor), poisoned samples still preserve the semantic features of their original classes. By clustering these samples by their original categories through unsupervised learning, with category identification assisted by human intelligence, LABOR can detect and remove poisoned samples by identifying discrepancies between cluster categories and classification model predictions. Extensive experiments on eight benchmark datasets, including an intrusion detection dataset relevant to IoT device protection, validate LABOR's effectiveness against dirty-label poisoning-based backdoor attacks. LABOR's robustness is further demonstrated across various trigger and backdoor types, as well as diverse data modalities, including image, audio, and text.
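A minimal sketch of the detection step: cluster samples by semantic features (poisoned samples still cluster with their true class), let a human name each cluster, and flag samples whose given label disagrees with their cluster's class. The feature extraction is assumed to happen elsewhere; this is an illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def flag_poisoned(features, given_labels, n_classes, cluster_to_class):
    """cluster_to_class: the 'little human intelligence feedback', mapping
    each cluster id to the semantic class a human recognizes in it."""
    clusters = KMeans(n_clusters=n_classes, n_init=10).fit_predict(features)
    semantic = np.array([cluster_to_class[c] for c in clusters])
    # A dirty-label poisoned sample keeps class-A semantics but carries an
    # attacker-chosen label B, so semantic class and given label disagree.
    return np.flatnonzero(semantic != np.asarray(given_labels))
```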
Citations: 0
COIChain: Blockchain scheme for privacy data authentication in cross-organizational identification
IF 4.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-16 · DOI: 10.1016/j.comcom.2025.108054
Zhexuan Yang, Xiao Qu, Zeng Chen, Guozi Sun
In cross-institutional user authentication, users' personal private information is often exposed to the risk of disclosure and abuse. Users should have the right to decide about their own data, and others should not be able to use those data without the users' permission. In this study, we adopt a user-centered framework in which users obtain authorization from different resource owners through qualification proofs, avoiding the dissemination of their personal private data. We develop a blockchain-based cross-institutional authorization architecture in which users obtain identity authentication between different entities by structuring transactions. Through a selective disclosure algorithm, the user's private information stays hidden during identity authentication, while its authenticity is verified from the disclosed non-private information and authentication credentials. The architecture supports the generation of identity credentials of constant size based on atomic properties. We prototype the system on Ethereum and test it. Experiments show that the combined user-information processing and verification time is about 80 ms, with very small fluctuation in processing time. The results show that our data flow scheme effectively avoids privacy leakage in cross-institutional user authentication scenarios at a small cost.
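The sketch below illustrates selective disclosure with salted hash commitments: the issuer commits to every attribute, and the user later reveals only non-private attributes (with their salts) for verification. It is a simplified stand-in; the paper's constant-size, atomic-property credentials on a blockchain are not reproduced here.

```python
import hashlib, secrets

def commit_attrs(attrs):
    """Issuer side: commit to every attribute; the digests form the credential."""
    salts = {k: secrets.token_hex(16) for k in attrs}
    digests = {k: hashlib.sha256(f"{k}:{v}:{salts[k]}".encode()).hexdigest()
               for k, v in attrs.items()}
    return digests, salts

def disclose(attrs, salts, reveal):
    """User side: reveal only the chosen non-private attributes and salts."""
    return {k: (attrs[k], salts[k]) for k in reveal}

def verify(digests, disclosed):
    """Verifier side: check revealed values against the committed digests."""
    return all(hashlib.sha256(f"{k}:{v}:{s}".encode()).hexdigest() == digests[k]
               for k, (v, s) in disclosed.items())

attrs = {"name": "alice", "age": "29", "clearance": "L2"}
digests, salts = commit_attrs(attrs)
# True: clearance is verified while name and age stay hidden.
print(verify(digests, disclose(attrs, salts, ["clearance"])))
```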
Citations: 0
Reinforcement learning based offloading and resource allocation for multi-intelligent vehicles in green edge-cloud computing
IF 4.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-11 · DOI: 10.1016/j.comcom.2025.108051
Liying Li, Yifei Gao, Peiwen Xia, Sijie Lin, Peijin Cong, Junlong Zhou
The green edge-cloud computing (GECC) collaborative service architecture has become one of the mainstream frameworks for real-time, computation-intensive multi-intelligent-vehicle applications in intelligent transportation systems (ITS). In GECC systems, effective task offloading and resource allocation are critical to system performance and efficiency. Existing work on task offloading and resource allocation for multi-intelligent vehicles in GECC systems focuses on static methods, which offload tasks once or a fixed number of times. This offloading manner may lead to low resource utilization due to congestion on edge servers and is not suitable for ITS with dynamically changing parameters such as bandwidth. To solve these problems, we present a dynamic task offloading and resource allocation method that allows tasks to be offloaded an arbitrary number of times under time and resource constraints. Specifically, we consider the characteristics of tasks and propose a remaining model to obtain the states of vehicles and tasks in real time. We then present a task offloading and resource allocation method that considers both time and energy, based on a designed real-time multi-agent deep deterministic policy gradient (RT-MADDPG) model. Our approach can offload tasks an arbitrary number of times under resource and time constraints and can dynamically adjust the offloading and resource allocation solutions according to changing system states to maximize system utility, which accounts for both task processing time and energy. Extensive simulation results indicate that the proposed RT-MADDPG method effectively improves the utility of ITS compared with two benchmark methods.
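As an illustration of the time-energy utility such formulations optimize, the sketch below computes a weighted cost for a partially offloaded task, where the split ratio alpha is the partial-offloading decision; the weights, the CMOS energy constant, and all rates are illustrative assumptions, not the paper's exact model.

```python
def task_cost(cycles, data_bits, alpha, f_local, f_edge, rate_bps,
              k_chip=1e-27, w_time=0.5, w_energy=0.5):
    """alpha in [0, 1]: fraction of the task offloaded this round."""
    t_local = (1 - alpha) * cycles / f_local
    e_local = k_chip * f_local ** 2 * (1 - alpha) * cycles  # CMOS energy model
    t_off = alpha * data_bits / rate_bps + alpha * cycles / f_edge
    # Local compute and offloading proceed in parallel; the slower dominates.
    delay = max(t_local, t_off)
    return w_time * delay + w_energy * e_local

# Compare keeping the task fully local vs. offloading 70% of it:
for a in (0.0, 0.7):
    print(a, task_cost(2e9, 4e6, a, f_local=1e9, f_edge=8e9, rate_bps=20e6))
```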
Citations: 0