A novel Layer 2 framework for breaking the blockchain trilemma problem using MPC-in-the-Head
Saeed Banaeian Far, Mohammad Reza Chalak Qazani, Seyed Mojtaba Hosseini Bamakan, Azadeh Imani Rad, Ahad Zareravasan
Pub Date: 2025-02-27, DOI: 10.1016/j.comnet.2025.111148 (Computer Networks, Volume 261, Article 111148)
The blockchain trilemma, which poses inherent trade-offs among security, scalability, and decentralization, remains a critical challenge in blockchain technology. While numerous Layer 1 (L1) blockchains and Layer 2 (L2) scaling solutions have attempted to address these dimensions, no single approach has successfully optimized all three simultaneously. This study proposes an innovative L2 framework that leverages both classic and advanced cryptographic techniques to break the blockchain trilemma comprehensively. By incorporating the lightweight Knapsack-based encryption scheme, the framework achieves efficient computation and throughput, even for a large volume of transactions. Additionally, the integration of the MPC-in-the-Head (MPCitH) protocol ensures robust confidentiality while maintaining computational efficiency. The proposed framework introduces a novel reference model for evaluating blockchain solutions and demonstrates superiority across all trilemma dimensions. Experimental analysis and rigorous proofs confirm that this framework achieves enhanced scalability, decentralization, and security compared to existing approaches, offering a new benchmark for blockchain innovation.
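The abstract does not include code; as a rough illustration of how a lightweight knapsack-based scheme can encrypt small transaction payloads cheaply, the sketch below implements a classical Merkle–Hellman-style knapsack over a superincreasing sequence. It is a didactic stand-in, not the authors' construction (and plain Merkle–Hellman is not secure for real use); key sizes and parameters are illustrative.

```python
import math
import random

def keygen(n=8):
    # Superincreasing private sequence: each term exceeds the sum of all earlier terms.
    w, total = [], 0
    for _ in range(n):
        term = random.randint(total + 1, 2 * total + 2)
        w.append(term)
        total += term
    q = random.randint(total + 1, 2 * total)        # modulus larger than the sum
    r = random.randrange(2, q)
    while math.gcd(r, q) != 1:                      # multiplier must be invertible mod q
        r = random.randrange(2, q)
    public = [(r * wi) % q for wi in w]             # disguised ("hard") knapsack
    return public, (w, q, r)

def encrypt(byte, public):
    bits = [(byte >> i) & 1 for i in range(8)]
    return sum(b * p for b, p in zip(bits, public)) # subset sum over the public knapsack

def decrypt(c, private):
    w, q, r = private
    s = (c * pow(r, -1, q)) % q                     # undo the modular disguise
    byte = 0
    for i in reversed(range(8)):                    # greedy solve of the easy knapsack
        if w[i] <= s:
            s -= w[i]
            byte |= 1 << i
    return byte

pub, priv = keygen()
assert decrypt(encrypt(0xA7, pub), priv) == 0xA7
```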
{"title":"A novel Layer 2 framework for breaking the blockchain trilemma problem using MPC-in-the-Head","authors":"Saeed Banaeian Far , Mohammad Reza Chalak Qazani , Seyed Mojtaba Hosseini Bamakan , Azadeh Imani Rad , Ahad Zareravasan","doi":"10.1016/j.comnet.2025.111148","DOIUrl":"10.1016/j.comnet.2025.111148","url":null,"abstract":"<div><div>The blockchain trilemma, which poses inherent trade-offs among security, scalability, and decentralization, remains a critical challenge in blockchain technology. While numerous Layer 1 (L1) blockchains and Layer 2 (L2) scaling solutions have attempted to address these dimensions, no single approach has successfully optimized all three simultaneously. This study proposes an innovative L2 framework that leverages both classic and advanced cryptographic techniques to break the blockchain trilemma comprehensively. By incorporating the lightweight Knapsack-based encryption scheme, the framework achieves efficient computation and throughput, even for a large volume of transactions. Additionally, the integration of the MPC-in-the-Head (MPCitH) protocol ensures robust confidentiality while maintaining computational efficiency. The proposed framework introduces a novel reference model for evaluating blockchain solutions and demonstrates superiority across all trilemma dimensions. Experimental analysis and rigorous proofs confirm that this framework achieves enhanced scalability, decentralization, and security compared to existing approaches, offering a new benchmark for blockchain innovation.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"261 ","pages":"Article 111148"},"PeriodicalIF":4.4,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143521044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A lattice-based privacy-preserving decentralized multi-party payment scheme
Jisheng Dong, Qingni Shen, Junkai Liang, Cong Li, Xinyu Feng, Yuejian Fang
Pub Date: 2025-02-27, DOI: 10.1016/j.comnet.2025.111129 (Computer Networks, Volume 262, Article 111129)
The use of cryptocurrencies has become an emerging and popular way of trading as they gain legitimacy. To address the issue of privacy leakage, techniques that hide transaction amounts, such as the MimbleWimble protocol, have been proposed. However, these privacy-enhancement schemes essentially apply to one-to-one trades between a single payer and a single payee, which prevents cryptocurrencies from being used in broader scenarios involving more than one payer or payee (referred to as multi-party transactions in this paper). In this work, we propose a new privacy-preserving decentralized multi-party payment (PDMP) scheme that keeps the transaction amounts in multi-party transactions confidential from other parties, and we define its ideal functionality, which captures the privacy and security properties expected of cryptocurrencies. We then instantiate a lattice-based PDMP protocol in a hybrid model that securely realizes this functionality in the universal composability (UC) framework, with a simulation-based security proof. To support the protocol, we construct a lattice-based verifiable multi-secret sharing scheme and a lattice-based multi-prover non-interactive zero-knowledge argument, both of which remain secure in the coming era of quantum computers. Finally, we implement the scheme experimentally to demonstrate its feasibility.
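As a classical point of reference for how a confidential amount can be split among several transaction parties and later reconstructed, the sketch below implements plain Shamir secret sharing over a prime field. It only illustrates the sharing/recombination idea; the paper's construction is a lattice-based verifiable multi-secret sharing scheme with post-quantum security, which this sketch does not attempt to reproduce. The field modulus, threshold, and amount are illustrative.

```python
import random

P = 2**61 - 1  # Mersenne prime used as the field modulus (illustrative choice)

def share(secret, threshold, parties):
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, parties + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

amount = 125_000                      # confidential transaction amount (illustrative)
shares = share(amount, threshold=3, parties=5)
assert reconstruct(shares[:3]) == amount
```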
{"title":"A lattice-based privacy-preserving decentralized multi-party payment scheme","authors":"Jisheng Dong , Qingni Shen , Junkai Liang , Cong Li , Xinyu Feng , Yuejian Fang","doi":"10.1016/j.comnet.2025.111129","DOIUrl":"10.1016/j.comnet.2025.111129","url":null,"abstract":"<div><div>The use of cryptocurrencies has become an emerging and popular way of trading as they gain legitimacy. To address the issue of privacy leakage, some techniques to hide transaction amounts have been proposed such as the MimbleWimble protocol. However, these privacy enhancement schemes basically apply to one-to-one tradings between one payer and one payee, resulting in cryptocurrencies not being used in broader scenarios such as more than one payer or payee (referred to as multi-party transactions in this paper). In this work, we propose a new privacy-preserving decentralized multi-party payment (PDMP) scheme that ensures the transaction amounts in multi-party transactions remain confidential to other parties, and define the ideal functionality for it which captures the privacy and security properties in cryptocurrencies. Then we instantiate a lattice-based PDMP protocol in a hybrid model which can universally composable (UC) securely realize the functionality with a simulation-based security proof. We construct a lattice-based verifiable multi-secret sharing scheme and a lattice-based multi-prover non-interactive zero-knowledge argument to support the protocol, both of which enjoy the security in the future quantum computer era. At last, we have carried out experimental implementation of the scheme to prove its feasibility.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"262 ","pages":"Article 111129"},"PeriodicalIF":4.4,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DDoSBERT: Fine-tuning variant text classification bidirectional encoder representations from transformers for DDoS detection
Thi-Thu-Huong Le, Shinwook Heo, Jaehan Cho, Howon Kim
Pub Date: 2025-02-26, DOI: 10.1016/j.comnet.2025.111150 (Computer Networks, Volume 262, Article 111150)
The imperative for robust detection mechanisms has grown in the face of increasingly sophisticated Distributed Denial of Service (DDoS) attacks. This paper introduces DDoSBERT, an innovative approach that harnesses transformer text classification for DDoS detection. The methodology explores feature selection methods in detail, focusing on Correlation, Mutual Information, and Univariate Feature Selection. Motivated by the dynamic landscape of DDoS attacks, DDoSBERT confronts contemporary challenges such as binary classification, multi-attack classification, and imbalanced attack classification. The methodology examines diverse text transformation techniques for feature selection and employs three transformer classification models: distilbert-base-uncased, prunebert-base-uncased-6-finepruned-w-distil-mnli, and distilbert-base-uncased-finetuned-sst-2-english. Additionally, the paper outlines a comprehensive framework for assessing feature importance across five DDoS datasets: APA-DDoS, CRCDDoS2022, DDoS Attack SDN, CIC-DDoS-2019, and BCCC-cPacket-Cloud-DDoS-2024. The experimental results, rigorously evaluated against relevant benchmarks, confirm the efficacy of DDoSBERT and underscore its significance in enhancing system resilience against DDoS attacks through text-based transformation. The discussion section interprets the results, highlights the implications of the findings, and acknowledges limitations while suggesting avenues for future research.
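As a minimal, hedged illustration of the feature-selection stage described above, the snippet below ranks synthetic flow features by mutual information and serializes the selected ones into a short text string of the kind a fine-tuned DistilBERT classifier could consume. The feature names, synthetic data, and text template are invented for the example; the actual DDoSBERT pipeline, datasets, and transformer fine-tuning are described in the paper.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
# Synthetic flow records: columns are illustrative traffic features.
feature_names = ["pkt_rate", "byte_rate", "syn_ratio", "avg_pkt_len", "flow_duration"]
X = rng.normal(size=(500, len(feature_names)))
y = rng.integers(0, 2, size=500)          # 0 = benign, 1 = DDoS (synthetic labels)
X[y == 1, 0] += 2.0                       # make pkt_rate informative for the attack class

# Mutual-information-based univariate selection (one of the techniques the paper lists).
selector = SelectKBest(score_func=mutual_info_classif, k=3).fit(X, y)
selected = [n for n, keep in zip(feature_names, selector.get_support()) if keep]
print("selected features:", selected)

# Text transformation: turn one flow's selected features into a sentence-like string
# that a DistilBERT-style text classifier could tokenize (template is illustrative).
row = X[0]
text = " ".join(f"{n}={row[feature_names.index(n)]:.2f}" for n in selected)
print("text input for the transformer:", text)
```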
{"title":"DDoSBERT: Fine-tuning variant text classification bidirectional encoder representations from transformers for DDoS detection","authors":"Thi-Thu-Huong Le , Shinwook Heo , Jaehan Cho , Howon Kim","doi":"10.1016/j.comnet.2025.111150","DOIUrl":"10.1016/j.comnet.2025.111150","url":null,"abstract":"<div><div>The imperative for robust detection mechanisms has grown in the face of increasingly sophisticated Distributed Denial of Service (DDoS) attacks. This paper introduces DDoSBERT, an innovative approach harnessing transformer text classification for DDoS detection. The methodology conducts a detailed exploration of feature selection methods, emphasizing the selection of critical techniques, including Correlation, Mutual Information, and Univariate Feature Selection. Motivated by the dynamic landscape of DDoS attacks, DDoSBERT confronts contemporary challenges such as binary and multi-attack classification and imbalance attack classification. The methodology delves into diverse text transformation techniques for feature selection and employs three transformer classification models: distilbert-base-uncased, prunebert-base-uncased-6-finepruned-w-distil-mnli, and distilbert-base-uncased-finetuned-sst-2-english. Additionally, the paper outlines a comprehensive framework for assessing the importance of features in the context of five DDoS datasets, comprised of APA-DDoS, CRCDDoS2022, DDoS Attack SDN, CIC-DDoS-2019, and BCCC-cPacket-Cloud-DDoS-2024 datasets. The experimental results, rigorously evaluated against relevant benchmarks, affirm the efficacy of DDoSBERT, underscoring its significance in enhancing the resilience of systems against text-based transformation DDoS attacks. The discussion section interprets the results, highlights the implications of the findings, and acknowledges limitations while suggesting avenues for future research.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"262 ","pages":"Article 111150"},"PeriodicalIF":4.4,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143551387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Network slicing in aerial base station (UAV-BS) towards coexistence of heterogeneous 5G services
Debashisha Mishra, Emiliano Traversi, Angelo Trotta, Prasanna Raut, Boris Galkin, Marco Di Felice, Enrico Natalizio
Pub Date: 2025-02-25, DOI: 10.1016/j.comnet.2025.111146 (Computer Networks, Volume 261, Article 111146)
Unmanned aerial vehicle base stations (UAV-BSs) empowered with network slicing capabilities are presented in this work to support three heterogeneous classes of 5G slice service types, namely enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (uRLLC), and massive machine-type communication (mMTC). The coexistence of eMBB, uRLLC, and mMTC services multiplexed over common UAV-BS radio resources leads to a highly challenging downlink scheduling problem due to the underlying trade-offs among end-user requirements in terms of coverage, traffic demand, data rates, latency, and reliability, as well as UAV-specific constraints. To this end, a modular and customizable two-phase resource slicing optimization framework for UAV-BSs, the gEneral rAn Slicing optImizEr fRamework (EASIER), is proposed and decomposed into: (i) a resource optimizer (RO) and (ii) a scheduling validator (SV). The interplay of the RO and SV, guided by the above split optimization model, generates efficient scheduling decisions that suit UAV platforms constrained by finite computation and endurance. Furthermore, prioritizing the per-slice user acceptance rate, our results show that EASIER not only adheres to the slice-specific SLAs (service level agreements) specified by the slice owners (i.e., tenants), but also benefits from efficient UAV-BS positioning, improving its service offering by 15% compared to a slice-agnostic “default” positioning.
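To make the two-phase idea concrete, here is a toy sketch of a resource optimizer (RO) proposing per-slice resource-block shares and a scheduling validator (SV) checking them against simple per-slice minimums, looping until the proposal validates. The slice weights, minimum requirements, and feedback rule are invented and far simpler than the optimization model in the paper.

```python
# Toy two-phase loop in the spirit of EASIER's RO/SV decomposition (illustrative only).
TOTAL_PRBS = 100
slices = {
    # name: weight used by the RO, minimum PRBs the SV requires (invented numbers)
    "eMBB":  {"weight": 5.0, "min_prbs": 40},
    "uRLLC": {"weight": 2.0, "min_prbs": 30},
    "mMTC":  {"weight": 1.0, "min_prbs": 15},
}

def resource_optimizer(slices):
    # Phase 1: split PRBs proportionally to (possibly adjusted) slice weights.
    total_w = sum(s["weight"] for s in slices.values())
    return {name: int(TOTAL_PRBS * s["weight"] / total_w) for name, s in slices.items()}

def scheduling_validator(allocation, slices):
    # Phase 2: report slices whose minimum requirement is violated.
    return [name for name, s in slices.items() if allocation[name] < s["min_prbs"]]

for iteration in range(10):
    allocation = resource_optimizer(slices)
    violated = scheduling_validator(allocation, slices)
    if not violated:
        print("validated allocation after", iteration + 1, "iteration(s):", allocation)
        break
    for name in violated:            # feedback: boost the weight of violated slices
        slices[name]["weight"] *= 1.5
else:
    print("no feasible allocation found within the iteration budget")
```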
{"title":"Network slicing in aerial base station (UAV-BS) towards coexistence of heterogeneous 5G services","authors":"Debashisha Mishra , Emiliano Traversi , Angelo Trotta , Prasanna Raut , Boris Galkin , Marco Di Felice , Enrico Natalizio","doi":"10.1016/j.comnet.2025.111146","DOIUrl":"10.1016/j.comnet.2025.111146","url":null,"abstract":"<div><div>Unmanned aerial vehicle base stations (UAV-BSs) empowered with network slicing capabilities are presented in this work to support three heterogeneous classes of 5G slice service types, namely enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (uRLLC), massive machine-type communication (mMTC). The coexistence of eMBB, uRLLC and mMTC services multiplexed over common UAV-BS radio resources leads to an incredibly challenging downlink scheduling problem due to the underlying trade-off of end-user requirements in terms of coverage, traffic demand, data rates, latency, reliability, and UAV-specific constraints. To this end, a modular and customizable two-phase resource slicing optimization framework is proposed for UAV-BS known as gEneral rAn Slicing optImizEr fRamework (<em>EASIER</em>) decomposed into: (i) resource optimizer (RO) and (ii) scheduling validator (SV). The reciprocation of RO and SV guided by above split optimization model can generate efficient scheduling decisions that benefit constrained UAV platforms in terms of finite computation and endurance. Furthermore, prioritizing per slice user acceptance rate, our results show that <em>EASIER</em> not only adheres to slice-specific SLAs (service level agreements) specified by the slice owners (<em>i.e.,</em> tenants), but also benefit from efficient UAV-BS positioning to improvise service offering by 15% as compared to a slice-agnostic “default” positioning.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"261 ","pages":"Article 111146"},"PeriodicalIF":4.4,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143508269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
XAInomaly: Explainable and interpretable Deep Contractive Autoencoder for O-RAN traffic anomaly detection
Osman Tugay Basaran, Falko Dressler
Pub Date: 2025-02-25, DOI: 10.1016/j.comnet.2025.111145 (Computer Networks, Volume 261, Article 111145)
Generative Artificial Intelligence (AI) techniques have become an integral part of advancing next-generation wireless communication systems by enabling sophisticated data modeling and feature extraction for enhanced network performance. In the realm of open radio access networks (O-RAN), characterized by their disaggregated architecture and heterogeneous components from multiple vendors, the deployment of generative models offers significant advantages for network management tasks such as traffic analysis, traffic forecasting, and anomaly detection. However, the complex and dynamic nature of O-RAN introduces challenges that demand not only accurate detection mechanisms but also reduced complexity, scalability, and, most importantly, interpretability to facilitate effective network management. In this study, we introduce the XAInomaly framework, an explainable and interpretable Semi-supervised (SS) Deep Contractive Autoencoder (DeepCAE) design for anomaly detection in O-RAN. Our approach leverages the generative modeling capabilities of our SS-DeepCAE model to learn compressed, robust representations of normal network behavior that capture essential features, enabling the identification of deviations indicative of anomalies. To address the black-box nature of deep learning models, we propose a reactive Explainable AI (XAI) technique called fastshap-C, which provides transparency into the model’s decision-making process and highlights the features contributing to anomaly detection.
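For readers unfamiliar with the contractive part of a Deep Contractive Autoencoder, the sketch below computes, for a single-layer sigmoid encoder in NumPy, the standard contractive loss: reconstruction error plus the squared Frobenius norm of the encoder's Jacobian. It is a one-layer didactic example with random weights and synthetic input, not the paper's semi-supervised SS-DeepCAE architecture or its fastshap-C explanation component.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 20, 8                 # illustrative sizes for one traffic-feature vector
W = rng.normal(scale=0.3, size=(n_hidden, n_in))      # encoder weights
b = np.zeros(n_hidden)
W_dec = rng.normal(scale=0.3, size=(n_in, n_hidden))  # decoder weights (untied here)
b_dec = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_loss(x, lam=0.1):
    h = sigmoid(W @ x + b)                       # encoding
    x_hat = W_dec @ h + b_dec                    # reconstruction
    recon = np.mean((x - x_hat) ** 2)
    # For a sigmoid encoder, dh_j/dx_i = h_j(1-h_j) W_ji, so the squared Frobenius
    # norm of the Jacobian factorizes as below (the contractive penalty).
    jac_fro2 = np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))
    return recon + lam * jac_fro2, recon

x = rng.normal(size=n_in)                        # one synthetic "normal traffic" sample
loss, recon = contractive_loss(x)
print(f"contractive loss={loss:.4f}, reconstruction error={recon:.4f}")
# At detection time, samples whose reconstruction error exceeds a threshold learned
# on normal traffic would be flagged as anomalous.
```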
{"title":"XAInomaly: Explainable and interpretable Deep Contractive Autoencoder for O-RAN traffic anomaly detection","authors":"Osman Tugay Basaran, Falko Dressler","doi":"10.1016/j.comnet.2025.111145","DOIUrl":"10.1016/j.comnet.2025.111145","url":null,"abstract":"<div><div>Generative Artificial Intelligence (AI) techniques have become integral part in advancing next generation wireless communication systems by enabling sophisticated data modeling and feature extraction for enhanced network performance. In the realm of open radio access networks (O-RAN), characterized by their disaggregated architecture and heterogeneous components from multiple vendors, the deployment of generative models offers significant advantages for network management such as traffic analysis, traffic forecasting and anomaly detection. However, the complex and dynamic nature of O-RAN introduces challenges that necessitate not only accurate detection mechanisms but also reduced complexity, scalability, and most importantly interpretability to facilitate effective network management. In this study, we introduce the XAInomaly framework, an explainable and interpretable Semi-supervised (SS) Deep Contractive Autoencoder (DeepCAE) design for anomaly detection in O-RAN. Our approach leverages the generative modeling capabilities of our SS-DeepCAE model to learn compressed, robust representations of normal network behavior, which captures essential features, enabling the identification of deviations indicative of anomalies. To address the black-box nature of deep learning models, we propose reactive Explainable AI (XAI) technique called fastshap-C, which is providing transparency into the model’s decision-making process and highlighting the features contributing to anomaly detection.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"261 ","pages":"Article 111145"},"PeriodicalIF":4.4,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143521043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MTCR-AE: A Multiscale Temporal Convolutional Recurrent Autoencoder for unsupervised malicious network traffic detection
Mukhtar Ahmed, Jinfu Chen, Ernest Akpaku, Rexford Nii Ayitey Sosu
Pub Date: 2025-02-24, DOI: 10.1016/j.comnet.2025.111147 (Computer Networks, Volume 261, Article 111147)
The increasing sophistication of network attacks, particularly zero-day threats, underscores the need for robust, unsupervised detection methods. These attacks can flood networks with malicious traffic, overwhelm resources, or render services unavailable to legitimate users. Existing machine learning methods for zero-day attack detection typically rely on statistical features of network traffic, such as packet sizes and inter-arrival times. However, traditional approaches that depend on manually labeled data and linear structures often struggle to capture the intricate spatiotemporal correlations crucial for detecting unknown attacks. This paper introduces the Multiscale Temporal Convolutional Recurrent Autoencoder (MTCR-AE), an innovative framework designed to detect malicious network traffic by leveraging Multiscale Temporal Convolutional Networks (TCN) and Gated Recurrent Units (GRU). The MTCR-AE model captures both short- and long-range spatiotemporal dependencies while incorporating a temporal attention mechanism to dynamically prioritize critical features. The MTCR-AE operates in an unsupervised manner, eliminating the need for manual data labeling and enhancing its scalability for real-world applications. Experimental evaluations conducted on four benchmark datasets — ISCX-IDS-2012, USTC-TFC-2016, CIRA-CIC-DoHBrw2020, and CICIoT2023 — demonstrate the model’s superior performance, achieving an accuracy of 99.69%, precision of 99.63%, recall of 99.69%, and an F1-score of 99.66%. The results highlight the model’s capability to deliver state-of-the-art detection performance while maintaining low false positive and false negative rates, offering a scalable and reliable solution for dynamic network environments.
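As a small illustration of the multiscale temporal convolution idea (the TCN side of MTCR-AE), the snippet below applies causal dilated 1-D convolutions at several dilation rates to a synthetic traffic-volume sequence in NumPy. The kernel values, dilation set, and sequence are invented; the paper's model additionally stacks GRU layers, a temporal attention mechanism, and a full encoder-decoder, none of which are reproduced here.

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    # y[t] depends only on x[t], x[t-d], x[t-2d], ... (zero-padded on the left).
    k = len(kernel)
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        for i in range(k):
            idx = t - i * dilation
            if idx >= 0:
                y[t] += kernel[i] * x[idx]
    return y

rng = np.random.default_rng(0)
x = rng.poisson(lam=20, size=64).astype(float)   # synthetic per-interval packet counts
kernel = np.array([0.5, 0.3, 0.2])               # illustrative 3-tap filter

# Multiscale: the same causal filter applied with growing dilations captures
# progressively longer-range temporal context, as in a TCN.
features = np.stack([causal_dilated_conv(x, kernel, d) for d in (1, 2, 4, 8)])
print("multiscale feature map shape:", features.shape)   # (4 scales, 64 time steps)
# In MTCR-AE these multiscale features would feed a GRU-based recurrent autoencoder,
# and high reconstruction error would mark a flow window as malicious.
```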
{"title":"MTCR-AE: A Multiscale Temporal Convolutional Recurrent Autoencoder for unsupervised malicious network traffic detection","authors":"Mukhtar Ahmed , Jinfu Chen , Ernest Akpaku , Rexford Nii Ayitey Sosu","doi":"10.1016/j.comnet.2025.111147","DOIUrl":"10.1016/j.comnet.2025.111147","url":null,"abstract":"<div><div>The increasing sophistication of network attacks, particularly zero-day threats, underscores the need for robust, unsupervised detection methods. These attacks can flood networks with malicious traffic, overwhelm resources, or render services unavailable to legitimate users. Existing machine learning methods for zero-day attack detection typically rely on statistical features of network traffic, such as packet sizes and inter-arrival times. However, traditional approaches that depend on manually labeled data and linear structures often struggle to capture the intricate spatiotemporal correlations crucial for detecting unknown attacks. This paper introduces the Multiscale Temporal Convolutional Recurrent Autoencoder (MTCR-AE), an innovative framework designed to detect malicious network traffic by leveraging Multiscale Temporal Convolutional Networks (TCN) and Gated Recurrent Units (GRU). The MTCR-AE model captures both short- and long-range spatiotemporal dependencies while incorporating a temporal attention mechanism to dynamically prioritize critical features. The MTCR-AE operates in an unsupervised manner, eliminating the need for manual data labeling and enhancing its scalability for real-world applications. Experimental evaluations conducted on four benchmark datasets — ISCX-IDS-2012, USTC-TFC-2016, CIRA-CIC-DoHBrw2020, and CICIoT2023 — demonstrate the model’s superior performance, achieving an accuracy of 99.69%, precision of 99.63%, recall of 99.69%, and an F1-score of 99.66%. The results highlight the model’s capability to deliver state-of-the-art detection performance while maintaining low false positive and false negative rates, offering a scalable and reliable solution for dynamic network environments.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"261 ","pages":"Article 111147"},"PeriodicalIF":4.4,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143512489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectrum efficiency through data: A methodology for evaluating local licensing strategies
Mohamad Alkadamani, Colin Brown, Kareem Baddour, Mathieu Châteauvert, Janaki Parekh, Adrian Florea
Pub Date: 2025-02-21, DOI: 10.1016/j.comnet.2025.111115 (Computer Networks, Volume 261, Article 111115)
The emerging demand for localized private networks tailored to specific and diverse use cases has increased interest in developing local spectrum licensing approaches that differ from traditional broad-coverage schemes. There is a significant need for forward-looking quantitative studies to guide technical decisions and ensure that licensing conditions align with overarching goals, such as supporting high spectrum reuse in potentially dense network deployments. To address this gap in the research literature, a novel data-driven methodology is introduced to evaluate the potential effectiveness of local spectrum licensing schemes from a regulatory perspective. This methodology utilizes real-world data to simulate prospective local deployment scenarios, capturing critical geographic details such as high-demand market areas, realistic industry locations, and high-resolution clutter information. The practical application of this methodology is demonstrated through a case study focused on the 3.9 GHz band in Canada, highlighting the importance of incorporating contextually relevant geospatial datasets to better inform local licensing regulations.
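To give a flavor of the kind of quantitative reuse question such a methodology answers, the toy sketch below scatters candidate private-network sites over an area and greedily counts how many can be licensed co-channel under a minimum separation distance. The area size, site density, and protection distances are invented and carry no regulatory meaning; the paper's methodology instead relies on real market, industry-location, and clutter datasets.

```python
import math
import random

random.seed(0)
AREA_KM = 50                 # side of a square study area (illustrative)
N_SITES = 200                # candidate private-network deployment sites (illustrative)

sites = [(random.uniform(0, AREA_KM), random.uniform(0, AREA_KM)) for _ in range(N_SITES)]

def licensable(sites, protect_km):
    # Greedy packing: license a site only if it keeps the protection distance
    # to every already-licensed co-channel site.
    licensed = []
    for s in sites:
        if all(math.dist(s, t) >= protect_km for t in licensed):
            licensed.append(s)
    return licensed

for protect in (2, 5, 10):   # candidate co-channel protection distances in km
    n = len(licensable(sites, protect))
    print(f"protection distance {protect:2d} km -> {n:3d} co-channel licences")
```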
{"title":"Spectrum efficiency through data: A methodology for evaluating local licensing strategies","authors":"Mohamad Alkadamani, Colin Brown, Kareem Baddour, Mathieu Châteauvert, Janaki Parekh, Adrian Florea","doi":"10.1016/j.comnet.2025.111115","DOIUrl":"10.1016/j.comnet.2025.111115","url":null,"abstract":"<div><div>The emerging demand for localized private networks tailored to specific and diverse use cases has increased interest in developing local spectrum licensing approaches that differ from traditional broad-coverage schemes. There is a significant need for forward-looking quantitative studies to guide technical decisions and ensure that licensing conditions align with overarching goals, such as supporting high spectrum reuse in potentially dense network deployments. To address this gap in the research literature, a novel data-driven methodology is introduced to evaluate the potential effectiveness of local spectrum licensing schemes from a regulatory perspective. This methodology utilizes real-world data to simulate prospective local deployment scenarios, capturing critical geographic details such as high-demand market areas, realistic industry locations, and high-resolution clutter information. The practical application of this methodology is demonstrated through a case study focused on the 3.9 GHz band in Canada, highlighting the importance of incorporating contextually relevant geospatial datasets to better inform local licensing regulations.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"261 ","pages":"Article 111115"},"PeriodicalIF":4.4,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143508268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid attacks collaborative defense model using an ensemble honey badger algorithm
Guosheng Zhao, Zhiwen Li, Jian Wang
Pub Date: 2025-02-21, DOI: 10.1016/j.comnet.2025.111149 (Computer Networks, Volume 261, Article 111149)
Crowdsensing networks can be targeted by multiple malicious attacks at the same time, which seriously affects network reliability and security. However, existing defense technologies mainly defend against specific types of attacks and cannot meet the security requirements posed by hybrid malicious attacks. To address this, a hybrid-attack collaborative defense model based on an integrated honey badger algorithm is proposed. First, the crowdsensing network is formally described and the collaborative mechanism of participating users is generalized; through defense-skill matching, efficient collaboration between users is achieved to meet the defense requirements of hybrid attacks. Then, the integrated honey badger algorithm is designed to solve the collaborative defense multi-objective optimization problem. The search intensity across multiple objectives is balanced through individual fitness evaluation, the strengths and weaknesses of individuals in the collaborative defense scheme are assessed against the defense objectives, and the Pareto-optimal defense scheme is selected. At the same time, the algorithm continuously updates the collaborative defense scheme by selecting the exploration method suited to the current defense state and iteratively obtains the globally optimal collaborative defense scheme. Finally, a hybrid attack simulation and a performance analysis of the collaborative defense method are carried out in a shared-bicycle scheduling scenario. The experimental results show the feasibility and effectiveness of the proposed collaborative defense method.
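As a small illustration of the Pareto-selection step described above, the snippet below filters candidate collaborative-defense schemes, each scored on three objectives, down to the non-dominated (Pareto-optimal) set. The candidate schemes and objective values are synthetic; the honey badger search dynamics that generate and refine candidates in the paper are not reproduced here.

```python
import random

random.seed(0)
# Each candidate defense scheme is scored on (coverage to maximize,
# response delay to minimize, resource cost to minimize) -- values are synthetic.
candidates = [(random.random(), random.random(), random.random()) for _ in range(30)]

def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly better in one.
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and better

def pareto_front(cands):
    return [c for c in cands if not any(dominates(other, c) for other in cands)]

front = pareto_front(candidates)
print(f"{len(front)} non-dominated defense schemes out of {len(candidates)} candidates")
# An ensemble metaheuristic such as the honey badger algorithm would iteratively
# mutate candidates and keep refining this front toward the final defense scheme.
```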
{"title":"Hybrid attacks collaborative defense model using an ensemble honey badger algorithm","authors":"Guosheng Zhao , Zhiwen Li , Jian Wang","doi":"10.1016/j.comnet.2025.111149","DOIUrl":"10.1016/j.comnet.2025.111149","url":null,"abstract":"<div><div>Crowdsensing networks can be attacked by multiple malicious attacks at the same time, which seriously affects the reliability and security of the network. However, the existing defense technology mainly defends against specific types of attacks and cannot meet the security requirements of hybrid malicious attacks. Based on this, a hybrid attack collaborative defense model based on the integrated honey badger algorithm is proposed. First, the Crowdsensing network is formally described, and the collaborative mechanism of users participating is generalized. Through defense skill matching, efficient collaboration between users is achieved to meet the defense requirements of hybrid attacks. Then, the integrated honey badger algorithm is designed to solve the collaborative defense multi-objective optimization problem. The search intensity between multiple objectives is balanced by individual fitness evaluation, and the advantages and disadvantages of individuals in the collaborative defense scheme on the defense objectives are evaluated, and the Pareto optimal defense scheme is selected. At the same time, the algorithm continuously updates the collaborative defense scheme by selecting the exploration method suitable for the current defense state, and iteratively obtains the global optimal collaborative defense scheme. Finally, hybrid attack simulation and collaborative defense method performance analysis are carried out in the shared bicycle scheduling scenario. The experimental results show the feasibility and effectiveness of the proposed collaborative defense method.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"261 ","pages":"Article 111149"},"PeriodicalIF":4.4,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143488830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ADPF: Anti-inference differentially private protocol for federated learning
Zirun Zhao, Zhaowen Lin, Yi Sun
Pub Date: 2025-02-21, DOI: 10.1016/j.comnet.2025.111130 (Computer Networks, Volume 261, Article 111130)
With the popularity of commercial artificial intelligence (AI), individual data is becoming ever more important for the construction of large models. To ensure the utility of the released model, the security of individual data must be guaranteed with high confidence. Federated learning (FL), the common paradigm for distributed learning, is usually subjected to various external attacks such as inversion attacks or membership inference attacks. Some solutions based on differential privacy (DP) have been proposed to resist data disclosure. However, the intelligence and collusion of adversaries are often underestimated during the training process. In this paper, an anti-inference differentially private federated learning protocol, ADPF, is proposed for data protection in an untrusted environment. ADPF models the attacker-defender scenario as a two-phase complete-information dynamic game and designs optimization problems to find optimal budget allocations in different phases of training. Comparative experiments demonstrate that ADPF outperforms state-of-the-art differentially private federated learning protocols in both attack resistance and model utility.
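As a hedged illustration of the budget-allocation idea, the sketch below clips a client's model update and adds Gaussian-mechanism noise, with the overall privacy budget split unevenly between two training phases. The split ratio, clip norm, and the standard Gaussian-mechanism calibration used for sigma are illustrative defaults, not ADPF's game-theoretically optimized allocation.

```python
import math
import random

random.seed(0)

def gaussian_sigma(epsilon, delta, sensitivity):
    # Standard Gaussian-mechanism calibration: sigma = sqrt(2 ln(1.25/delta)) * S / eps.
    return math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon

def privatize_update(update, clip_norm, epsilon, delta):
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / max(norm, 1e-12))     # clip to bound sensitivity
    clipped = [u * scale for u in update]
    sigma = gaussian_sigma(epsilon, delta, clip_norm)
    return [u + random.gauss(0.0, sigma) for u in clipped]

total_epsilon, delta, clip_norm = 4.0, 1e-5, 1.0
# Two-phase budget split (illustrative 30/70 allocation, not the game-optimal one).
phase_budgets = {"phase_1": 0.3 * total_epsilon, "phase_2": 0.7 * total_epsilon}

client_update = [random.uniform(-0.1, 0.1) for _ in range(10)]
for phase, eps in phase_budgets.items():
    noisy = privatize_update(client_update, clip_norm, eps, delta)
    print(phase, "first coords of noisy update:", [round(v, 3) for v in noisy[:3]])
```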
{"title":"ADPF: Anti-inference differentially private protocol for federated learning","authors":"Zirun Zhao, Zhaowen Lin, Yi Sun","doi":"10.1016/j.comnet.2025.111130","DOIUrl":"10.1016/j.comnet.2025.111130","url":null,"abstract":"<div><div>With the popularity of commercial artificial intelligence (AI), the importance of individual data is constantly increasing for the construction of large models. To ensure the utility of the released model, the security of individual data must be guaranteed with high confidence. Federated learning (FL), as the common paradigm for distributed learning, are usually subjected to various external attacks such as inversion attack or membership inference attack. Some solutions based on differential privacy (DP) are proposed to resist data revelation. However, the intelligence and collusion of adversaries are often underestimated during the training process. In this paper, an anti-inference differentially private federated learning protocol ADPF is proposed for data protection in an untrusted environment. ADPF models the attacker-defender scenario as a two-phase complete information dynamic game and designs optimization problems to find optimal budget allocations in different phases of training. Comparative experiments demonstrate that the performance of ADPF outperforms state-of-the-art differentially private federated learning protocol in both attack resistance and model utility.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"261 ","pages":"Article 111130"},"PeriodicalIF":4.4,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A fine-grained compression scheme for block transmission acceleration over IPFS network
Changpeng Zhu, Bincheng Fan, Nan Xiang, Bo Han, Tian Zhou
Pub Date: 2025-02-21, DOI: 10.1016/j.comnet.2025.111131 (Computer Networks, Volume 261, Article 111131)
The InterPlanetary File System (IPFS), a prominent peer-to-peer decentralized file-sharing network, is extensively utilized by decentralized applications such as Blockchain and Metaverse systems for data sharing. With a substantial upsurge in data, compression schemes have been incorporated into the native IPFS protocol to enhance the efficiency of block transmission over an IPFS network. However, the protocols extended by these schemes drastically impair data shareability with the native protocol, contravening the core principles of IPFS-based distributed storage.
To address this issue, this paper presents a fine-grained compression scheme for the native IPFS protocol. Its primary concept is that blocks, rather than files, are treated as the smallest compressible units for block transmission, thereby preserving data shareability with the native protocol. To realize the scheme, a three-layer architecture is first proposed and incorporated into the protocol as an independent compression component that supports a variety of compression algorithms. The Exchange and Storage layers of the protocol are then extended by leveraging this component to achieve block-level compression and decompression in the upload and download workflows, accelerating block transmission over an IPFS network. Furthermore, a block pre-request approach is proposed and incorporated into the Exchange layer to improve its block request mechanism, which frequently leaves block requests waiting, slows block provision to the compression algorithms, and thus degrades the acceleration. A comprehensive evaluation indicates that the IPFS protocol extended by our scheme offers the same level of data shareability as the native IPFS protocol, and improves block transmission performance by up to 69% in download and by as much as 201% in upload relative to the current leading coarse-grained compression scheme, referred to as IPFSz.
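To illustrate what treating blocks rather than files as the smallest compressible units means in practice, the toy sketch below chunks a payload into fixed-size blocks, compresses and content-addresses each block independently, and decompresses a single block without touching the others. The chunk size, SHA-256 addressing, and zlib codec are illustrative stand-ins; the actual IPFS block format, CID scheme, and the paper's Exchange/Storage-layer integration are not modeled here.

```python
import hashlib
import zlib

CHUNK = 256 * 1024   # illustrative block size (real IPFS chunking differs)

def split_blocks(data, chunk=CHUNK):
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

def store_compressed(blocks):
    # Each block is compressed on its own, so peers can request, verify, and
    # decompress blocks independently of the rest of the file.
    store = {}
    for raw in blocks:
        comp = zlib.compress(raw, level=6)
        cid = hashlib.sha256(raw).hexdigest()   # address by the uncompressed content (illustrative choice)
        store[cid] = comp
    return store

payload = b"".join(f"flow record {i}\n".encode() for i in range(200_000))  # synthetic data
blocks = split_blocks(payload)
store = store_compressed(blocks)

first_cid = hashlib.sha256(blocks[0]).hexdigest()
assert zlib.decompress(store[first_cid]) == blocks[0]   # block-level round trip
ratio = sum(len(v) for v in store.values()) / len(payload)
print(f"{len(blocks)} blocks, stored at {ratio:.1%} of the original size")
```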
{"title":"A fine-grained compression scheme for block transmission acceleration over IPFS network","authors":"Changpeng Zhu , Bincheng Fan , Nan Xiang , Bo Han , Tian Zhou","doi":"10.1016/j.comnet.2025.111131","DOIUrl":"10.1016/j.comnet.2025.111131","url":null,"abstract":"<div><div>Interplanetary File System (IPFS) network, a prominent peer-to-peer decentralized file-sharing network, is extensively utilized by decentralized applications such as Blockchain and Metaverse for data sharing. With a substantial upsurge in data, compression schemes have been incorporated into the native IPFS protocol to enhance the efficiency of block transmission over an IPFS network. However, the protocols extended by these schemes drastically impair data shareability with the native protocol, contravening the core principles of IPFS-based distributed storage.</div><div>To address the issue, this paper presents a fine-grained compression scheme for the native IPFS protocol. Its primary concept is that blocks rather than files are considered as the smallest compressible units for block transmission, thereby ensuring the preservation of the data shareability with the protocol. To achieve the scheme, a three-layer architecture is firstly proposed and incorporated into the protocol as an independent compression component to support a variety of compression algorithms. Consequently, <em>Exchange</em> layer and <em>Storage</em> layer in the protocol are extended by leveraging the component to achieve block-level compression and decompression in the workflows of <em>upload</em> and <em>download</em> for the acceleration of block transmission over an IPFS network. Furthermore, a block pre-request approach is proposed and incorporated into <em>Exchange</em> layer to improve the block request mechanism in the layer which frequently causes block request awaiting, reducing block provision speed for compression algorithms, thus downgrading the acceleration. A comprehensive evaluation indicates that this extended IPFS protocol by our scheme has the same level of data shareability as the native IPFS protocol, and contributes to block transmission performance enhancement in <em>download</em> by up to 69% and in <em>upload</em> by as much as 201% relative to the current leading coarse-grained compression scheme, referred to as IPFSz.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"261 ","pages":"Article 111131"},"PeriodicalIF":4.4,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}