Pub Date: 2024-07-14 | DOI: 10.1016/j.comcom.2024.07.002
Jianpo Li, Jinjian Pang, Xiaojuan Fan
In communication network planning, a rational base station layout plays a crucial role in improving communication speed, ensuring service quality, and reducing investment costs. To this end, the article calibrated the urban macrocell (UMa) signal propagation model using the least squares method, based on road test data collected from three distinct environments: dense urban areas, general urban areas, and suburbs. With the calibrated model, a detailed link budget analysis was performed on the planning area, calculating the maximum coverage radius required for a single base station to meet communication demands, and accordingly determining the number of base stations needed. Subsequently, the article proposed the Adaptive Mutation Genetic Algorithm (AMGA) and formulated a mathematical model for optimizing 5G base station coverage to improve the base station layout. Simulation experiments were conducted in three different scenarios, and the results indicate that the proposed AMGA algorithm effectively enhances base station coverage while reducing construction costs, thoroughly demonstrating the value of base station layout optimization in practical applications.
{"title":"Optimization of 5G base station coverage based on self-adaptive mutation genetic algorithm","authors":"Jianpo Li, Jinjian Pang, Xiaojuan Fan","doi":"10.1016/j.comcom.2024.07.002","DOIUrl":"10.1016/j.comcom.2024.07.002","url":null,"abstract":"<div><p>In communication network planning, a rational base station layout plays a crucial role in improving communication speed, ensuring service quality, and reducing investment costs. To address this, the article calibrated the urban microcell (UMa) signal propagation model using the least squares method, based on road test data collected from three distinct environments: dense urban areas, general urban areas, and suburbs. With the calibrated model, a detailed link budget analysis was performed on the planning area, calculating the maximum coverage radius required for a single base station to meet communication demands, and accordingly determining the number of base stations needed. Subsequently, this article proposed the Adaptive Mutation Genetic Algorithm (AMGA) and formulated a mathematical model for optimizing 5G base station coverage to improve the base station layout. Simulation experiments were conducted in three different scenarios, and the results indicate that the proposed AMGA algorithm effectively enhances base station coverage while reducing construction costs, thoroughly demonstrating the value of base station layout optimization in practical applications.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"225 ","pages":"Pages 83-95"},"PeriodicalIF":4.5,"publicationDate":"2024-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141638484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-14 | DOI: 10.1016/j.comcom.2024.07.006
Yan Zhang , Mingyu Chen , Meng Yuan , Wancheng Zhang , Luis A. Lago
The asymmetric massive multiple-input-multiple-output (MIMO) array improves system capacity and provides wide-area coverage for the Internet of Things (IoT). In this paper, we propose a novel attention-based model for path loss (PL) prediction in asymmetric massive MIMO IoT systems. To represent the propagation characteristics, a propagation image is designed that captures the detailed environment, the beamwidth pattern, and propagation statistics. Benefiting from the shuffle attention computation, the proposed model, termed a shuffle-attention-based convolutional neural network (SAN), can effectively extract the detailed features of the propagation scenario from the image. In addition, we design the beamwidth-scenario transfer learning (BWSTL) algorithm to assist the SAN model in predicting PL in new asymmetric massive MIMO IoT systems, where the beamwidth configuration and propagation scenario differ. It is shown that the proposed model outperforms the empirical model and other state-of-the-art artificial intelligence-based models. Aided by the BWSTL algorithm, the SAN model can be transferred to new propagation conditions with limited samples, which facilitates fast deployment in new asymmetric massive MIMO IoT systems.
{"title":"Attention-transfer-based path loss prediction in asymmetric massive MIMO IoT systems","authors":"Yan Zhang , Mingyu Chen , Meng Yuan , Wancheng Zhang , Luis A. Lago","doi":"10.1016/j.comcom.2024.07.006","DOIUrl":"10.1016/j.comcom.2024.07.006","url":null,"abstract":"<div><p>The asymmetric massive multiple-input–multiple-output (MIMO) array improves system capacity and provides wide-area coverage for the Internet of Things (IoT). In this paper, we propose a novel attention-based model for path loss (PL) prediction in asymmetric massive MIMO IoT systems. To represent the propagation characteristics, the propagation image that considers the detailed environment, beamwidth pattern, and propagation-statistics feature is designed. Benefiting from the shuffle attention computation, the proposed model, termed a shuffle-attention-based convolutional neural network (SAN), can effectively extract the detailed features of the propagation scenario from the image. Besides, we design the beamwidth-scenario transfer learning (BWSTL) algorithm to assist the SAN model in predicting PL in the new asymmetric massive MIMO IoT systems, where the beamwidth configuration and propagation scenario are different. It is shown that the proposed model outperforms the empirical model and other state-of-the-art artificial intelligence-based models. Aided by the BWSTL algorithm, the SAN model can be transferred to new propagation conditions with limited samples, which is beneficial to the fast deployment in the new asymmetric massive MIMO IoT systems.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"226 ","pages":"Article 107905"},"PeriodicalIF":4.5,"publicationDate":"2024-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141701428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04 | DOI: 10.1016/j.comcom.2024.06.015
Zedian Shao , Kun Yang , Peng Sun , Yulin Hu , Azzedine Boukerche
The emergence of autonomous driving technologies has been significantly influenced by advancements in perception systems. Traditional single-agent detection models, while effective in certain scenarios, exhibit limitations in complex environments, necessitating the shift towards collaborative detection models. While numerous studies have investigated the fundamental architecture and primary elements within this domain, comprehensive analyses focusing on the evolution from single-agent-based detection systems to collaborative detection systems are notably absent. This paper provides a comprehensive examination of this transition, delineating the development from single-agent to collaborative perception models in autonomous driving. Initially, this paper delves into single-agent detection models, discussing their capabilities, limitations, and application scenarios. Subsequently, the focus shifts to collaborative detection models, which leverage Vehicle-to-Everything (V2X) communication to enhance perception and decision-making in complex environments. Fundamental concepts about mainstream collaborative approaches and mechanisms are reviewed to present the general organization of collaborative detection models. Furthermore, we critically evaluate various collaborative models, comparing their performance, data fusion strategies, and adaptability in dynamic settings. The integration of V2X-enabled Internet-of-Vehicles (IoV) introduces a pivotal evolution in the transition from single-agent-based detection to multi-agent collaborative sensing. This advancement allows for real-time interaction of sensory information between vehicles, augmenting the development of collaborative sensing. However, the interaction of sensory information also increases the load on the network, highlighting the need for strategies that balance communication overhead against the improvement in perception capabilities. We conclude with future perspectives, emphasizing potential issues that the development of collaborative detection models will face and promising directions for future research.
{"title":"The evolution of detection systems and their application for intelligent transportation systems: From solo to symphony","authors":"Zedian Shao , Kun Yang , Peng Sun , Yulin Hu , Azzedine Boukerche","doi":"10.1016/j.comcom.2024.06.015","DOIUrl":"10.1016/j.comcom.2024.06.015","url":null,"abstract":"<div><p>The emergence of autonomous driving technologies has been significantly influenced by advancements in perception systems. Traditional single-agent detection models, while effective in certain scenarios, exhibit limitations in complex environments, necessitating the shift towards collaborative detection models. While numerous studies have investigated the fundamental architecture and primary elements within this domain, comprehensive analyses focusing on the evolution from single-agent-based detection systems to collaborative detection systems are notably absent. This paper provides a comprehensive examination of this transition, delineating the development from single agent to collaborative perception models in autonomous driving. Initially, this paper delves into single-agent detection models, discussing their capabilities, limitations, and application scenarios. Subsequently, the focus shifts to collaborative detection models, which leverage Vehicle-to-Everything (V2X) communication to enhance perception and decision-making in complex environments. Fundamental concepts about mainstream collaborative approaches and mechanisms are reviewed to present the general organization of collaborative detection models. Furthermore, we critically evaluates various collaborative models, comparing their performance, data fusion strategies, and adaptability in dynamic settings. The integration of V2X-enabled Internet-of-Vehicles (IoV) introduces a pivotal evolution in the transition from single-agent-based detection to multi-agent collaborative sensing. This advancement allows for real-time interaction of sensory information between vehicles, augmenting the development of collaborative sensing. However, the interaction of sensory information also increases the load on the network, highlighting the need for strategies that achieve a balance between communication overhead and the improvement in perception capabilities. We concludes with future perspectives, emphasizing the potential issues the development of collaborative detection models will meet and the promising directions for future research.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"225 ","pages":"Pages 96-119"},"PeriodicalIF":4.5,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141638485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04 | DOI: 10.1016/j.comcom.2024.07.001
Wojciech Ciezobka , Maksymilian Wojnar , Krzysztof Rusek , Katarzyna Kosek-Szott , Szymon Szott , Anatolij Zubow , Falko Dressler
Appropriate data rate selection at the physical layer is crucial for Wi-Fi network performance: rates that are too high lead to loss of data frames, while rates that are too low cause increased latency and inefficient channel use. Most existing methods adopt a probing approach and empirically assess the transmission success probability for each available rate. However, a transmission failure can also be caused by frame collisions. Thus, each collision leads to an unnecessary decrease in the data rate. We avoid this issue by resorting to the fine timing measurement (FTM) procedure, part of IEEE 802.11, which allows stations to perform ranging, i.e., measure their spatial distance to the access point (AP). Since distance is not affected by sporadic distortions such as internal and external channel interference, we use this knowledge for data rate selection. Specifically, we propose FTMRate, which applies statistical learning (a form of machine learning) to estimate the distance based on measurements, predicts channel quality from the distance, and selects data rates based on channel quality. We define three distinct estimation approaches: exponential smoothing, Kalman filter, and particle filter. Then, with a thorough performance evaluation using simulations and an experimental validation with real-world devices, we show that our approach has several positive features: it is resilient to collisions, provides near-instantaneous convergence, is compatible with commercial-off-the-shelf devices, and supports pedestrian mobility. Thanks to these features, FTMRate outperforms existing solutions in a variety of line-of-sight scenarios, providing close to optimal results. Additionally, we introduce Hybrid FTMRate, which can intelligently fall back to a probing-based approach to cover non-line-of-sight cases. Finally, we discuss the applicability of the method and its usefulness in various scenarios.
{"title":"Using ranging for collision-immune IEEE 802.11 rate selection with statistical learning","authors":"Wojciech Ciezobka , Maksymilian Wojnar , Krzysztof Rusek , Katarzyna Kosek-Szott , Szymon Szott , Anatolij Zubow , Falko Dressler","doi":"10.1016/j.comcom.2024.07.001","DOIUrl":"https://doi.org/10.1016/j.comcom.2024.07.001","url":null,"abstract":"<div><p>Appropriate data rate selection at the physical layer is crucial for Wi-Fi network performance: too high rates lead to loss of data frames, while too low rates cause increased latency and inefficient channel use. Most existing methods adopt a probing approach and empirically assess the transmission success probability for each available rate. However, a transmission failure can also be caused by frame collisions. Thus, each collision leads to an unnecessary decrease in the data rate. We avoid this issue by resorting to the fine timing measurement (FTM) procedure, part of IEEE 802.11, which allows stations to perform ranging, i.e., measure their spatial distance to the AP. Since distance is not affected by sporadic distortions such as internal and external channel interference, we use this knowledge for data rate selection. Specifically, we propose FTMRate, which applies statistical learning (a form of machine learning) to estimate the distance based on measurements, predicts channel quality from the distance, and selects data rates based on channel quality. We define three distinct estimation approaches: exponential smoothing, Kalman filter, and particle filter. Then, with a thorough performance evaluation using simulations and an experimental validation with real-world devices, we show that our approach has several positive features: it is resilient to collisions, provides near-instantaneous convergence, is compatible with commercial-off-the-shelf devices, and supports pedestrian mobility. Thanks to these features, FTMRate outperforms existing solutions in a variety of line-of-sight scenarios, providing close to optimal results. Additionally, we introduce Hybrid FTMRate, which can intelligently fall back to a probing-based approach to cover non-line-of-sight cases. Finally, we discuss the applicability of the method and its usefulness in various scenarios.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"225 ","pages":"Pages 10-26"},"PeriodicalIF":4.5,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0140366424002317/pdfft?md5=c3e0ee40f8b7376a105dac7c3824d995&pid=1-s2.0-S0140366424002317-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141595395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04 | DOI: 10.1016/j.comcom.2024.06.009
Xinjiao Li , Guowei Wu , Lin Yao , Shisong Geng
Federated learning based on local differential privacy and blockchain can effectively mitigate server-side privacy issues and provide strong privacy against multiple kinds of attacks. However, the actual privacy of users gradually decreases with the frequency of user updates, and the noise introduced by perturbation creates a trade-off between privacy and utility. To enhance user privacy while ensuring data utility, we propose a Hybrid Aggregation mechanism based on Shuffling, Subsampling and Shapley value (HASSS) for federated learning under a blockchain framework. HASSS includes two procedures: private intra-local-domain aggregation and efficient inter-local-domain evaluation. During the private aggregation, the local updates of users are selected and randomized to achieve gradient index privacy and gradient privacy, and are then shuffled and subsampled by shufflers to achieve identity privacy and privacy amplification. During the efficient evaluation, local servers that have aggregated updates within their domains broadcast them to, and receive updates from, other local servers, based on which the contribution of each local server is calculated to select nodes for the global update. Two comprehensive datasets are used to evaluate the performance of HASSS. Simulations show that our scheme can enhance user privacy while ensuring data utility.
{"title":"Hybrid aggregation for federated learning under blockchain framework","authors":"Xinjiao Li , Guowei Wu , Lin Yao , Shisong Geng","doi":"10.1016/j.comcom.2024.06.009","DOIUrl":"10.1016/j.comcom.2024.06.009","url":null,"abstract":"<div><p>Federated learning based on local differential privacy and blockchain can effectively mitigate the privacy issues of server and provide strong privacy against multiple kinds of attack. However, the actual privacy of users gradually decreases with the frequency of user updates, and noises from perturbation cause contradictions between privacy and utility. To enhance user privacy while ensuring data utility, we propose a Hybrid Aggregation mechanism based on Shuffling, Subsampling and Shapley value (HASSS) for federated learning under blockchain framework. HASSS includes two procedures, private intra-local domain aggregation and efficient inter-local domain evaluation. During the private aggregation, the local updates of users are selected and randomized to achieve gradient index privacy and gradient privacy, and then are shuffled and subsampled by shufflers to achieve identity privacy and privacy amplification. During the efficient evaluation, local servers that aggregated updates within domains broadcast and receive updates from other local servers, based on which the contribution of each local server is calculated to select nodes for global update. Two comprehensive sets are applied to evaluate the performance of HASSS. Simulations show that our scheme can enhance user privacy while ensuring data utility.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"225 ","pages":"Pages 311-323"},"PeriodicalIF":4.5,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141709064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-03 | DOI: 10.1016/j.comcom.2024.06.020
Henning Stubbe, Sebastian Gallenmüller, Manuel Simon, Eric Hauser, Dominik Scholz, Georg Carle
The development and roll-out of new Ethernet standards increase the available bandwidths in computer networks. This growth presents significant advantages, enabling novel applications. At the same time, the increase introduces new challenges; higher data rates reduce the available time budget to process each packet. This development also impacts software-defined networks. Their data planes need to keep up with the increased traffic rates. Nevertheless, the control plane must not be ignored: it needs fast reaction times to keep pace with the increased rates the data plane handles.
In our work, we analyze the interaction of a high-performance data plane and different implementations of the control plane. We selected a P4 switching ASIC as our data plane. For the control plane, we investigate vendor-specific implementations and a standardized implementation called P4Runtime. To determine the performance of the control plane, we introduce a novel measurement methodology. This methodology allows measuring the delay between the initiation of rule updates on the control plane and their application on the data plane. We investigate the behavior of the data plane, its performance, and the non-atomicity of updates. Based on our findings, we apply different optimization strategies to improve control plane performance. Our measurements show that neglecting control plane performance may impact network behavior due to delayed updates, but we also show how to minimize this delay and, thereby, its impact. We have released the experiment artifacts of our study, including experiment scripts and measurement data.
{"title":"Exploring Data Plane Updates on P4 Switches with P4Runtime","authors":"Henning Stubbe, Sebastian Gallenmüller, Manuel Simon, Eric Hauser, Dominik Scholz, Georg Carle","doi":"10.1016/j.comcom.2024.06.020","DOIUrl":"https://doi.org/10.1016/j.comcom.2024.06.020","url":null,"abstract":"<div><p>The development and roll-out of new Ethernet standards increase the available bandwidths in computer networks. This growth presents significant advantages, enabling novel applications. At the same time, the increase introduces new challenges; higher data rates reduce the available time budget to process each packet. This development also impacts software-defined networks. Their data planes need to keep up with the increased traffic rates. Nevertheless, the control plane must not be ignored; fast reaction times are necessary to handle the increased rates handled by data planes efficiently.</p><p>In our work, we analyze the interaction of a high-performance data plane and different implementations for the control plane. We selected a P4 switching ASIC as our data plane. For the control plane, we investigate vendor-specific implementations and a standardized implementation called P4Runtime. To determine the performance of the control plane, we introduce a novel measurement methodology. This methodology allows measuring the delay between the initiation of rule updates on the control plane and their application on the data plane. We investigate the behavior of the data plane, its performance and non-atomicity of updates. Based on our findings, we apply different optimization strategies to improve control plane performance. Our measurements show that neglecting the control plane performance may impact network behavior due to delayed updates, but we also show how to minimize this delay and, thereby, its impact. We have released the experiment artifacts of our study including experiment scripts and measurement data.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"225 ","pages":"Pages 44-53"},"PeriodicalIF":4.5,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0140366424002305/pdfft?md5=cbe6a6793a5afc7ad78c96dfb15ffda6&pid=1-s2.0-S0140366424002305-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141595397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-02 | DOI: 10.1016/j.comcom.2024.06.019
Mariam Masmoudi, Ikram Amous, Corinne Amel Zayani, Florence Sèdes
The Social Internet of Things (Social IoT) introduces a fresh approach to promote the usability of IoT networks and enhance service discovery by incorporating social contexts. However, this approach encounters various challenges that impact its performance and reliability. One of the most prominent challenges is trust, specifically trust-related attacks, where certain users engage in malicious behaviors and launch attacks to spread harmful services. To ensure a trustworthy experience for end-users and prevent such attacks in real-time, it is essential to incorporate a trust management mechanism within the Social IoT network. To address this challenge, we propose a novel trust management mechanism that leverages blockchain technology. By integrating this technology, we aim to prevent trust-related attacks and create a secure environment. Additionally, we introduce a new consensus protocol for the blockchain called Spark-based Proof of Trust-related Attacks (SPoTA). This protocol is designed to process stream transactions in real-time using Apache Spark, a distributed stream processing engine. To implement SPoTA, we have developed a new classifier utilizing Spark libraries. This classifier is capable of accurately categorizing transactions as either malicious or secure. As new transaction streams are read, the classifier is employed to classify and assign a label to each stream. This label assists the SPoTA protocol in making informed decisions regarding the validation or rejection of transactions. Our research findings demonstrate the effectiveness of our classifier in predicting malicious transactions, outperforming our previous work and other approaches reported in the literature. Additionally, our new protocol exhibits improved transaction processing times.
{"title":"Real-time prevention of trust-related attacks in social IoT using blockchain and Apache spark","authors":"Mariam Masmoudi , Ikram Amous , Corinne Amel Zayani , Florence Sèdes","doi":"10.1016/j.comcom.2024.06.019","DOIUrl":"https://doi.org/10.1016/j.comcom.2024.06.019","url":null,"abstract":"<div><p>The Social Internet of Things (Social IoT) introduces a fresh approach to promote the usability of IoT networks and enhance service discovery by incorporating social contexts. However, this approach encounters various challenges that impact its performance and reliability. One of the most prominent challenges is trust, specifically trust-related attacks, where certain users engage in malicious behaviors and launch attacks to spread harmful services. To ensure a trustworthy experience for end-users and prevent such attacks in real-time, it is highly significant to incorporate a trust management mechanism within the Social IoT network. To address this challenge, we propose a novel trust management mechanism that leverages blockchain technology. By integrating this technology, we aim to prevent trust-related attacks and create a secure environment. Additionally, we introduce a new consensus protocol for the blockchain called Spark-based Proof of Trust-related Attacks (SPoTA). This protocol is designed to process stream transactions in real-time using Apache Spark, a distributed stream processing engine. To implement SPoTA, we have developed a new classifier utilizing Spark Libraries. This classifier is capable of accurately categorizing transactions as either malicious or secure. As new transaction streams are read, the classifier is employed to classify and assign a label to each stream. This label assists the SPoTA protocol in making informed decisions regarding the validation or rejection of transactions. Our research findings demonstrate the effectiveness of our classifier in predicting malicious transactions, outstripping our previous works and other approaches reported in the literature. Additionally, our new protocol exhibits improved transaction processing times.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"225 ","pages":"Pages 65-82"},"PeriodicalIF":4.5,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141607895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-29 | DOI: 10.1016/j.comcom.2024.06.018
ZhiDong Huang, XiaoFei Wu, ShouBin Dong
Vehicular Edge Computing (VEC) provides a flexible distributed computing paradigm for offloading computations to the vehicular network, which can effectively solve the problem of limited vehicle computing resources and meet the on-vehicle computing requests of users. However, the conflict of interest between vehicle users and service providers means that computing offloading must consider several conflicting optimization goals, and the dynamic nature of vehicular networks, such as vehicle mobility and time-varying network conditions, makes effective offloading of vehicle computing requests and adaptation to complex VEC scenarios challenging. To address these challenges, this paper proposes a multi-objective optimization model suitable for computational offloading in dynamic heterogeneous VEC networks. By formulating the dynamic multi-objective computational offloading problem as a multi-objective Markov Decision Process (MOMDP), this paper designs a novel multi-objective reinforcement learning algorithm, EMOTO, which aims to minimize the average task execution delay and average vehicle energy consumption, and to maximize the revenue of service providers. A preference priority sampling module is proposed, and a model-augmented environment estimator is introduced to learn an environment model for multi-objective optimization, addressing the unstable learning caused by the highly dynamic VEC environment and thereby effectively realizing the joint optimization of multiple objectives while improving the decision-making accuracy and efficiency of the algorithm. Experiments show that EMOTO has superior performance on multiple optimization objectives compared with advanced multi-objective reinforcement learning algorithms. In addition, the algorithm is robust across different environment settings, adapts better to highly dynamic environments, and balances the conflict of interest between vehicle users and service providers.
{"title":"Multi-objective task offloading for highly dynamic heterogeneous Vehicular Edge Computing: An efficient reinforcement learning approach","authors":"ZhiDong Huang, XiaoFei Wu, ShouBin Dong","doi":"10.1016/j.comcom.2024.06.018","DOIUrl":"https://doi.org/10.1016/j.comcom.2024.06.018","url":null,"abstract":"<div><p>Vehicular Edge Computing (VEC) provides a flexible distributed computing paradigm for offloading computations to the vehicular network, which can effectively solve the problem of limited vehicle computing resources and meet the on-vehicle computing requests of users. However, the conflict of interest between vehicle users and service providers leads to the need to consider a variety of conflict optimization goals for computing offloading, and the dynamic nature of vehicle networks, such as vehicle mobility and time-varying network conditions, make the offloading effectiveness of vehicle computing requests and the adaptability to complex VEC scenarios challenging. To address these challenges, this paper proposes a multi-objective optimization model suitable for computational offloading of dynamic heterogeneous VEC networks. By formulating the dynamic multi-objective computational offloading problem as a multi-objective Markov Decision Process (MOMDP), this paper designs a novel multi-objective reinforcement learning algorithm EMOTO, which aims to minimize the average task execution delay and average vehicle energy consumption, and maximize the revenue of service providers. In this paper, a preference priority sampling module is proposed, and a model-augmented environment estimator is introduced to learn the environmental model for multi-objective optimization, so as to solve the problem that the agent is difficult to learn steadily caused by the highly dynamic change of VEC environment, thus to effectively realize the joint optimization of multiple objectives and improve the decision-making accuracy and efficiency of the algorithm. Experiments show that EMOTO has superior performance on multiple optimization objectives compared with advanced multi-objective reinforcement learning algorithms. In addition, the algorithm shows robustness when applied to different environmental settings and better adapting to highly dynamic environments, and balancing the conflict of interest between vehicle users and service providers.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"225 ","pages":"Pages 27-43"},"PeriodicalIF":4.5,"publicationDate":"2024-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141595396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-28 | DOI: 10.1016/j.comcom.2024.06.014
Zhongcheng Wei , Wei Chen , Weitao Tao , Shuli Ning , Bin Lian , Xiang Sun , Jijun Zhao
With the advancement of wireless sensing technology, human identification based on WiFi sensing has garnered significant attention in the fields of human-computer interaction and home security. Despite the initial success of WiFi-sensing-based human identification in fixed environments, the performance of a trained identity sensing model degrades severely when applied to unfamiliar environments. In this paper, a cross-domain human identification system (CATFSID) is proposed, which can migrate a trained model to a new environment using at most three samples per user (3-shot). CATFSID utilizes a dual adversarial training network, including cross-adversarial training between source-domain classifiers and adversarial training between source- and target-domain discriminators, to extract environment-independent identity features. A method based on pseudo-label prediction, which assigns labels to target-domain samples that resemble source-domain samples, reduces the distribution bias of identity features between the source and target domains. The experimental results show an accuracy of 90.1% and an F1-score of 89.33% when using three samples per user in the new environment.
{"title":"CATFSID: A few-shot human identification system based on cross-domain adversarial training","authors":"Zhongcheng Wei , Wei Chen , Weitao Tao , Shuli Ning , Bin Lian , Xiang Sun , Jijun Zhao","doi":"10.1016/j.comcom.2024.06.014","DOIUrl":"https://doi.org/10.1016/j.comcom.2024.06.014","url":null,"abstract":"<div><p>With the advancement of wireless sensing technology, human identification based on WiFi sensing has garnered significant attention in the fields of human–computer interaction and home security. Despite the initial success of WiFi sensing based human identification when the environment is fixed, the performance of the trained identity sensing model will be severely degraded when applied to unfamiliar environments. In this paper, a cross-domain human identification system (CATFSID) is proposed, which is able to achieve environment migration of trained model using up to 3-shot. CATFSID utilizes a dual adversarial training network, including cross-adversarial training between source and source domain classifiers, and adversarial training between source and target domain discriminators to extract environment-independent identity features. Introducing a method based on pseudo-label prediction, which assigns labels to target domain samples similar to the source domain samples, reduces the distribution bias of identity features between the source and target domains. The experimental results show accuracy of 90.1% and F1-<em>Score</em> of 89.33% when using 3 samples per user in the new environment.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"224 ","pages":"Pages 275-284"},"PeriodicalIF":4.5,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141542835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-28 | DOI: 10.1016/j.comcom.2024.06.016
Hongshuo Lyu , Jing Liu , Yingxu Lai , Beifeng Mao , Xianting Huang
With an increase in the complexity and scale of networks, cybersecurity faces increasingly severe challenges. For instance, an attacker can combine individual attacks into complex multi-stage attacks to infiltrate targets. Traditional intrusion detection systems (IDS) generate a large number of alerts during an attack, including attack clues along with many false positives. Furthermore, due to the complexity and variability of attacks, security analysts spend considerable time and effort on discovering attack paths. Existing methods rely on attack knowledge bases or predefined correlation rules but can only identify known attacks. To address these limitations, this paper presents an attack correlation and scenario reconstruction method. We transform the abnormal flows corresponding to the alerts into an abnormal-states relationship graph (ASR-graph) and automatically correlate attacks through graph aggregation and clustering. We also implement an attack path search algorithm to mine attack paths and trace the attack process. This method does not rely on prior knowledge; thus, it adapts well to changing attack plans, making it effective in correlating unknown attacks and identifying attack paths. Evaluation results show that the proposed method has higher accuracy and effectiveness than existing methods.
{"title":"AGCM: A multi-stage attack correlation and scenario reconstruction method based on graph aggregation","authors":"Hongshuo Lyu , Jing Liu , Yingxu Lai , Beifeng Mao , Xianting Huang","doi":"10.1016/j.comcom.2024.06.016","DOIUrl":"https://doi.org/10.1016/j.comcom.2024.06.016","url":null,"abstract":"<div><p>With an increase in the complexity and scale of networks, cybersecurity faces increasingly severe challenges. For instance, an attacker can combine individual attacks into complex multi-stage attacks to infiltrate targets. Traditional intrusion detection systems (IDS) generate large number of alerts during an attack, including attack clues along with many false positives. Furthermore, due to the complexity and changefulness of attacks, security analysts spend considerable time and effort on discovering attack paths. Existing methods rely on attack knowledgebases or predefined correlation rules but can only identify known attacks. To address these limitations, this paper presents an attack correlation and scenario reconstruction method. We transform the abnormal flows corresponding to the alerts into abnormal states relationship graph (ASR-graph) and automatically correlate attacks through graph aggregation and clustering. We also implemented an attack path search algorithm to mine attack paths and trace the attack process. This method does not rely on prior knowledge; thus, it can well adapt to the changed attack plan, making it effective in correlating unknown attacks and identifying attack paths. Evaluation results show that the proposed method has higher accuracy and effectiveness than existing methods.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"224 ","pages":"Pages 302-313"},"PeriodicalIF":4.5,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141542834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}