Robust speech command recognition in challenging industrial environments
Pub Date : 2024-09-02 DOI: 10.1016/j.comcom.2024.107938
Stefano Bini, Vincenzo Carletti, Alessia Saggese, Mario Vento
Speech is among the main forms of communication between humans and robots in industrial settings, being the most natural way for a human worker to issue commands. However, pervasive and loud environmental noise poses significant challenges to the adoption of Speech-Command Recognition systems onboard manufacturing robots; indeed, they are expected to run in real time on hardware with limited computational capabilities while remaining robust and accurate in such complex environments. In this paper, we propose an innovative system based on an End-to-End architecture with a Conformer backbone. Our system is specifically designed to achieve high accuracy in noisy industrial environments and to impose a minimal computational burden, meeting stringent real-time requirements while running on computing devices embedded in robots. To increase the generalization capability of the system, the training procedure is driven by a Curriculum Learning strategy combined with dynamic data augmentation techniques that progressively increase the complexity of input samples by raising the noise level during the training phase. We have conducted extensive experimentation to assess the effectiveness of our system, using a dataset composed of more than 50,000 samples, of which about 2,000 were acquired during the daily operations of a Stellantis Italian factory. The results confirm the suitability of the proposed approach for adoption in a real industrial environment; indeed, it achieves, on both English and Italian commands, an accuracy higher than 90%, while maintaining a compact model size (the network is 1.81 MB) and running in real time on an industrial embedded device (namely, 41 ms over an NVIDIA Xavier NX).
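A minimal sketch of the curriculum idea described in the abstract: during training, each clean command clip is mixed with environmental noise at a signal-to-noise ratio that decreases (i.e., gets harder) as epochs progress. The linear schedule and its endpoints (snr_start_db, snr_end_db) are illustrative assumptions, not the paper's exact settings.

```python
# Curriculum noise augmentation: anneal the mixing SNR from easy to hard.
import numpy as np

def snr_for_epoch(epoch: int, total_epochs: int,
                  snr_start_db: float = 20.0, snr_end_db: float = 0.0) -> float:
    """Linearly anneal the target SNR from easy (high) to hard (low)."""
    frac = epoch / max(total_epochs - 1, 1)
    return snr_start_db + frac * (snr_end_db - snr_start_db)

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the noise so the mixture has the requested SNR, then add it."""
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    target_noise_power = clean_power / (10 ** (snr_db / 10))
    scale = np.sqrt(target_noise_power / noise_power)
    return clean + scale * noise

# Usage: progressively harder augmented samples, one schedule step per epoch.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)          # stand-in for a 1 s command clip
noise = rng.standard_normal(16000)          # stand-in for factory noise
for epoch in range(5):
    snr = snr_for_epoch(epoch, total_epochs=5)
    batch = mix_at_snr(clean, noise, snr)   # would be fed to the Conformer here
    print(f"epoch {epoch}: mixing at {snr:.1f} dB SNR")
```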
{"title":"Robust speech command recognition in challenging industrial environments","authors":"Stefano Bini, Vincenzo Carletti, Alessia Saggese, Mario Vento","doi":"10.1016/j.comcom.2024.107938","DOIUrl":"10.1016/j.comcom.2024.107938","url":null,"abstract":"<div><p>Speech is among the main forms of communication between humans and robots in industrial settings, being the most natural way for a human worker to issue commands. However, the presence of pervasive and loud environmental noise poses significant challenges to the adoption of Speech-Command Recognition systems onboard manufacturing robots; indeed, they are expected to perform in real time on hardware with limited computational capabilities and also to be robust and accurate in such complex environments. In this paper, we propose an innovative system based on an End-to-End architecture with a Conformer backbone. Our system is specifically designed to achieve high accuracy in noisy industrial environments and to guarantee a minimal computational burden to meet stringent real-time requirements while running on computing devices that are embedded in robots. In order to increase the generalization capability of the system, the training procedure is driven by a Curriculum Learning strategy combined with dynamic data augmentation techniques, that progressively increase the complexity of input samples by increasing the noise during the training phase. We have conducted extensive experimentation to assess the effectiveness of our system, using a dataset composed of more than 50,000 samples, of which about 2,000 have been acquired during the daily operations of a Stellantis Italian factory. The results confirm the suitability of the proposed approach to be adopted in a real industrial environment; indeed, it is able to achieve, on both English and Italian commands, an accuracy higher than 90%, maintaining a compact model size (the network is 1.81 <span><math><mrow><mi>M</mi><mi>B</mi></mrow></math></span>) and running in real-time on an industrial embedded device (namely <span><math><mrow><mn>41</mn><mspace></mspace><mi>ms</mi></mrow></math></span> over an NVIDIA Xavier NX).</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107938"},"PeriodicalIF":4.5,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142136671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Locally verifiable approximate multi-member quantum threshold aggregation digital signature scheme
Pub Date : 2024-08-30 DOI: 10.1016/j.comcom.2024.107934
Zixuan Lu, Qingshui Xue, Tianhao Zhang, Jiewei Cai, Jing Han, Yixun He, Yinhang Li
Locally verifiable aggregate signature primitives can reduce the complexity of aggregate signature verification by computing locally open algorithms that generate auxiliary parameters. However, recent breakthroughs in quantum computing indicate that quantum computers may eventually break the security of aggregate signature schemes based on traditional hardness assumptions. To address these problems, this paper proposes, for the first time, a locally verifiable multi-member quantum threshold aggregate digital signature scheme, based on the property that the verification of quantum coset states is a projection on the trans-subspace. Combined with the idea of auxiliary parameter generation from traditional locally verifiable aggregate signatures, the scheme brings aggregation to current threshold quantum digital signatures and reduces the complexity of aggregate signature verification while achieving post-quantum security. In addition, verifying the signature key (a quantum state) of the signing members does not require measurement operations, and the generated signatures are classical; hence all communication between the trusted third center (TC), the set of signing members, the classical digital signature verifier (CV), and the third-party trusted aggregation generator (TA) is classical, simplifying the communication model. The performance analysis shows that this quantum aggregate signature scheme is more flexible and requires less quantum state preparation than other schemes.
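A toy, non-cryptographic sketch of the "local opening" flow described above: an aggregator commits to all member signatures, and the auxiliary parameter for one member lets a verifier check that member's signature against the aggregate without the others. Here a Merkle tree stands in for the aggregation, with the Merkle path as the auxiliary parameter; the actual scheme uses quantum coset states, so this only illustrates the classical verification structure, not its security.

```python
# Toy locally verifiable aggregation: Merkle root = aggregate,
# Merkle path = auxiliary parameter produced by the "local open" algorithm.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all tree levels, leaves first; duplicates the last node on odd levels."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def local_open(levels, index):
    """Auxiliary parameter: sibling hashes along the path to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))   # (sibling, am-I-right-child)
        index //= 2
    return path

def local_verify(aggregate_root, leaf, path):
    node = leaf
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == aggregate_root

# Usage: four members "sign"; the verifier checks member 2 locally.
sigs = [h(f"signature-{i}".encode()) for i in range(4)]
levels = build_tree(sigs)
root = levels[-1][0]
aux = local_open(levels, 2)
print(local_verify(root, sigs[2], aux))   # True
```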
{"title":"Locally verifiable approximate multi-member quantum threshold aggregation digital signature scheme","authors":"Zixuan Lu, Qingshui Xue, Tianhao Zhang, Jiewei Cai, Jing Han, Yixun He, Yinhang Li","doi":"10.1016/j.comcom.2024.107934","DOIUrl":"10.1016/j.comcom.2024.107934","url":null,"abstract":"<div><p>Locally verifiable aggregate signature primitives can reduce the complexity of aggregate signature verification by computing locally open algorithms to generate auxiliary parameters. However, the breakthrough results of quantum computers at this stage indicate that it will be possible for quantum computers to break through the security of traditional hardness-based aggregated signature schemes. In order to solve the above problems, this paper proposes for the first time a new locally verifiable class of multi-member quantum threshold aggregated digital signature scheme based on the property that the verification of quantum coset states is a projection on the trans-subspace. Combined with the idea of auxiliary parameter generation in traditional locally verifiable aggregated signatures, it makes the current stage of threshold quantum digital signatures realize the aggregated features, and reduces the complexity of the verification of aggregated signatures while realizing post-quantum security. In addition, the verification of the signature key (quantum state) of the signature members does not require measurement operations, and the generated signatures are classical, so the communication between the trusted third center (TC), the set of signature members, the classical digital signature verifier (CV), and the third-party trusted aggregation generator (TA) are all classical, simplifying the communication model. In the performance analysis we make this quantum aggregation signature scheme more flexible as well as less quantum state preparation compared to other schemes.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107934"},"PeriodicalIF":4.5,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142136670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generative adversarial imitation learning assisted virtual network embedding algorithm for space-air-ground integrated network
Pub Date : 2024-08-30 DOI: 10.1016/j.comcom.2024.107936
Peiying Zhang , Ziyu Xu , Neeraj Kumar , Jian Wang , Lizhuang Tan , Ahmad Almogren
The space-air-ground integrated network (SAGIN) comprises a multitude of interconnected and integrated heterogeneous networks; it is large in scale, complex in structure, and highly dynamic. Virtual network embedding (VNE) is designed to efficiently allocate physical resources to diverse virtual network requests (VNRs) with different constraints while improving the acceptance ratio of VNRs. However, in a heterogeneous SAGIN environment, improving the utilization of network resources while ensuring the performance of the VNE algorithm is very challenging. To address these issues, we first introduce a services diversion strategy (SDS) that selects embedding nodes based on service type and network state, thereby alleviating the uneven use of resources across network domains. We then propose a VNE algorithm (GAIL-VNE) based on generative adversarial imitation learning (GAIL). We construct a generator network based on the actor-critic architecture, which outputs the probability of each physical node being embedded given the observed network state, and a discriminator network that distinguishes generator samples from expert samples and aids in updating the generator. After offline training, the generator and discriminator reach a Nash equilibrium through adversarial play. During the embedding of VNRs, the generator's output provides an effective basis for constructing VNE solutions. Finally, we verify the effectiveness of this method through experiments involving offline training and online embedding.
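A minimal sketch of the GAIL update loop described above, using a logistic-regression discriminator and a softmax "generator" policy over candidate physical nodes. The stand-in features, learning rates, and the plain policy-gradient update (in place of the paper's actor-critic generator) are all illustrative assumptions; only the adversarial skeleton is the point.

```python
# GAIL skeleton: discriminator labels expert vs. generated (state, action)
# pairs; the generator is rewarded for pairs the discriminator mistakes
# for expert behaviour.
import numpy as np

rng = np.random.default_rng(1)
DIM, NODES = 4, 4
w_d = np.zeros(DIM)                 # discriminator weights
theta = np.zeros((NODES, DIM))      # generator: one score row per node

def features(node: int) -> np.ndarray:
    """Stand-in (state, action) features for embedding on `node`."""
    x = rng.standard_normal(DIM) * 0.1
    x[node] += 1.0
    return x

def disc(x):                        # D(x) = probability "expert"
    return 1.0 / (1.0 + np.exp(-(w_d @ x)))

expert_nodes = [0, 1]               # expert demonstrations prefer these nodes

for step in range(500):
    state = rng.standard_normal(DIM)
    logits = theta @ state          # generator samples from its softmax policy
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    gen_node = rng.choice(NODES, p=probs)
    x_gen = features(gen_node)
    x_exp = features(rng.choice(expert_nodes))
    # Discriminator ascent on logistic log-likelihood: expert = 1, generator = 0.
    for x, label in ((x_exp, 1.0), (x_gen, 0.0)):
        w_d += 0.05 * (label - disc(x)) * x
    # Generator policy gradient with GAIL reward r = -log(1 - D(x_gen)).
    reward = -np.log(1.0 - disc(x_gen) + 1e-8)
    grad_log = -probs[:, None] * state      # d log pi / d theta, all rows
    grad_log[gen_node] += state
    theta += 0.01 * reward * grad_log

state = rng.standard_normal(DIM)
logits = theta @ state
probs = np.exp(logits - logits.max()); probs /= probs.sum()
print("probability mass on expert-preferred nodes:", round(float(probs[:2].sum()), 3))
```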
{"title":"Generative adversarial imitation learning assisted virtual network embedding algorithm for space-air-ground integrated network","authors":"Peiying Zhang , Ziyu Xu , Neeraj Kumar , Jian Wang , Lizhuang Tan , Ahmad Almogren","doi":"10.1016/j.comcom.2024.107936","DOIUrl":"10.1016/j.comcom.2024.107936","url":null,"abstract":"<div><p>The space-air-ground integrated network (SAGIN) comprises a multitude of interconnected and integrated heterogeneous networks. Its network is large in scale, complex in structure, and highly dynamic. Virtual network embedding (VNE) is designed to efficiently allocate resources within the physical host to diverse virtual network requests (VNRs) with different constraints while improving the acceptance ratio of VNRs. However, in a heterogeneous SAGIN environment, improving the utilization of network resources while ensuring the performance of the VNE algorithm is a very challenging topic. To address the aforementioned issues, we first introduce a services diversion strategy (SDS) to select embedded nodes based on different service types and network state, thereby alleviating the uneven use of resources in different network domains. Subsequently, we propose a VNE algorithm (GAIL-VNE) based on generative adversarial imitation learning (GAIL). We construct a generator network based on the actor-critic architecture, which can generate the probability of physical nodes being embedded based on the observed network state. Secondly, we construct a discriminator network to distinguish between generator samples and expert samples, which aids in updating the generator network. After offline training, the generator and discriminator reach a Nash equilibrium through game confrontation. During the embedding process of VNRs, the output of the generator provides an effective basis for generating VNE solutions. Finally, we verify the effectiveness of this method through experiments involving offline training and online embedding.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107936"},"PeriodicalIF":4.5,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142149309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cooperative edge-caching based transmission with minimum effective delay in heterogeneous cellular networks
Pub Date : 2024-08-28 DOI: 10.1016/j.comcom.2024.107928
Jiachao Yu, Chao Zhai, Hao Dai, Lina Zheng, Yujun Li
In heterogeneous cellular networks (HCNs), neighboring users often request similar contents asynchronously. Based on content popularity, base stations (BSs) can download and cache contents when the network is idle and transmit them locally when the network is busy, which effectively reduces the backhaul burden and the transmission delay. We consider a two-tier HCN, where macro base stations (MBSs) and small base stations (SBSs) can cooperatively and probabilistically cache contents. Each user is associated with the BS providing the maximum average received signal power in either tier. With cooperative content transfer between the MBS tier and the SBS tier, users can adaptively obtain contents from BSs or remote content servers. We model both the wired and wireless delays incurred when a user requests an arbitrary content, and propose the concept of effective delay. Content caching probabilities are optimized using the Marine Predators Algorithm by minimizing the average effective delay. Numerical results show that our proposed cooperative caching scheme achieves much shorter delays than the benchmark caching schemes.
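A back-of-the-envelope sketch of the kind of average-effective-delay objective described above: popularity follows a Zipf law, each content is cached probabilistically at each tier, and a miss pays an extra wired backhaul delay. The delay constants, the Zipf exponent, and the independence assumptions are illustrative, not the paper's model; the Marine Predators Algorithm would optimize the caching probabilities that this function merely evaluates.

```python
# Evaluate average effective delay for a given probabilistic caching policy.
import numpy as np

K = 50                                   # catalogue size
gamma = 0.8                              # Zipf exponent (assumed)
pop = 1.0 / np.arange(1, K + 1) ** gamma
pop /= pop.sum()                         # request probability per content

d_sbs, d_mbs, d_backhaul = 5.0, 12.0, 60.0   # ms, assumed constants

def avg_effective_delay(q_sbs: np.ndarray, q_mbs: np.ndarray) -> float:
    """Average delay when the SBS is tried first, then the MBS, then the server."""
    hit_sbs = q_sbs
    hit_mbs = (1 - q_sbs) * q_mbs
    miss = (1 - q_sbs) * (1 - q_mbs)
    per_content = hit_sbs * d_sbs + hit_mbs * d_mbs + miss * (d_backhaul + d_mbs)
    return float(pop @ per_content)

# Usage: same cache budget, two policies.
cache_top = np.where(np.arange(K) < 10, 1.0, 0.0)   # cache the 10 most popular
uniform = np.full(K, 10 / K)                        # spread the budget evenly
print("top-10 caching :", avg_effective_delay(cache_top, cache_top))
print("uniform caching:", avg_effective_delay(uniform, uniform))
```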
{"title":"Cooperative edge-caching based transmission with minimum effective delay in heterogeneous cellular networks","authors":"Jiachao Yu, Chao Zhai, Hao Dai, Lina Zheng, Yujun Li","doi":"10.1016/j.comcom.2024.107928","DOIUrl":"10.1016/j.comcom.2024.107928","url":null,"abstract":"<div><p>In heterogeneous cellular networks (HCNs), neighboring users often request similar contents asynchronously. Based on the content popularity, base stations (BSs) can download and cache contents when the network is idle, and transmit them locally when the network is busy, which can effectively reduce the backhaul burden and the transmission delay. We consider a two-tier HCN, where macro base stations (MBSs) and small base stations (SBSs) can cooperatively and probabilistically cache contents. Each user is associated to the BS with the maximum average received signal power in any tier. With the cooperative content transfer between MBS tier and SBS tier, users can adaptively obtain contents from BSs or remote content servers. We properly model both wired and wireless delays when a user requests an arbitrary content, and propose the concept of effective delay. Content caching probabilities are optimized using the Marine Predators Algorithm via minimizing the average effective delay. Numerical results show that our proposed cooperative caching scheme achieves much shorter delays than the benchmark caching schemes.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107928"},"PeriodicalIF":4.5,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142129170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
aBBR: An augmented BBR for collaborative intelligent transmission over heterogeneous networks in IIoT
Pub Date : 2024-08-28 DOI: 10.1016/j.comcom.2024.107932
Shujie Yang , Kefei Song , Zhenhui Yuan , Lujie Zhong , Mu Wang , Xiang Ji , Changqiao Xu
In the era of Industry 5.0, with the deep convergence of the Industrial Internet of Things (IIoT) and 5G technology, stable transmission of massive data over heterogeneous networks becomes crucial. It is not only key to improving the efficiency of human-machine collaboration, but also the basis for ensuring system continuity and reliability. The arrival of 5G has brought new challenges to IIoT communication in heterogeneous environments: due to the inherent characteristics of wireless networks, such as random packet loss and network jitter, traditional transmission control schemes often fail to achieve optimal performance. In this paper we propose a novel transmission control algorithm, aBBR, an augmented algorithm based on BBRv3 that dynamically adjusts the sending window size through real-time analysis to enhance transmission performance in heterogeneous networks. Simulation results show that, compared to traditional algorithms, aBBR demonstrates the best overall performance in terms of throughput, latency, and retransmission. When random packet loss exists on the link, aBBR improves throughput by an average of 29.3% and decreases the retransmission rate by 18.5% while keeping the transmission delay at the same level as BBRv3.
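The abstract does not give aBBR's control law; the sketch below only illustrates the general idea of adjusting a sending window from real-time measurements: grow toward the estimated bandwidth-delay product, and back off only when RTT inflation suggests genuine queuing, so that isolated random losses on the wireless hop do not collapse the window. All thresholds and gains here are assumptions.

```python
# Window adjustment that distinguishes random wireless loss from congestion.
def adjust_window(cwnd: float, bw_est: float, rtt_ms: float,
                  min_rtt_ms: float, loss_seen: bool) -> float:
    bdp = bw_est * min_rtt_ms / 1000.0          # bytes in flight at the BDP
    queuing = rtt_ms > 1.25 * min_rtt_ms        # RTT inflated -> real congestion
    if loss_seen and queuing:
        return max(0.85 * cwnd, 2 * 1448)       # congestive loss: back off
    if loss_seen and not queuing:
        return cwnd                             # random wireless loss: hold
    return min(1.1 * cwnd, 1.5 * bdp)           # probe upward, capped near BDP

# Usage: a random loss with a flat RTT leaves the window untouched.
cwnd = 80_000.0
cwnd = adjust_window(cwnd, bw_est=2.5e6, rtt_ms=21.0, min_rtt_ms=20.0, loss_seen=True)
print(cwnd)   # unchanged: the loss is treated as random, not congestive
```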
{"title":"aBBR: An augmented BBR for collaborative intelligent transmission over heterogeneous networks in IIoT","authors":"Shujie Yang , Kefei Song , Zhenhui Yuan , Lujie Zhong , Mu Wang , Xiang Ji , Changqiao Xu","doi":"10.1016/j.comcom.2024.107932","DOIUrl":"10.1016/j.comcom.2024.107932","url":null,"abstract":"<div><p>In the era of Industry 5.0, with the deep convergence of Industrial Internet of Things (IIoT) and 5G technology, stable transmission of massive data in heterogeneous networks becomes crucial. This is not only the key to improving the efficiency of human–machine collaboration, but also the basis for ensuring system continuity and reliability. The arrival of 5G has brought new challenges to the communication of IIoT in heterogeneous environments. Due to the inherent characteristics of wireless networks, such as random packet loss and network jitter, traditional transmission control schemes often fail to achieve optimal performance. In this paper we propose a novel transmission control algorithm, aBBR. It is an augmented algorithm based on BBRv3. aBBR dynamically adjusts the sending window size through real-time analysis to enhance the transmission performance in heterogeneous networks. Simulation results show that, compared to traditional algorithms, aBBR demonstrates the best comprehensive performance in terms of throughput, latency, and retransmission. When random packet loss exists in the link, aBBR improves the throughput by an average of 29.3% and decreases the retransmission rate by 18.5% while keeping the transmission delay at the same level as BBRv3.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"226 ","pages":"Article 107932"},"PeriodicalIF":4.5,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142097138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on maximizing real demand response based on link addition in social networks
Pub Date : 2024-08-27 DOI: 10.1016/j.comcom.2024.107933
Yuxin Gao , Jianming Zhu , Peikun Ni
The impact of social networks on real-life scenarios is intensifying as the information they disseminate diversifies; consequently, the interconnection between social networks and tangible networks is strengthening. Notably, we have observed that messages disseminated on social networks, particularly those soliciting aid, exert a significant influence on the underlying network structure. This study investigates the role and importance of social networks in the information dissemination process, and constructs a linear threshold model, built on conventional models of information spread, tailored for the dissemination of emergency information across both real and social networks. We develop a model that adds connection edges to social networks in order to enhance their value. We also show that the objective function is submodular and that the resulting problem is NP-hard; as a result, we can solve it with an algorithm carrying an approximation guarantee of 1 - e^{-1} - θ', ensuring the accuracy of the solution. We also analyze the complexity of the algorithm. Finally, we validate our conclusions on three publicly available datasets and one real dataset and analyze the results.
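A compact sketch of the standard greedy routine behind (1 - e^{-1} - θ')-style guarantees: repeatedly add the candidate edge with the largest marginal gain of a monotone submodular objective. Here the objective is a toy one-hop coverage function; the paper's linear-threshold spread would instead be estimated by simulation, which is where the extra θ' error term comes from.

```python
# Greedy edge selection for a monotone submodular objective (toy coverage).
from itertools import combinations

def coverage(edges, seeds, nodes):
    """Number of nodes in the seed set or adjacent to it."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    covered = set(seeds)
    for s in seeds:
        covered |= adj[s]
    return len(covered)

def greedy_link_addition(base_edges, candidates, seeds, nodes, k):
    chosen = []
    for _ in range(k):
        best, best_gain = None, 0
        cur = coverage(base_edges + chosen, seeds, nodes)
        for e in candidates:
            if e in chosen:
                continue
            gain = coverage(base_edges + chosen + [e], seeds, nodes) - cur
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break                      # no edge improves the objective
        chosen.append(best)
    return chosen

# Usage: pick 2 new edges that most expand the reach of seed node 0.
nodes = list(range(6))
base = [(0, 1), (2, 3), (4, 5)]
candidates = list(combinations(nodes, 2))
print(greedy_link_addition(base, candidates, seeds=[0], nodes=nodes, k=2))
```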
{"title":"Research on maximizing real demand response based on link addition in social networks","authors":"Yuxin Gao , Jianming Zhu , Peikun Ni","doi":"10.1016/j.comcom.2024.107933","DOIUrl":"10.1016/j.comcom.2024.107933","url":null,"abstract":"<div><p>The impact of social networks on real-life scenarios is intensifying with the diversification of the information they disseminate. Consequently, the interconnection between social networks and tangible networks is strengthening. Notably, we have observed that messages disseminated on social networks, particularly those soliciting aid, exert a significant influence on the underlying network structure. This study aims to investigate the role and importance of social networks in the information dissemination process, as well as to construct a linear threshold model tailored for the dissemination of emergency information across both real and social networks, leveraging conventional models of information spread. We have developed a model to increase the number of connection edges in social networks in order to enhance their worth. Additionally, we discovered that the objective function possesses submodular features and thus the created problem is NP-hard. As a result, we can use algorithms with approximative assurances of <span><math><mrow><mn>1</mn><mo>−</mo><msup><mrow><mi>e</mi></mrow><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>−</mo><msup><mrow><mi>θ</mi></mrow><mrow><mo>′</mo></mrow></msup></mrow></math></span> to solve our problem and ensures the accuracy of the solution. We also analyze the complexity of the algorithm in solving this problem. Finally we validated our conclusions with three publicly available datasets and one real data set to analysis the results of the solution.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107933"},"PeriodicalIF":4.5,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142129171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
5G configured grant scheduling for seamless integration with TSN industrial networks
Pub Date : 2024-08-26 DOI: 10.1016/j.comcom.2024.107930
Ana Larrañaga-Zumeta , M. Carmen Lucas-Estañ , Javier Gozálvez , Aitor Arriola
The integration of 5G (5th Generation) and TSN (Time-Sensitive Networking) networks is key to supporting emerging Industry 4.0 applications, where the flexibility and adaptability of 5G will be combined with the deterministic communication features provided by TSN. For an effective and efficient 5G-TSN integration, both networks need to be coordinated; however, 5G has not been designed to provide deterministic communications. In this context, this paper proposes a 5G configured grant scheduling scheme that coordinates its decisions with the TSN scheduling to satisfy the deterministic, end-to-end latency requirements of industrial applications. The proposed scheme avoids the scheduling conflicts that can arise when packets of different TSN flows are generated with different periodicities, efficiently coordinates the access of different TSN flows to the radio resources, and complies with the 3GPP (Third Generation Partnership Project) standard requirements.
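A small sketch of the conflict the scheme avoids: periodic flows with different periods can collide on the same configured-grant occasion, and the collision pattern repeats over the hyperperiod (the lcm of the periods). The sketch greedily assigns each flow an offset whose grant occasions are collision-free across the hyperperiod; the slot granularity and the greedy rule are illustrative assumptions, not the paper's scheduler.

```python
# Conflict-free configured-grant offsets for periodic flows over a slot grid.
from math import lcm

def occasions(period: int, offset: int, horizon: int):
    return set(range(offset, horizon, period))

def assign_offsets(periods):
    horizon = lcm(*periods)                        # hyperperiod in slots
    taken, offsets = set(), []
    for period in periods:
        for off in range(period):                  # try each candidate offset
            occ = occasions(period, off, horizon)
            if not occ & taken:                    # no slot already granted
                taken |= occ
                offsets.append(off)
                break
        else:
            raise ValueError(f"no conflict-free offset for period {period}")
    return offsets

# Three flows with 2-, 4- and 8-slot periods share the grid without clashes.
print(assign_offsets([2, 4, 8]))   # [0, 1, 3]
```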
{"title":"5G configured grant scheduling for seamless integration with TSN industrial networks","authors":"Ana Larrañaga-Zumeta , M. Carmen Lucas-Estañ , Javier Gozálvez , Aitor Arriola","doi":"10.1016/j.comcom.2024.107930","DOIUrl":"10.1016/j.comcom.2024.107930","url":null,"abstract":"<div><p>The integration of 5G (5th Generation) and TSN (Time Sensitive Networking) networks is key for the support of emerging Industry 4.0 applications, where the flexibility and adaptability of 5G will be combined with the deterministic communications features provided by TSN. For an effective and efficient 5G-TSN integration both networks need to be coordinated. However, 5G has not been designed to provide deterministic communications. In this context, this paper proposes a 5G configured grant scheduling scheme that coordinates its decision with the TSN scheduling to satisfy the deterministic and end-to-end latency requirements of industrial applications. The proposed scheme avoids scheduling conflicts that can happen when packets of different TSN flows are generated with different periodicities. The proposed scheme efficiently coordinates the access to the radio resources of different TSN flows and complies with the 3GPP (Third Generation Partnership Project) standard requirements.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"226 ","pages":"Article 107930"},"PeriodicalIF":4.5,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142088592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DTL-5G: Deep transfer learning-based DDoS attack detection in 5G and beyond networks
Pub Date : 2024-08-24 DOI: 10.1016/j.comcom.2024.107927
Behnam Farzaneh , Nashid Shahriar , Abu Hena Al Muktadir , Md. Shamim Towhid , Mohammad Sadegh Khosravani
Network slicing is considered a key enabler for 5G and beyond mobile networks, supporting a variety of new services, including enhanced mobile broadband, ultra-reliable and low-latency communication, and massive connectivity, on the same physical infrastructure. However, this technology increases the susceptibility of networks to cyber threats, particularly Distributed Denial-of-Service (DDoS) attacks. These attacks can degrade service quality by overloading the network function(s) that network slices depend on to operate seamlessly. This calls for an Intrusion Detection System (IDS) as a shield against a wide array of DDoS attacks. One promising solution is the use of Deep Learning (DL) models for detecting possible DDoS attacks, an approach that has already made its way into the field given its manifest effectiveness. However, one particular challenge with DL models is that they require large volumes of labeled data for efficient training, which are not readily available in operational networks. A possible workaround is to resort to Transfer Learning (TL) approaches that can carry the knowledge learned in prior training over to a target domain with limited labeled data. This paper investigates how Deep Transfer Learning (DTL) based approaches can improve the detection of DDoS attacks in 5G networks, leveraging DL models such as Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), Residual Network (ResNet), and Inception as base models. A comprehensive dataset generated in our 5G network slicing testbed serves as the source dataset for DTL; it includes both benign traffic and several types of DDoS attack traffic. After learning features, patterns, and representations from the source dataset in an initial training stage, we fine-tune the base models using a variety of TL processes on a target DDoS attack dataset. The 5G-NIDD dataset, which contains a sparse amount of annotated traffic for several DDoS attacks generated in a real 5G network, is chosen as the target dataset. The results show that the proposed DTL models improve the detection of different types of DDoS attacks in the 5G-NIDD dataset compared to the case where no TL is applied, with the BiLSTM and Inception models identified as the top performers. BiLSTM improves accuracy, recall, and F1-score by 13.90%, 21.48%, and 12.22%, respectively, whereas Inception improves precision by 10.09%, compared to the models that do not adopt TL.
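A minimal PyTorch sketch of the DTL step described above: a model pretrained on abundant source traffic is reused on the scarce target traffic by freezing its feature extractor and retraining only a new classification head. The tiny MLP and the random tensors stand in for the paper's BiLSTM/CNN backbones and real flow features; the freeze-then-fine-tune recipe is one common TL process, not necessarily the paper's exact one.

```python
# Transfer learning: freeze source-trained features, fine-tune a new head.
import torch
import torch.nn as nn

torch.manual_seed(0)
feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                                  nn.Linear(64, 64), nn.ReLU())
# ... pretend feature_extractor was trained on the large source dataset ...

for p in feature_extractor.parameters():
    p.requires_grad = False                       # freeze source knowledge

head = nn.Linear(64, 2)                           # new head: benign vs. DDoS
model = nn.Sequential(feature_extractor, head)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x_target = torch.randn(128, 32)                   # scarce labeled target flows
y_target = torch.randint(0, 2, (128,))
for epoch in range(20):                           # fine-tune only the head
    opt.zero_grad()
    loss = loss_fn(model(x_target), y_target)
    loss.backward()
    opt.step()
print(f"target-domain loss after fine-tuning: {loss.item():.3f}")
```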
{"title":"DTL-5G: Deep transfer learning-based DDoS attack detection in 5G and beyond networks","authors":"Behnam Farzaneh , Nashid Shahriar , Abu Hena Al Muktadir , Md. Shamim Towhid , Mohammad Sadegh Khosravani","doi":"10.1016/j.comcom.2024.107927","DOIUrl":"10.1016/j.comcom.2024.107927","url":null,"abstract":"<div><p>Network slicing is considered as a key enabler for 5G and beyond mobile networks for supporting a variety of new services, including enhanced mobile broadband, ultra-reliable and low-latency communication, and massive connectivity, on the same physical infrastructure. However, this technology increases the susceptibility of networks to cyber threats, particularly Distributed Denial-of-Service (DDoS) attacks. These attacks have the potential to cause service quality degradation by overloading network function(s) that are central to network slices to operate seamlessly. This calls for an Intrusion Detection System (IDS) as a shield against a wide array of DDoS attacks. In this regard, one promising solution would be the use of Deep Learning (DL) models for detecting possible DDoS attacks, an approach that has already made its way into the field given its manifest effectiveness. However, one particular challenge with DL models is that they require large volumes of labeled data for efficient training, which are not readily available in operational networks. A possible workaround is to resort to Transfer Learning (TL) approaches that can utilize the knowledge learned from prior training to a target domain with limited labeled data. This paper investigates how Deep Transfer Learning (DTL) based approaches can improve the detection of DDoS attacks in 5G networks by leveraging DL models, such as Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), Residual Network (ResNet), and Inception as base models. A comprehensive dataset generated in our 5G network slicing testbed serves as the source dataset for DTL, which includes both benign and different types of DDoS attack traffic. After learning features, patterns, and representations from the source dataset using initial training, we fine-tune base models using a variety of TL processes on a target DDoS attack dataset. The 5G-NIDD dataset, which has a sparse amount of annotated traffic pertaining to several DDoS attack generated in a real 5G network, is chosen as the target dataset. The results show that the proposed DTL models have performance improvements in detecting different types of DDoS attacks in 5G-NIDD dataset compared to the case when no TL is applied. According to the results, the BiLSTM and Inception models being identified as the top-performing models. BiLSTM indicates an improvement of 13.90%, 21.48%, and 12.22% in terms of accuracy, recall, and F1-score, respectively, whereas, Inception demonstrates an enhancement of 10.09% in terms of precision, compared to the models that do not adopt TL.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107927"},"PeriodicalIF":4.5,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142129047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Noncoherent multiuser massive SIMO with mixed differential and index modulation
Pub Date : 2024-08-24 DOI: 10.1016/j.comcom.2024.107931
Xiangchuan Gao , Yancong Li , Zheng Dong , Xingwang Li
This paper proposes a new mixed differential and index modulation framework for noncoherent multiuser massive single-input multiple-output (SIMO) systems. While differential modulation and detection is a popular noncoherent scheme, its constellation collisions limit the achievable error performance. To address this issue, we introduce a user with binary index modulation (IM) among the differential users, greatly reducing collisions. We then analyze a three-user SIMO system with binary modulations, obtaining a closed-form bit error rate (BER) expression together with a fast noncoherent maximum-likelihood (ML) detection algorithm for each user. Furthermore, a closed-form optimal power loading vector is derived by minimizing the worst-case BER under individual power constraints. Finally, an efficient one-dimensional bisection search algorithm is employed to optimize the constellations for arbitrary numbers of differential users and constellation sizes by minimizing the system BER. Simulation results validate the theoretical analysis and demonstrate the superiority of the proposed scheme over existing differential schemes.
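The paper's BER expression is not reproduced here; the sketch below shows only the generic one-dimensional bisection idea it relies on: for a smooth quasi-convex objective (a toy stand-in for the system BER as a function of one constellation parameter), bisect on the sign of the derivative until the minimizer is bracketed tightly. The toy objective and the search interval are assumptions.

```python
# One-dimensional bisection search for the minimizer of a quasi-convex objective.
import math

def bisect_minimize(f, lo, hi, tol=1e-6, eps=1e-7):
    """Bisection on the numerical derivative of f over [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        slope = (f(mid + eps) - f(mid - eps)) / (2 * eps)
        if slope > 0:          # minimum lies to the left
            hi = mid
        else:                  # minimum lies to the right
            lo = mid
    return 0.5 * (lo + hi)

# Toy "BER vs. constellation parameter" curve with a single interior minimum.
ber = lambda r: math.exp(-r) + 0.1 * (r - 0.5) ** 2
r_opt = bisect_minimize(ber, 0.0, 5.0)
print(f"optimal parameter: {r_opt:.4f}, objective: {ber(r_opt):.6f}")
```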
{"title":"Noncoherent multiuser massive SIMO with mixed differential and index modulation","authors":"Xiangchuan Gao , Yancong Li , Zheng Dong , Xingwang Li","doi":"10.1016/j.comcom.2024.107931","DOIUrl":"10.1016/j.comcom.2024.107931","url":null,"abstract":"<div><p>This paper proposes a new mixed differential and index modulation framework for noncoherent multiuser massive single-input multiple-output (SIMO) systems. While differential modulation and detection is a popular noncoherent scheme, its constellation collisions limit the resulting error performance. To address this issue, we introduce a user with binary index modulation (IM) among the differential users, achieving much reduced collisions. We then analyze a three-user SIMO system with binary modulations, attained a closed-form bit error rate (BER) expression with a fast noncoherent maximum-likelihood (ML) detection algorithm for each user. Furthermore, a closed-form optimal power loading vector is derived by minimizing the worst-case BER under individual power constraints. Finally, an efficient one-dimensional bisection search algorithm is employed to optimize constellations for arbitrary numbers of differential users and constellation sizes by minimizing the system BER. Simulation results validate the theoretical analysis and demonstrate the superiority of the proposed scheme compared to existing differential schemes.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107931"},"PeriodicalIF":4.5,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0140366424002780/pdfft?md5=04ca178949f6a116168982cd2b675a94&pid=1-s2.0-S0140366424002780-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142161739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure workflow scheduling algorithm utilizing hybrid optimization in mobile edge computing environments
Pub Date : 2024-08-23 DOI: 10.1016/j.comcom.2024.107929
Dileep Kumar Sajnani, Xiaoping Li, Abdul Rasheed Mahesar
The rapid advancement of mobile communication technology and devices has greatly improved our way of life. It also presents a new possibility: data sources can be used to accomplish computing tasks at nearby locations. Mobile Edge Computing (MEC) is a computing model that provides computer resources specifically designed to handle mobile tasks. Nevertheless, certain obstacles must be carefully tackled, specifically regarding the security and quality of service of workflow scheduling over MEC. This research proposes a new workflow scheduling method based on Feedback Artificial Remora Optimization (FARO) to address the scheduling of processes with improved security in MEC. The fitness function takes multiple objectives into account, namely CPU utilization, memory utilization, encryption cost, and execution time, and is used to enhance the scheduling of workflow tasks under security considerations. The FARO algorithm is a combination of the Feedback Artificial Tree (FAT) and the Remora Optimization Algorithm (ROA). The experimental findings demonstrate that the developed approach surpassed current methods by a large margin in terms of CPU use, memory consumption, encryption cost, and execution time, with values of 0.012, 0.010, 0.017, and 0.036, respectively.
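The abstract lists the four objectives but not how they are combined; a common choice, sketched here, is a normalized weighted sum that a metaheuristic such as FARO would minimize when mapping workflow tasks to edge servers. The equal weights, the per-objective proxies, and the random-search stand-in for the FAT/ROA update loop are all assumptions for illustration.

```python
# Multi-objective fitness for a task-to-server assignment (lower is better).
import random, statistics

random.seed(0)
TASKS = [dict(work=random.uniform(1, 4), mem=random.uniform(0.5, 2.0))
         for _ in range(12)]
SERVERS = [dict(speed=2.0, secure=True), dict(speed=3.0, secure=False),
           dict(speed=1.5, secure=True)]
W = dict(cpu=0.25, mem=0.25, enc=0.25, time=0.25)    # assumed equal weights
ENC_COST = 0.3                                       # per task on an insecure host

def fitness(assign):
    """Weighted sum of makespan, CPU-load imbalance, peak memory, encryption cost."""
    busy = [0.0] * len(SERVERS)
    mem = [0.0] * len(SERVERS)
    enc = 0.0
    for task, s in zip(TASKS, assign):
        busy[s] += task["work"] / SERVERS[s]["speed"]
        mem[s] += task["mem"]
        if not SERVERS[s]["secure"]:
            enc += ENC_COST                          # pay to encrypt the task
    return (W["time"] * max(busy)                    # execution time (makespan)
            + W["cpu"] * statistics.pstdev(busy)     # CPU-load imbalance
            + W["mem"] * max(mem)                    # peak memory on a host
            + W["enc"] * enc)                        # total encryption cost

# Random-search stand-in for the FARO (FAT + ROA) optimization loop.
best = min((tuple(random.randrange(len(SERVERS)) for _ in TASKS)
            for _ in range(2000)), key=fitness)
print("best fitness:", round(fitness(best), 3))
```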
{"title":"Secure workflow scheduling algorithm utilizing hybrid optimization in mobile edge computing environments","authors":"Dileep Kumar Sajnani, Xiaoping Li, Abdul Rasheed Mahesar","doi":"10.1016/j.comcom.2024.107929","DOIUrl":"10.1016/j.comcom.2024.107929","url":null,"abstract":"<div><p>The rapid advancement of mobile communication technology and devices has greatly improved our way of life. It also presents a new possibility that data sources can be used to accomplish computing tasks at nearby locations. Mobile Edge Computing (MEC) is a computing model that provides computer resources specifically designed to handle mobile tasks. Nevertheless, there are certain obstacles that must be carefully tackled, specifically regarding the security and quality of services in the workflow scheduling over MEC. This research proposes a new method called Feedback Artificial Remora Optimization (FARO)-based workflow scheduling method to address the issues of scheduling processes with improved security in MEC. In this context, the fitness functions that are taken into account include multi-objective, such as CPU utilization, memory utilization, encryption cost, and execution time. These functions are used to enhance the scheduling of workflow tasks based on security considerations. The FARO algorithm is a combination of the Feedback Artificial Tree (FAT) and the Remora Optimization Algorithm (ROA). The experimental findings have demonstrated that the developed approach surpassed current methods by a large margin in terms of CPU use, memory consumption, encryption cost, and execution time, with values of 0.012, 0.010, 0.017, and 0.036, respectively.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"226 ","pages":"Article 107929"},"PeriodicalIF":4.5,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142097136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}