Pub Date : 2024-08-28  DOI: 10.1016/j.comcom.2024.107932
Shujie Yang, Kefei Song, Zhenhui Yuan, Lujie Zhong, Mu Wang, Xiang Ji, Changqiao Xu
In the era of Industry 5.0, with the deep convergence of the Industrial Internet of Things (IIoT) and 5G technology, stable transmission of massive data over heterogeneous networks becomes crucial. It is not only key to improving the efficiency of human–machine collaboration but also the basis for ensuring system continuity and reliability. The arrival of 5G has brought new challenges to IIoT communication in heterogeneous environments: due to the inherent characteristics of wireless networks, such as random packet loss and network jitter, traditional transmission control schemes often fail to achieve optimal performance. In this paper, we propose aBBR, a novel transmission control algorithm that augments BBRv3. aBBR dynamically adjusts the sending window size through real-time analysis to enhance transmission performance in heterogeneous networks. Simulation results show that, compared to traditional algorithms, aBBR achieves the best overall performance in terms of throughput, latency, and retransmissions. When the link exhibits random packet loss, aBBR improves throughput by an average of 29.3% and decreases the retransmission rate by 18.5%, while keeping transmission delay at the same level as BBRv3.
{"title":"aBBR: An augmented BBR for collaborative intelligent transmission over heterogeneous networks in IIoT","authors":"Shujie Yang , Kefei Song , Zhenhui Yuan , Lujie Zhong , Mu Wang , Xiang Ji , Changqiao Xu","doi":"10.1016/j.comcom.2024.107932","DOIUrl":"10.1016/j.comcom.2024.107932","url":null,"abstract":"<div><p>In the era of Industry 5.0, with the deep convergence of Industrial Internet of Things (IIoT) and 5G technology, stable transmission of massive data in heterogeneous networks becomes crucial. This is not only the key to improving the efficiency of human–machine collaboration, but also the basis for ensuring system continuity and reliability. The arrival of 5G has brought new challenges to the communication of IIoT in heterogeneous environments. Due to the inherent characteristics of wireless networks, such as random packet loss and network jitter, traditional transmission control schemes often fail to achieve optimal performance. In this paper we propose a novel transmission control algorithm, aBBR. It is an augmented algorithm based on BBRv3. aBBR dynamically adjusts the sending window size through real-time analysis to enhance the transmission performance in heterogeneous networks. Simulation results show that, compared to traditional algorithms, aBBR demonstrates the best comprehensive performance in terms of throughput, latency, and retransmission. When random packet loss exists in the link, aBBR improves the throughput by an average of 29.3% and decreases the retransmission rate by 18.5% while keeping the transmission delay at the same level as BBRv3.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"226 ","pages":"Article 107932"},"PeriodicalIF":4.5,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142097138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-27  DOI: 10.1016/j.comcom.2024.107933
Yuxin Gao, Jianming Zhu, Peikun Ni
The impact of social networks on real-life scenarios is intensifying as the information they disseminate diversifies. Consequently, the interconnection between social networks and tangible networks is strengthening. Notably, we observe that messages disseminated on social networks, particularly those soliciting aid, exert a significant influence on the underlying network structure. This study investigates the role and importance of social networks in the information dissemination process and constructs a linear threshold model tailored to the dissemination of emergency information across both real and social networks, building on conventional models of information spread. We develop a model that adds connection edges to social networks in order to enhance their value. We further show that the objective function is submodular and that the resulting problem is NP-hard; consequently, algorithms with an approximation guarantee of 1 - e^(-1) - θ′ can be used to solve the problem while ensuring the accuracy of the solution. We also analyze the complexity of the algorithm. Finally, we validate our conclusions on three publicly available datasets and one real dataset and analyze the resulting solutions.
{"title":"Research on maximizing real demand response based on link addition in social networks","authors":"Yuxin Gao , Jianming Zhu , Peikun Ni","doi":"10.1016/j.comcom.2024.107933","DOIUrl":"10.1016/j.comcom.2024.107933","url":null,"abstract":"<div><p>The impact of social networks on real-life scenarios is intensifying with the diversification of the information they disseminate. Consequently, the interconnection between social networks and tangible networks is strengthening. Notably, we have observed that messages disseminated on social networks, particularly those soliciting aid, exert a significant influence on the underlying network structure. This study aims to investigate the role and importance of social networks in the information dissemination process, as well as to construct a linear threshold model tailored for the dissemination of emergency information across both real and social networks, leveraging conventional models of information spread. We have developed a model to increase the number of connection edges in social networks in order to enhance their worth. Additionally, we discovered that the objective function possesses submodular features and thus the created problem is NP-hard. As a result, we can use algorithms with approximative assurances of <span><math><mrow><mn>1</mn><mo>−</mo><msup><mrow><mi>e</mi></mrow><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>−</mo><msup><mrow><mi>θ</mi></mrow><mrow><mo>′</mo></mrow></msup></mrow></math></span> to solve our problem and ensures the accuracy of the solution. We also analyze the complexity of the algorithm in solving this problem. Finally we validated our conclusions with three publicly available datasets and one real data set to analysis the results of the solution.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107933"},"PeriodicalIF":4.5,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142129171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-26  DOI: 10.1016/j.comcom.2024.107930
Ana Larrañaga-Zumeta, M. Carmen Lucas-Estañ, Javier Gozálvez, Aitor Arriola
The integration of 5G (5th Generation) and TSN (Time-Sensitive Networking) networks is key to supporting emerging Industry 4.0 applications, where the flexibility and adaptability of 5G are combined with the deterministic communication features provided by TSN. For an effective and efficient 5G-TSN integration, both networks need to be coordinated. However, 5G was not designed to provide deterministic communications. In this context, this paper proposes a 5G configured grant scheduling scheme that coordinates its decisions with the TSN schedule to satisfy the deterministic, end-to-end latency requirements of industrial applications. The proposed scheme avoids the scheduling conflicts that can arise when packets of different TSN flows are generated with different periodicities, efficiently coordinates the access of different TSN flows to the radio resources, and complies with the 3GPP (Third Generation Partnership Project) standard requirements.
{"title":"5G configured grant scheduling for seamless integration with TSN industrial networks","authors":"Ana Larrañaga-Zumeta , M. Carmen Lucas-Estañ , Javier Gozálvez , Aitor Arriola","doi":"10.1016/j.comcom.2024.107930","DOIUrl":"10.1016/j.comcom.2024.107930","url":null,"abstract":"<div><p>The integration of 5G (5th Generation) and TSN (Time Sensitive Networking) networks is key for the support of emerging Industry 4.0 applications, where the flexibility and adaptability of 5G will be combined with the deterministic communications features provided by TSN. For an effective and efficient 5G-TSN integration both networks need to be coordinated. However, 5G has not been designed to provide deterministic communications. In this context, this paper proposes a 5G configured grant scheduling scheme that coordinates its decision with the TSN scheduling to satisfy the deterministic and end-to-end latency requirements of industrial applications. The proposed scheme avoids scheduling conflicts that can happen when packets of different TSN flows are generated with different periodicities. The proposed scheme efficiently coordinates the access to the radio resources of different TSN flows and complies with the 3GPP (Third Generation Partnership Project) standard requirements.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"226 ","pages":"Article 107930"},"PeriodicalIF":4.5,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142088592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-24  DOI: 10.1016/j.comcom.2024.107927
Behnam Farzaneh, Nashid Shahriar, Abu Hena Al Muktadir, Md. Shamim Towhid, Mohammad Sadegh Khosravani
Network slicing is considered a key enabler for 5G and beyond mobile networks, supporting a variety of new services, including enhanced mobile broadband, ultra-reliable and low-latency communication, and massive connectivity, on the same physical infrastructure. However, this technology increases the susceptibility of networks to cyber threats, particularly Distributed Denial-of-Service (DDoS) attacks. These attacks can degrade service quality by overloading the network functions on which network slices depend to operate seamlessly. This calls for an Intrusion Detection System (IDS) as a shield against a wide array of DDoS attacks. In this regard, one promising solution is the use of Deep Learning (DL) models for detecting possible DDoS attacks, an approach that has already made its way into the field given its manifest effectiveness. However, one particular challenge with DL models is that they require large volumes of labeled data for efficient training, which are not readily available in operational networks. A possible workaround is to resort to Transfer Learning (TL) approaches that can carry knowledge learned from prior training over to a target domain with limited labeled data. This paper investigates how Deep Transfer Learning (DTL) based approaches can improve the detection of DDoS attacks in 5G networks, using Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), Residual Network (ResNet), and Inception models as base models. A comprehensive dataset generated in our 5G network slicing testbed, containing both benign traffic and different types of DDoS attack traffic, serves as the source dataset for DTL. After learning features, patterns, and representations from the source dataset during initial training, we fine-tune the base models using a variety of TL processes on a target DDoS attack dataset. The 5G-NIDD dataset, which contains a sparse amount of annotated traffic pertaining to several DDoS attacks generated in a real 5G network, is chosen as the target dataset. The results show that the proposed DTL models improve the detection of different types of DDoS attacks in the 5G-NIDD dataset compared to the case where no TL is applied, with the BiLSTM and Inception models identified as the top performers. BiLSTM shows an improvement of 13.90%, 21.48%, and 12.22% in accuracy, recall, and F1-score, respectively, whereas Inception demonstrates an enhancement of 10.09% in precision, compared to the models that do not adopt TL.
{"title":"DTL-5G: Deep transfer learning-based DDoS attack detection in 5G and beyond networks","authors":"Behnam Farzaneh , Nashid Shahriar , Abu Hena Al Muktadir , Md. Shamim Towhid , Mohammad Sadegh Khosravani","doi":"10.1016/j.comcom.2024.107927","DOIUrl":"10.1016/j.comcom.2024.107927","url":null,"abstract":"<div><p>Network slicing is considered as a key enabler for 5G and beyond mobile networks for supporting a variety of new services, including enhanced mobile broadband, ultra-reliable and low-latency communication, and massive connectivity, on the same physical infrastructure. However, this technology increases the susceptibility of networks to cyber threats, particularly Distributed Denial-of-Service (DDoS) attacks. These attacks have the potential to cause service quality degradation by overloading network function(s) that are central to network slices to operate seamlessly. This calls for an Intrusion Detection System (IDS) as a shield against a wide array of DDoS attacks. In this regard, one promising solution would be the use of Deep Learning (DL) models for detecting possible DDoS attacks, an approach that has already made its way into the field given its manifest effectiveness. However, one particular challenge with DL models is that they require large volumes of labeled data for efficient training, which are not readily available in operational networks. A possible workaround is to resort to Transfer Learning (TL) approaches that can utilize the knowledge learned from prior training to a target domain with limited labeled data. This paper investigates how Deep Transfer Learning (DTL) based approaches can improve the detection of DDoS attacks in 5G networks by leveraging DL models, such as Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), Residual Network (ResNet), and Inception as base models. A comprehensive dataset generated in our 5G network slicing testbed serves as the source dataset for DTL, which includes both benign and different types of DDoS attack traffic. After learning features, patterns, and representations from the source dataset using initial training, we fine-tune base models using a variety of TL processes on a target DDoS attack dataset. The 5G-NIDD dataset, which has a sparse amount of annotated traffic pertaining to several DDoS attack generated in a real 5G network, is chosen as the target dataset. The results show that the proposed DTL models have performance improvements in detecting different types of DDoS attacks in 5G-NIDD dataset compared to the case when no TL is applied. According to the results, the BiLSTM and Inception models being identified as the top-performing models. BiLSTM indicates an improvement of 13.90%, 21.48%, and 12.22% in terms of accuracy, recall, and F1-score, respectively, whereas, Inception demonstrates an enhancement of 10.09% in terms of precision, compared to the models that do not adopt TL.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107927"},"PeriodicalIF":4.5,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142129047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-24  DOI: 10.1016/j.comcom.2024.107931
Xiangchuan Gao, Yancong Li, Zheng Dong, Xingwang Li
This paper proposes a new mixed differential and index modulation framework for noncoherent multiuser massive single-input multiple-output (SIMO) systems. While differential modulation and detection is a popular noncoherent scheme, its constellation collisions limit the achievable error performance. To address this issue, we introduce a user with binary index modulation (IM) among the differential users, substantially reducing collisions. We then analyze a three-user SIMO system with binary modulations, obtaining a closed-form bit error rate (BER) expression together with a fast noncoherent maximum-likelihood (ML) detection algorithm for each user. Furthermore, a closed-form optimal power loading vector is derived by minimizing the worst-case BER under individual power constraints. Finally, an efficient one-dimensional bisection search algorithm is employed to optimize the constellations for arbitrary numbers of differential users and constellation sizes by minimizing the system BER. Simulation results validate the theoretical analysis and demonstrate the superiority of the proposed scheme over existing differential schemes.
{"title":"Noncoherent multiuser massive SIMO with mixed differential and index modulation","authors":"Xiangchuan Gao , Yancong Li , Zheng Dong , Xingwang Li","doi":"10.1016/j.comcom.2024.107931","DOIUrl":"10.1016/j.comcom.2024.107931","url":null,"abstract":"<div><p>This paper proposes a new mixed differential and index modulation framework for noncoherent multiuser massive single-input multiple-output (SIMO) systems. While differential modulation and detection is a popular noncoherent scheme, its constellation collisions limit the resulting error performance. To address this issue, we introduce a user with binary index modulation (IM) among the differential users, achieving much reduced collisions. We then analyze a three-user SIMO system with binary modulations, attained a closed-form bit error rate (BER) expression with a fast noncoherent maximum-likelihood (ML) detection algorithm for each user. Furthermore, a closed-form optimal power loading vector is derived by minimizing the worst-case BER under individual power constraints. Finally, an efficient one-dimensional bisection search algorithm is employed to optimize constellations for arbitrary numbers of differential users and constellation sizes by minimizing the system BER. Simulation results validate the theoretical analysis and demonstrate the superiority of the proposed scheme compared to existing differential schemes.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107931"},"PeriodicalIF":4.5,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0140366424002780/pdfft?md5=04ca178949f6a116168982cd2b675a94&pid=1-s2.0-S0140366424002780-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142161739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-23  DOI: 10.1016/j.comcom.2024.107929
Dileep Kumar Sajnani, Xiaoping Li, Abdul Rasheed Mahesar
The rapid advancement of mobile communication technology and devices has greatly improved our way of life. It also creates the possibility of using data sources to accomplish computing tasks at nearby locations. Mobile Edge Computing (MEC) is a computing model that provides computing resources specifically designed to handle mobile tasks. Nevertheless, certain obstacles must be carefully tackled, specifically regarding the security and quality of service of workflow scheduling over MEC. This research proposes a Feedback Artificial Remora Optimization (FARO)-based workflow scheduling method to address the problem of scheduling processes with improved security in MEC. The fitness function is multi-objective, taking into account CPU utilization, memory utilization, encryption cost, and execution time, and is used to enhance the scheduling of workflow tasks based on security considerations. The FARO algorithm is a combination of the Feedback Artificial Tree (FAT) and the Remora Optimization Algorithm (ROA). The experimental findings demonstrate that the developed approach surpasses current methods by a large margin in terms of CPU use, memory consumption, encryption cost, and execution time, with values of 0.012, 0.010, 0.017, and 0.036, respectively.
{"title":"Secure workflow scheduling algorithm utilizing hybrid optimization in mobile edge computing environments","authors":"Dileep Kumar Sajnani, Xiaoping Li, Abdul Rasheed Mahesar","doi":"10.1016/j.comcom.2024.107929","DOIUrl":"10.1016/j.comcom.2024.107929","url":null,"abstract":"<div><p>The rapid advancement of mobile communication technology and devices has greatly improved our way of life. It also presents a new possibility that data sources can be used to accomplish computing tasks at nearby locations. Mobile Edge Computing (MEC) is a computing model that provides computer resources specifically designed to handle mobile tasks. Nevertheless, there are certain obstacles that must be carefully tackled, specifically regarding the security and quality of services in the workflow scheduling over MEC. This research proposes a new method called Feedback Artificial Remora Optimization (FARO)-based workflow scheduling method to address the issues of scheduling processes with improved security in MEC. In this context, the fitness functions that are taken into account include multi-objective, such as CPU utilization, memory utilization, encryption cost, and execution time. These functions are used to enhance the scheduling of workflow tasks based on security considerations. The FARO algorithm is a combination of the Feedback Artificial Tree (FAT) and the Remora Optimization Algorithm (ROA). The experimental findings have demonstrated that the developed approach surpassed current methods by a large margin in terms of CPU use, memory consumption, encryption cost, and execution time, with values of 0.012, 0.010, 0.017, and 0.036, respectively.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"226 ","pages":"Article 107929"},"PeriodicalIF":4.5,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142097136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-22  DOI: 10.1016/j.comcom.2024.107926
Jinyuan Gu, Mingxing Wang, Wei Duan, Lei Zhang, Huaiping Zhang
Considering imperfect successive interference cancellation (SIC) for non-orthogonal multiple access (NOMA) communications, this work studies cooperative reconfigurable intelligent surface (RIS)- and relay-assisted systems under Nakagami-m fading. We focus on comparing the performance of these cooperative schemes under different channel conditions and system parameters. In addition, we analyze the minimum number of RIS elements the RIS-assisted scheme requires to achieve the same performance as the relay-assisted scheme at a given signal-to-noise ratio (SNR). The optimal continuous and discrete phase shift designs of the RIS are also discussed and compared, aiming to study the impact of residual phase errors on performance. In particular, we compare active and passive RIS under the same system power constraint. Simulation results, which demonstrate the reliability of the analysis, validate that the relay-assisted scheme is superior to the RIS-assisted one when the number of RIS elements is small and the transmit power is low. The results also confirm that the deployment of RIS should consider the actual conditions of the application scenario.
{"title":"RIS-NOMA communications over Nakagami-m fading with imperfect successive interference cancellation","authors":"Jinyuan Gu , Mingxing Wang , Wei Duan , Lei Zhang , Huaiping Zhang","doi":"10.1016/j.comcom.2024.107926","DOIUrl":"10.1016/j.comcom.2024.107926","url":null,"abstract":"<div><p>Considering imperfect successive interference cancellation (SIC) for non-orthogonal multiple access (NOMA) communications, this work studies the cooperative reconfigurable intelligent surface (RIS)- and relay-assisted system under Nakagami-<em>m</em> fading. We focus on the performance comparison for such cooperative schemes, under different channel conditions and system parameters. In addition, we analyze the minimum required RIS elements number of the RIS-assisted scheme to achieve the same performance of the relay-assisted scheme with given signal-to-noise ratio (SNR). The cases of the optimal continuous phase shift and discrete phase shift designs of RIS are also discussed. We make a comparison between them, aiming to study the impact of the residual phase errors on performance. Specially, we compare the active RIS and passive RIS with the same system power constraint. Simulation results demonstrating the reliability of the analysis, validate that the relay-assisted scheme is superior to that of RIS-assisted one when the RIS elements number is small and transmitted power is lower. The results also confirm that the deployment of RIS should consider the actual situation of the application scenario.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"226 ","pages":"Article 107926"},"PeriodicalIF":4.5,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142097137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-19  DOI: 10.1016/j.comcom.2024.107925
Xu Zhou, Jing Yang, Yijun Li, Shaobo Li, Zhidong Su
Traditional techniques for edge computing resource scheduling may waste large amounts of server resources and energy; exploring new approaches to achieve higher resource and energy efficiency is therefore a new challenge. Deep reinforcement learning (DRL) offers a promising solution by balancing resource utilization, latency, and energy optimization. However, current methods often focus solely on energy optimization for offloading and computing tasks, neglecting the impact of the number of servers and their resource operation status on energy efficiency and load balancing. On the other hand, prioritizing latency optimization may result in resource imbalance and increased energy waste. To address these challenges, we propose a novel energy optimization method coupled with a load balancing strategy. Our approach aims to minimize overall energy consumption and achieve server load balancing under latency constraints, by controlling the number of active servers and individual server load states through a two-stage DRL-based energy and resource optimization algorithm. Experimental results demonstrate that our scheme saves an average of 19.84% energy compared to mainstream reinforcement learning methods, and 49.60% and 45.33% compared to Round Robin (RR) and random scheduling, respectively. Additionally, our method is optimized for reward value, load balancing, runtime, and anti-interference capability.
{"title":"Deep reinforcement learning-based resource scheduling for energy optimization and load balancing in SDN-driven edge computing","authors":"Xu Zhou , Jing Yang , Yijun Li , Shaobo Li , Zhidong Su","doi":"10.1016/j.comcom.2024.107925","DOIUrl":"10.1016/j.comcom.2024.107925","url":null,"abstract":"<div><p>Traditional techniques for edge computing resource scheduling may result in large amounts of wasted server resources and energy consumption; thus, exploring new approaches to achieve higher resource and energy efficiency is a new challenge. Deep reinforcement learning (DRL) offers a promising solution by balancing resource utilization, latency, and energy optimization. However, current methods often focus solely on energy optimization for offloading and computing tasks, neglecting the impact of server numbers and resource operation status on energy efficiency and load balancing. On the other hand, prioritizing latency optimization may result in resource imbalance and increased energy waste. To address these challenges, we propose a novel energy optimization method coupled with a load balancing strategy. Our approach aims to minimize overall energy consumption and achieve server load balancing under latency constraints. This is achieved by controlling the number of active servers and individual server load states through a two stage DRL-based energy and resource optimization algorithm. Experimental results demonstrate that our scheme can save an average of 19.84% energy compared to mainstream reinforcement learning methods and 49.60% and 45.33% compared to Round Robin (RR) and random scheduling, respectively. Additionally, our method is optimized for reward value, load balancing, runtime, and anti-interference capability.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"226 ","pages":"Article 107925"},"PeriodicalIF":4.5,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142084289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-12  DOI: 10.1016/j.comcom.2024.107924
Thanh Trung Nguyen, Minh Hai Vu, Thi Ha Ly Dinh, Thanh Hung Nguyen, Phi Le Nguyen, Kien Nguyen
In the 5G and beyond era, multipath transport protocols, including MPQUIC, are necessary in various use cases. In MPQUIC, one of the most critical issues is efficiently scheduling upcoming packets onto several paths while accounting for path dynamics. To this end, this paper introduces FQ-SAT, a novel Fuzzy Q-learning-based MPQUIC scheduler for data transmission optimization, including download time, in heterogeneous wireless networks. Unlike previous works, FQ-SAT combines Q-learning and fuzzy logic in an MPQUIC scheduler to determine optimal transmission over heterogeneous paths. FQ-SAT leverages the self-learning ability of reinforcement learning (i.e., a Q-learning model) to deal with heterogeneity. Moreover, FQ-SAT uses fuzzy logic to dynamically adjust the proposed Q-learning model's hyper-parameters as network conditions change rapidly. We evaluate FQ-SAT extensively in various scenarios in both simulated and real networks. The results show that, compared to state-of-the-art MPQUIC schedulers, FQ-SAT reduces single-file download time by 3.2%–13.5% in simulation and by 4.1%–13.8% in the real network, reduces the download time of all resources by up to 20.4% in web browsing evaluation, and reaches up to 97.5% on-time segments in video streaming.
{"title":"FQ-SAT: A fuzzy Q-learning-based MPQUIC scheduler for data transmission optimization","authors":"Thanh Trung Nguyen , Minh Hai Vu , Thi Ha Ly Dinh , Thanh Hung Nguyen , Phi Le Nguyen , Kien Nguyen","doi":"10.1016/j.comcom.2024.107924","DOIUrl":"10.1016/j.comcom.2024.107924","url":null,"abstract":"<div><p>In the 5G and beyond era, multipath transport protocols, including MPQUIC, are necessary in various use cases. In MPQUIC, one of the most critical issues is efficiently scheduling the upcoming transmission packets on several paths considering path dynamicity. To this end, this paper introduces FQ-SAT - a novel Fuzzy Q-learning-based MPQUIC scheduler for data transmission optimization, including download time, in heterogeneous wireless networks. Different from previous works, FQ-SAT combines Q-learning and Fuzzy logic in an MPQUIC scheduler to determine optimal transmission on heterogeneous paths. FQ-SAT leverages the self-learning ability of reinforcement learning (i.e., in a Q-learning model) to deal with heterogeneity. Moreover, FQ-SAT facilitates Fuzzy logic to dynamically adjust the proposed Q-learning model’s hyper-parameters along with the networks’ rapid changes. We evaluate FQ-SAT extensively in various scenarios in both simulated and actual networks. The results show that FQ-SAT reduces the single-file download time by 3.2%–13.5% in simulation and by 4.1%–13.8% in actual network, reduces the download time of all resources up to 20.4% in web browsing evaluation, and reaches percentage of on-time segments up to 97.5% in video streaming, compared to state-of-the-art MPQUIC schedulers.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"226 ","pages":"Article 107924"},"PeriodicalIF":4.5,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142001948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-08  DOI: 10.1016/j.comcom.2024.08.004
Fei Tang, Jinlan Peng, Ping Wang, Huihui Zhu, Tingxian Xu
Byzantine Fault Tolerance (BFT) consensus protocols are widely used in consortium blockchains to ensure data consistency. However, BFT protocols are generally static, which means that nodes joining or exiting dynamically forces a reconfiguration of the consortium blockchain system. Moreover, most BFT protocols cannot support clearing slow, crashed, or faulty nodes, which limits the application of consortium blockchains. To solve these problems, this paper proposes a new Dynamic Scalable BFT (D-SBFT) protocol. D-SBFT optimizes SBFT by using Distributed Key Generation (DKG) technology and the BLS aggregate signature scheme. On the basis of SBFT, we add Join, Exit, and Clear algorithms. The Join and Exit algorithms enable nodes to actively join and exit the consortium blockchain more flexibly, while Clear can remove slow, crashed, or faulty nodes from the consortium blockchain. Experimental results show that our D-SBFT protocol can efficiently handle dynamic node changes while maintaining good performance in the consensus process.
{"title":"Improved dynamic Byzantine Fault Tolerant consensus mechanism","authors":"Fei Tang , Jinlan Peng , Ping Wang , Huihui Zhu , Tingxian Xu","doi":"10.1016/j.comcom.2024.08.004","DOIUrl":"10.1016/j.comcom.2024.08.004","url":null,"abstract":"<div><p>The Byzantine Fault Tolerance (BFT) consensus protocols are widely used in consortium blockchain to ensure data consistency. However, BFT protocols are generally static which means that the dynamic joining and exiting of nodes will lead to the reconfiguration of the consortium blockchain system. Moreover, most BFT protocols cannot support the clearing operation of slow, crashed, or faulty nodes, which limits the application of consortium blockchain. In order to solve these problems, this paper proposes a new Dynamic Scalable BFT (D-SBFT) protocol. D-SBFT optimizes SBFT by using Distributed Key Generation (DKG) technology and BLS aggregate signature scheme. On the basis of SBFT, we add <em>Join</em>, <em>Exit</em>, and <em>Clear</em> algorithms. Among them, <em>Join</em> and <em>Exit</em> algorithms enable nodes to actively join and exit the consortium blockchain more flexibly. <em>Clear</em> can remove slow, crashed or faulty nodes from the consortium blockchain. Experimental results show that our D-SBFT protocol can efficiently implement node dynamic change while exhibiting good performance in consensus process.</p></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"226 ","pages":"Article 107922"},"PeriodicalIF":4.5,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142020471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}