Scalability is a key requirement for present-generation mobile communication networks and for 5G and beyond. When a user in motion moves from one cell site to another, the handover procedure becomes critical: it ensures that the user keeps a consistent connection without interruption. Nevertheless, the classic handover process in cellular networks has drawbacks such as service interruptions, degraded packet transmission, and increased latency, which are unacceptable for emerging applications with stringent latency requirements. To overcome these challenges and improve handover in 5G and future mobile networks, this article puts forth a predictive handover mechanism based on a reinforcement learning (RL) algorithm. The RL algorithm outperforms a conventional machine learning (ML) baseline in several respects: a higher handover success rate (∼95% vs. ∼90%), lower latency (∼30 ms vs. ∼40 ms), a reduced failure rate (∼5% vs. ∼10%), and a shorter disconnection time (∼50 ms vs. ∼70 ms). This demonstrates the RL algorithm's superior ability to adapt to dynamic network conditions.
"Predictive handover mechanism for seamless mobility in 5G and beyond networks" by Thafer H. Sulaiman and Hamed S. Al-Raweshidy. IET Communications, vol. 19, no. 1, 8 January 2025. DOI: 10.1049/cmu2.12878. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.12878
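As a rough illustration of the predictive idea, a tabular Q-learning agent can learn when to trigger a handover from a discretised signal-quality state. The state space, candidate-cell set, and reward shaping below are illustrative assumptions for a minimal sketch, not the authors' actual model:

```python
import random

# Illustrative sketch only: a tabular Q-learning agent deciding whether to
# stay on the serving cell (action 0) or hand over to a candidate cell.
# States, candidates, and rewards are assumptions for demonstration.

N_ACTIONS = 3                            # 0 = stay, 1..2 = hand over
STATES = ["weak", "medium", "strong"]    # discretised serving-cell signal

def reward(state, action, best_cell):
    if state == "strong":
        return 1.0 if action == 0 else -1.0   # handover not needed
    return 1.0 if action == best_cell else -0.5

q = {(s, a): 0.0 for s in STATES for a in range(N_ACTIONS)}
alpha, eps = 0.1, 0.1
random.seed(0)

for _ in range(2000):
    state = random.choice(STATES)
    # which target is actually best this step (stay if signal is strong)
    best_cell = 0 if state == "strong" else random.randrange(1, N_ACTIONS)
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: q[(state, a)])
    # single-step episodes, so the update has no bootstrapped next-state term
    q[(state, action)] += alpha * (reward(state, action, best_cell)
                                   - q[(state, action)])

# The learned policy should avoid unnecessary handovers on a strong signal.
policy_strong = max(range(N_ACTIONS), key=lambda a: q[("strong", a)])
```

After training, the agent learns to stay on the serving cell when the signal is strong and to hand over otherwise, mirroring how an RL policy can adapt its handover decisions to observed conditions.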
Reconfigurable intelligent surfaces (RISs) have emerged as a promising means of shaping a random wireless channel into a favourable propagation environment by adjusting a large number of low-cost passive reflecting elements. This work considers a narrowband downlink millimeter wave (mmWave) multiple-input multiple-output (MIMO) system aided by an RIS. Large antenna arrays are used to counter the severe propagation loss suffered by mmWave signals, and hybrid precoding, in which precoding is split between the digital and analog domains, is employed to reduce the number of costly and power-consuming radio frequency (RF) chains. Passive beamforming at the RIS is designed jointly with the precoder and combiner through an optimization problem that minimizes the mean square error between the transmitted signal and its estimate at the receiver. The optimization problem is solved by an iterative procedure in which the non-convex reflecting-coefficient design is approximated by first solving the problem without the unit-amplitude constraint on the reflecting elements and then extracting the phases of that unconstrained solution. The proposed design principle is shown to apply to the wideband channel as well. Simulation results show that the proposed design delivers better performance than existing state-of-the-art solutions while incurring lower complexity.
"MMSE-based passive beamforming for reconfigurable intelligent surface aided millimeter wave MIMO" by Prabhat Raj Gautam, Li Zhang, and Pingzhi Fan. IET Communications, vol. 19, no. 1, 6 January 2025. DOI: 10.1049/cmu2.12873. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.12873
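The phase-extraction step described above can be sketched in a few lines: given an unconstrained solution for the reflecting coefficients, the nearest unit-modulus vector (element-wise) is obtained by keeping only the phases. The vector `v` below is random illustrative data standing in for the unconstrained MMSE solution, not output of the actual solver:

```python
import numpy as np

# Sketch of the phase-extraction approximation. The unconstrained solution
# v is random illustrative data here.
rng = np.random.default_rng(42)
v = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# Project onto the unit-modulus feasible set: theta_i = exp(j * arg(v_i)).
# Element-wise this is the closest unit-modulus point to v_i, since
# |v_i - e^{j*phi}|^2 = |v_i|^2 + 1 - 2|v_i|cos(phi - arg v_i)
# is minimised when phi = arg(v_i).
theta = np.exp(1j * np.angle(v))
```

This projection is what lets the iterative procedure sidestep the non-convex unit-amplitude constraint: each iteration solves an easier unconstrained problem, then snaps the result back onto the feasible set.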
Nemalikanti Anand, Saifulla M A, Pavan Kumar Aakula, Raveendra Babu Ponnuru, Rizwan Patan, Chegireddy Rama Prakasha Reddy
As organizations increasingly rely on network services, the prevalence and severity of Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks have emerged as significant threats. The cornerstone of effectively addressing these challenges lies in the timely and precise detection capabilities offered by advanced intrusion detection systems (IDS). Hence, an innovative IDS framework is introduced that integrates the extended Berkeley Packet Filter (eBPF) with machine learning algorithms—specifically Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), and TwinSVM—enabling real-time detection of DDoS attacks. This solution provides a robust and scalable IDS framework to combat DoS and DDoS threats with high efficiency, leveraging eBPF's capabilities within the Linux kernel to bypass typical user-space constraints. The methodology encompasses several key steps: (a) collection of data from the well-known CIC-IDS-2017 repository; (b) processing the raw data through a series of steps including transformation, cleaning, reduction, and discretization; (c) utilizing an ANOVA F-test to extract critical features from the preprocessed data; (d) application of the ML algorithms (DT, RF, SVM, and TwinSVM) to analyze the extracted features for potential intrusion; (e) implementing an eBPF program to capture network traffic and harness trained model parameters for efficient attack detection directly within the kernel. The experimental results reveal accuracy rates of 99.38%, 99.44%, 88.73%, and 93.82% for DT, RF, SVM, and TwinSVM, respectively, alongside precision values of 99.71%, 99.65%, 84.31%, and 98.49%. This high-speed, accurate detection model is well suited for high-traffic environments such as data centers. Furthermore, its foundational architecture paves the way for future advancements, including the potential integration of eBPF with XDP to achieve even lower-latency packet processing. The experimental code is available at: https://github.com/NemalikantiAnand/Project.
"Enhancing intrusion detection against denial of service and distributed denial of service attacks: Leveraging extended Berkeley packet filter and machine learning algorithms" by Nemalikanti Anand, Saifulla M A, Pavan Kumar Aakula, Raveendra Babu Ponnuru, Rizwan Patan, and Chegireddy Rama Prakasha Reddy. IET Communications, vol. 19, no. 1, 6 January 2025. DOI: 10.1049/cmu2.12879. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.12879
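Step (c) of the pipeline can be made concrete with a small, self-contained sketch (not the authors' code): features are ranked by a one-way ANOVA F-statistic and the top-k retained. The synthetic two-class "traffic" data below is illustrative, with feature 0 made informative on purpose:

```python
import numpy as np

# Illustrative sketch of ANOVA F-test feature selection on synthetic
# two-class data (benign = 0, attack = 1). Feature 0 is deliberately
# shifted between classes so it should rank highest.
rng = np.random.default_rng(7)
n, n_features = 400, 5
y = rng.integers(0, 2, size=n)
X = rng.standard_normal((n, n_features))
X[:, 0] += 3.0 * y                       # make feature 0 informative

def anova_f(X, y):
    """One-way F-statistic per feature: between-class vs. within-class variance."""
    classes = np.unique(y)
    grand = X.mean(axis=0)
    ss_between = sum((y == c).sum() * (X[y == c].mean(axis=0) - grand) ** 2
                     for c in classes)
    ss_within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                    for c in classes)
    df_b, df_w = len(classes) - 1, len(y) - len(classes)
    return (ss_between / df_b) / (ss_within / df_w)

scores = anova_f(X, y)
top_k = np.argsort(scores)[::-1][:2]     # indices of the 2 best features
```

Only the selected features would then be fed to the classifiers (DT, RF, SVM, TwinSVM), shrinking the model that the in-kernel eBPF detection path has to evaluate.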
Mustafa Raad Kadhim, Guangxi Lu, Yinong Shi, Jianbo Wang, Wu Kui
Advanced wireless communication is important in distribution systems for sharing information among Internet of Things (IoT) edges. Artificial intelligence (AI) analyzes the generated IoT data to make these decisions, ensuring efficient and effective operations. These technologies face significant security challenges, such as eavesdropping and adversarial attacks. Recent studies have addressed this issue by using clustering analysis (CA) to uncover hidden patterns and provide AI models with clear interpretations. However, the high volume of overlapping samples in IoT data affects the partitioning, interpretation, and reliability of CA. Recent CA models have integrated machine learning techniques to address these issues but struggle within the limited resources of IoT environments. These challenges are addressed by proposing a novel unsupervised lightweight distance clustering (DC) model based on data separation (