Hooman Bavarsad Salehpour, Hamid Haj Seyyed Javadi, Parvaneh Asghari, Mohammad Ebrahim Shiri Ahmad Abadi
In data mining, extracting frequent patterns from large datasets remains a demanding task, compounded by temporal and spatial complexity. While the Apriori algorithm is seminal in this area, its limitations become pronounced on larger datasets. In response, we introduce a solution that leverages parallel network topologies and GPUs. Our method has two salient features: (1) the use of parallel processing to reach optimal results faster and (2) the integration of the cat and mouse-based optimizer (CMBO), a metaheuristic that mirrors the natural dynamics between predatory cats and evasive mice. The optimizer follows a two-phase model: an initial pursuit by the cats and a subsequent evasion by the mice, with agents classified by their objective-function scores. Complementing this, our architecture combines dual Nvidia graphics cards in a parallel configuration, establishing a clear advantage over conventional CPUs. Together, these elements not only address the inherent shortcomings of the Apriori algorithm but also improve the extraction of association rules, pinpointing frequent patterns with greater precision. A comprehensive evaluation across a range of network topologies clarifies their respective merits and drawbacks. Benchmarked against the Apriori algorithm, our method is markedly faster and more effective, marking a significant step forward in data-mining research.
Published as "Improvement of Apriori Algorithm Using Parallelization Technique on Multi-CPU and GPU Topology," Wireless Communications and Mobile Computing, 2024-05-31. DOI: https://doi.org/10.1155/2024/7716976
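For reference, the baseline candidate-generate-and-prune loop of Apriori that the paper parallelizes can be sketched sequentially (a minimal version; the CMBO and GPU components are beyond a short example):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Toy sequential Apriori: returns frequent itemsets (frozensets)
    mapped to their support counts."""
    n = len(transactions)
    # start from all 1-itemsets observed in the data
    current = {frozenset([i]) for t in transactions for i in t}
    freq, k = {}, 1
    while current:
        # count candidate support in one pass over the transactions
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: s for c, s in counts.items() if s / n >= min_support}
        freq.update(survivors)
        # join step: build (k+1)-itemset candidates from surviving k-itemsets
        keys = list(survivors)
        current = {a | b for a in keys for b in keys if len(a | b) == k + 1}
        # prune step: every k-subset of a candidate must itself be frequent
        current = {c for c in current
                   if all(frozenset(s) in survivors for s in combinations(c, k))}
        k += 1
    return freq
```

With `min_support = 0.5` over four small transactions, only itemsets appearing in at least two of them survive each round.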
Shoney Sebastian, Iyyappan. MIn, Sultan Ahmad, Mohammad Maqbool Waris, Hikmat A. M. Abdeljaber, Jabeen Nazeer
Cloud computing has received a resounding welcome. It emerged from methodical research in web services, distributed computing, utility computing, and virtualization, and it offers several benefits, including lower costs, reduced space requirements, and easier management. These advantages bring a significant number of new users to the cloud platform every day. Because cloud computing is an Internet-based paradigm, it must also cope with overwhelming demand through effective load balancing. Only a small number of studies focus on load-balancing problems specific to cloud platforms; most load-balancing research addresses other domains, such as parallel, distributed, and grid computing. Cloud computing is commonly divided into three basic service models: Infrastructure as a Service (IaaS), Platform as a Service, and Software as a Service, and the load-balancing techniques used differ notably across them. This work presents a hybrid agent-based load-balancing approach for the IaaS platform and compares its results with an existing method.
Published as "Hybrid Agent-Based Load-Balancing Approach Used in an IaaS Platform," Wireless Communications and Mobile Computing, 2024-04-26. DOI: https://doi.org/10.1155/2024/2357142
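The abstract does not detail the hybrid agent-based algorithm itself; as a baseline of the kind IaaS balancers are typically measured against, here is a minimal least-loaded dispatcher (the task/VM model and all names are illustrative, not from the paper):

```python
import heapq

def dispatch(tasks, vm_count):
    """Baseline least-loaded dispatch: each incoming task goes to the VM
    with the smallest accumulated load so far."""
    heap = [(0.0, vm) for vm in range(vm_count)]  # (current load, VM id)
    heapq.heapify(heap)
    assignment = []
    for cost in tasks:
        load, vm = heapq.heappop(heap)   # lightest-loaded VM
        assignment.append(vm)
        heapq.heappush(heap, (load + cost, vm))
    return assignment
```

For tasks of cost `[5, 3, 2, 7]` on two VMs, the dispatcher alternates so that both VMs end up with roughly equal load.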
Chang Liu, Yue Hong, Jin Wang, Chang Liu Sr., Li Tian, Jiangpei Xu
To address the jitter and low network throughput caused by background flows interfering with IQ traffic in mobile fronthaul networks, this paper proposes a new scheduling model for background flows, named the hierarchical crossover traffic scheduling mechanism based on the time-aware shaper (HC-TAS), which improves on the traditional TAS. Within this model, we design an inbound scheduling algorithm based on frame-length matching and an outbound scheduling algorithm based on queue status, ensuring that smaller data frames are not blocked by large ones. This greatly improves timeslot utilization during scheduling and reduces the jitter introduced by background flows. To verify its performance, we conducted experiments in a simulated fronthaul network conforming to IEEE 802.1CM. The results show that, with jitter guaranteed to be zero, our scheme achieves lower maximum end-to-end delay and higher link utilization than two mainstream scheduling schemes, Comb-FITting and TAS + Preemption. HC-TAS thus meets the low-jitter, high-bandwidth-utilization requirements of 5G fronthaul networks, and the results also provide a technical basis for the application and development of general time-sensitive networks.
Published as "Hierarchical Cross Traffic Scheduling Based on Time-Aware Shapers for Mobile Time-Sensitive Fronthaul Network," Wireless Communications and Mobile Computing, 2024-04-15. DOI: https://doi.org/10.1155/2024/8882006
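The inbound frame-length-matching idea can be sketched as a greedy fit of queued frames into the remaining timeslot, so small frames slip into gaps a large frame would otherwise waste (an illustrative reading of the abstract, not the authors' exact algorithm):

```python
def fill_timeslot(frames, slot_bytes):
    """Greedy frame-length matching: repeatedly transmit the largest
    queued frame that still fits in the remaining slot budget."""
    queue = sorted(frames, reverse=True)   # largest frames first
    sent, remaining = [], slot_bytes
    for f in list(queue):                  # iterate over a copy while removing
        if f <= remaining:
            sent.append(f)
            remaining -= f
            queue.remove(f)
    return sent, queue   # (transmitted this slot, still waiting)
```

With a 2000-byte slot and frames of 1500, 800, 400, and 200 bytes, the 1500- and 400-byte frames are sent together instead of leaving 500 bytes of the slot idle.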
Jorge F. Arellano, Carlos Daniel Altamirano, Henry Ramiro Carvajal Mora, Nathaly Verónica Orozco Garzón, Fernando Darío Almeida García
Massive multiple-input-multiple-output (M-MIMO) offers remarkable advantages in terms of spectral, energy, and hardware efficiency for future wireless systems. However, its performance relies on the accuracy of the channel state information (CSI) available at the transceivers, which makes channel estimation pivotal in M-MIMO systems. Prior research has focused on evaluating channel estimation methods under the assumption of spatially uncorrelated fading channel models. In this study, we evaluate the performance of the minimum-mean-square-error (MMSE) estimator in terms of the normalized mean square error (NMSE) in the uplink of M-MIMO systems over spatially correlated Rician fading. The NMSE allows easy comparison of different M-MIMO configurations, serving as a relative performance indicator; it is also an advantageous metric due to its normalization, scale invariance, and consistent interpretation across diverse scenarios. In the system model, we assume imperfect channel estimation and that the random angles in the correlation model follow a Gaussian distribution. For this scenario, we derive an accurate closed-form expression for the NMSE, which is validated via Monte Carlo simulations. Our numerical results reveal that as the Rician K-factor decreases, approaching Rayleigh fading conditions, the NMSE improves. Additionally, spatial correlation and a reduction in the antenna-array interelement spacing lead to a reduction in NMSE, further enhancing overall system performance.
Published as "On the Performance of MMSE Channel Estimation in Massive MIMO Systems over Spatially Correlated Rician Fading Channels," Wireless Communications and Mobile Computing, 2024-04-12. DOI: https://doi.org/10.1155/2024/5445725
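As a sanity check on the metric itself, the NMSE of the scalar MMSE estimator can be reproduced by Monte Carlo for the much simpler uncorrelated Rayleigh case (a special case for illustration, not the paper's correlated Rician closed form):

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma2 = 200_000, 0.1              # trials, pilot noise variance
# scalar Rayleigh-fading channel h ~ CN(0,1); pilot observation y = h + n
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = h + n
h_hat = y / (1.0 + sigma2)            # scalar MMSE estimator for this model
# NMSE = E|h - h_hat|^2 / E|h|^2 ; theory gives sigma2 / (1 + sigma2)
nmse_mc = np.mean(np.abs(h - h_hat) ** 2) / np.mean(np.abs(h) ** 2)
nmse_theory = sigma2 / (1.0 + sigma2)
print(nmse_mc, nmse_theory)
```

Lowering `sigma2` (better pilots) drives both values toward zero, mirroring the NMSE trends the paper reports.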
In contemporary wireless communication systems, multicarrier modulation schemes have become widely adopted over single-carrier techniques due to their improved ability to handle multipath fading channels, leading to enhanced spectral efficiency. Orthogonal frequency division multiplexing (OFDM), a prevalent multicarrier scheme in 4G, is favored for its ease of implementation, interference resilience, and high data rates. However, it falls short of the requirements of 5G and beyond due to limitations such as out-of-band (OOB) emissions and cyclic prefixes. This paper examines the filter bank multicarrier (FBMC) and universal filtered multicarrier (UFMC) schemes with quadrature amplitude modulation (QAM) and phase shift keying (PSK) waveforms over additive white Gaussian noise (AWGN), Rayleigh fading, and Rician channels. The objective is to enhance UFMC performance with reduced complexity through a new filtering approach. The proposed scheme, incorporating the Tukey filtering technique, reduces the peak-to-average power ratio (PAPR) and improves the bit error ratio (BER) compared with the original UFMC signal, without requiring additional power; specifically, the UFMC system with Tukey filtering achieves a notable net gain of 5 dB. Simulation results demonstrate that various filter types in FBMC and UFMC systems, combined with QAM modulation, significantly reduce OOB emissions compared with conventional systems. In terms of BER, the Tukey window achieved roughly 10⁻⁶ at 15 dB SNR in UFMC, outperforming FBMC.
Sourav Debnath, Samin Ahmed, S. M. Shamsul Alam. Published as "Analysis of Filtered Multicarrier Modulation Techniques Using Different Windows for 5G and Beyond Wireless Systems," Wireless Communications and Mobile Computing, 2024-03-26. DOI: https://doi.org/10.1155/2024/9428292
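The OOB benefit of a Tukey taper can be illustrated in isolation by comparing the spectral sidelobes of a rectangular pulse and a Tukey-windowed one (a simplified sketch of the windowing effect, not the paper's full UFMC chain; window length and band edges are arbitrary choices):

```python
import numpy as np

def tukey(n, alpha=0.5):
    """Tukey (tapered-cosine) window built directly from its definition:
    cosine ramps of total fraction alpha, flat top in between."""
    t = np.linspace(0.0, 1.0, n)
    w = np.ones(n)
    edge = alpha / 2.0
    rise, fall = t < edge, t > 1.0 - edge
    w[rise] = 0.5 * (1.0 + np.cos(np.pi * (2.0 * t[rise] / alpha - 1.0)))
    w[fall] = 0.5 * (1.0 + np.cos(np.pi * (2.0 * t[fall] / alpha - 2.0 / alpha + 1.0)))
    return w

n, pad = 128, 4096
spec_r = np.abs(np.fft.fft(np.ones(n), pad)); spec_r /= spec_r.max()
spec_t = np.abs(np.fft.fft(tukey(n), pad));   spec_t /= spec_t.max()
band = slice(pad // 8, pad // 4)   # a region well outside the mainlobe
oob_rect = 10 * np.log10(np.mean(spec_r[band] ** 2))
oob_tukey = 10 * np.log10(np.mean(spec_t[band] ** 2) + 1e-30)
print(round(oob_rect, 1), "dB vs", round(oob_tukey, 1), "dB")
```

The tapered edges remove the sharp discontinuity of the rectangular pulse, so the out-of-band energy falls by tens of dB, which is the mechanism behind the OOB reduction reported above.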
Susovan Chanda, Ashish Kr. Luhach, J. Sharmila Anand Francis, Indranil Sengupta, Diptendu Sinha Roy
The exponential growth of the Internet of Things (IoT) has led to a surge in data generation, critical for business decisions. Ensuring data authenticity and integrity over unsecured channels is vital, especially given the potentially catastrophic consequences of tampered data. However, IoT's resource constraints and heterogeneous ecosystem present unique security challenges. Traditional public key infrastructure offers strong security but is resource intensive, while existing cloud-based solutions lack comprehensive security and give rise to latency and wasted energy. In this paper, we propose a universal authentication scheme using edge computing, incorporating fully hashed Elliptic Curve Menezes–Qu–Vanstone (ECMQV) and PUF. This approach provides a scalable and reliable solution, and it secures against active attacks, addressing man-in-the-middle and impersonation threats. Experimental validation on a Zybo board confirms its effectiveness, offering a robust security solution for the IoT landscape.
Published as "An Elliptic Curve Menezes–Qu–Vanston-Based Authentication and Encryption Protocol for IoT," Wireless Communications and Mobile Computing, 2024-03-22. DOI: https://doi.org/10.1155/2024/5998163
Data collection and energy consumption are critical concerns in wireless sensor networks (WSNs), and both clustering and routing algorithms are used to address them. This paper therefore proposes an intelligent energy-efficient data routing scheme for WSNs utilizing a mobile sink (MS) to save energy and prolong network lifetime. The proposed scheme operates in two major modes: configure and operational. During the configure mode, a novel clustering mechanism is applied once, and a prescheduling cluster head (CH) selection is introduced to ensure uniform energy expenditure among sensor nodes (SNs). The scheduling technique selects successive CHs for each cluster throughout the WSN's lifetime rounds, managed at the base station (BS) to minimize SN energy consumption. In the operational mode, two main objectives are achieved: each CH senses and gathers data with minimal message overhead, and an optimal path for the MS is established using a genetic algorithm. Finally, the MS uploads the gathered data to the BS. Extensive simulations verify the efficiency of the proposed scheme in terms of stability period, network lifetime, average energy consumption, data-transmission latency, message overhead, and throughput. The results demonstrate that the proposed scheme significantly outperforms the most recent state-of-the-art methods, and they are substantiated through statistical validation via hypothesis testing with ANOVA as well as post hoc analysis.
Hassan Al-Mahdi, Mohamed Elshrkawey, Shymaa Saad, Safa Abdelaziz. Published as "An Intelligent Energy-Efficient Data Routing Scheme for Wireless Sensor Networks Utilizing Mobile Sink," Wireless Communications and Mobile Computing, 2024-03-18. DOI: https://doi.org/10.1155/2024/7384537
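The genetic-algorithm leg of the scheme — finding a short visiting order for the mobile sink over the cluster heads — is essentially a small travelling-salesman problem. A toy GA with order crossover, swap mutation, and elitism (all parameters invented for illustration) might look like:

```python
import math
import random

def tour_len(order, pts):
    """Length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def ga_tour(pts, pop=60, gens=200, seed=42):
    """Toy GA for the mobile-sink visiting order over cluster-head positions."""
    rng = random.Random(seed)
    n = len(pts)
    popu = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda o: tour_len(o, pts))
        next_gen = popu[:10]                    # elitism: keep the 10 best
        while len(next_gen) < pop:
            a, b = rng.sample(popu[:30], 2)     # parents from the fitter half
            i, j = sorted(rng.sample(range(n), 2))
            seg = a[i:j]                        # order crossover
            child = seg + [g for g in b if g not in seg]
            if rng.random() < 0.3:              # swap mutation
                x, y = rng.sample(range(n), 2)
                child[x], child[y] = child[y], child[x]
            next_gen.append(child)
        popu = next_gen
    return min(popu, key=lambda o: tour_len(o, pts))

# cluster heads placed on a circle for a quick sanity check
chs = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
best = ga_tour(chs)
print(round(tour_len(best, chs), 2))
```

For points on a circle the shortest closed tour simply follows the circle, so the GA's result can be checked against that geometry.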
Karthic Sundaram, Yuvaraj Natarajan, Anitha Perumalsamy, Ahmed Abdi Yusuf Ali
The rapid growth of the Internet of Things (IoT) means a huge amount of sensitive data is constantly being created and transmitted by many devices, making data security a top priority. In the complex network of IoT, intrusion detection becomes a key part of strengthening security. Since IoT environments are exposed to a wide range of cyber threats, intrusion detection systems (IDS) are crucial for quickly finding and handling potential intrusions as they happen. IDS datasets can have anywhere from a few features to several hundred or even thousands, and managing such large datasets is a major challenge, demanding substantial computing power and long processing times. To build an efficient IDS, this article introduces a combined feature selection strategy using recursive feature elimination and information gain; a cascaded long short-term memory (LSTM) network is then used to improve attack classification. The method achieved binary-classification accuracies of 98.96% and 99.30% on the NSL-KDD and UNSW-NB15 datasets, respectively. This research provides a practical strategy for improving the effectiveness and accuracy of intrusion detection in IoT networks.
Published as "A Novel Hybrid Feature Selection with Cascaded LSTM: Enhancing Security in IoT Networks," Wireless Communications and Mobile Computing, 2024-03-13. DOI: https://doi.org/10.1155/2024/5522431
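The information-gain half of the hybrid feature selection can be shown on a toy set of labelled records (the protocol/label data below is invented for illustration): a feature scores highly when knowing its value sharply reduces uncertainty about the class.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """Information gain of a discrete feature w.r.t. the class label:
    H(labels) minus the weighted entropy after splitting on the feature."""
    n = len(labels)
    by_val = {}
    for f, y in zip(feature, labels):
        by_val.setdefault(f, []).append(y)
    return entropy(labels) - sum(len(ys) / n * entropy(ys)
                                 for ys in by_val.values())

# toy traffic records: protocol field vs. attack/normal label
proto  = ["tcp", "tcp", "udp", "udp", "icmp", "icmp"]
labels = ["attack", "attack", "normal", "normal", "attack", "normal"]
print(round(info_gain(proto, labels), 3))   # → 0.667
```

Here `tcp` and `udp` determine the label exactly while `icmp` stays ambiguous, so the gain is 1 − 1/3 ≈ 0.667 bits; a constant feature would score exactly 0.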
Lei Wang, Sijie Tao, Lindong Zhao, Dengyou Zhou, Zhe Liu, Yanbing Sun
This paper focuses on the resource allocation problem of multiplexing two different service types, enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC), in 5G New Radio, using a dynamic numerology structure, mini-slot scheduling, and puncturing to achieve optimal resource allocation. To obtain the optimal channel allocation under URLLC user constraints, the paper establishes a channel model and divides the problem into two convex optimization subproblems: (a) eMBB resource allocation and (b) URLLC scheduling. We also determine the numerology value at the beginning of each time slot with the help of deep reinforcement learning to achieve flexible resource scheduling. The proposed algorithm is verified in simulation, and the results show that, for the same URLLC packet arrivals, the dynamic numerology selection improves the data rate of eMBB users and reduces URLLC latency compared with a fixed-numerology scheme, while the resource allocation preserves the reliability of both URLLC and eMBB communication.
{"title":"Resource Scheduling in URLLC and eMBB Coexistence Based on Dynamic Selection Numerology","authors":"Lei Wang, Sijie Tao, Lindong Zhao, Dengyou Zhou, Zhe Liu, Yanbing Sun","doi":"10.1155/2024/9480388","DOIUrl":"https://doi.org/10.1155/2024/9480388","url":null,"abstract":"This paper focuses on the resource allocation problem of multiplexing two different service scenarios, enhanced mobile broadband (eMBB) and ultrareliable low latency (URLLC) in 5G New Radio, based on dynamic numerology structure, mini-time slot scheduling, and puncturing to achieve optimal resource allocation. To obtain the optimal channel resource allocation under URLLC user constraints, this paper establishes a relevant channel model divided into two convex optimization problems: (a) eMBB resource allocation and (b) URLLC scheduling. We also determine the numerology values at the beginning of each time slot with the help of deep reinforcement learning to achieve flexible resource scheduling. The proposed algorithm is verified in simulation software, and the simulation results show that the dynamic selection of numerologies proposed in this paper can better improve the data transmission rate of eMBB users and reduce the latency of URLLC services compared with the fixed numerology scheme for the same URLLC packet arrival, while the reasonable resource allocation ensures the reliability of URLLC and eMBB communication.","PeriodicalId":501499,"journal":{"name":"Wireless Communications and Mobile Computing","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-28","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140002451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
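The "dynamic numerology" that the abstract above selects per slot trades subcarrier spacing against slot duration: in 5G NR (3GPP TS 38.211), numerology μ scales the subcarrier spacing to 15·2^μ kHz and shrinks the slot to 1/2^μ ms, which is why higher numerologies suit URLLC latency while lower ones suit eMBB throughput. A minimal helper (the function name is ours, not from the paper) showing the standard relationship a scheduler chooses among:

```python
def numerology_params(mu: int):
    """5G NR numerology mu (3GPP TS 38.211): returns subcarrier spacing in kHz,
    slot duration in ms, and slots per 1 ms subframe (14 symbols, normal CP)."""
    if not 0 <= mu <= 4:
        raise ValueError("NR Release 15 defines numerologies mu = 0..4")
    scs_khz = 15 * 2 ** mu            # subcarrier spacing doubles per step
    slot_ms = 1.0 / 2 ** mu           # slot duration halves per step
    slots_per_subframe = 2 ** mu      # more scheduling opportunities per ms
    return scs_khz, slot_ms, slots_per_subframe
```

For instance, moving from μ = 0 to μ = 3 shortens the slot from 1 ms to 0.125 ms, giving URLLC traffic eight scheduling opportunities per subframe instead of one.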
Liangbin Zhu, Ying Shang, Jinglei Li, Yiming Jia, Qinghai Yang
The development of the internet of things (IoT) and 6G has given rise to numerous computation-intensive and latency-sensitive applications, which can be represented as directed acyclic graphs (DAGs). However, executing these applications poses a huge challenge for user equipment (UE) constrained in computational power and battery capacity. In this paper, considering the different requirements of various task scenarios, we aim to optimize the execution latency and energy consumption of the entire mobile edge computing (MEC) system. The system consists of a single UE and multiple heterogeneous MEC servers to improve the execution efficiency of a DAG application. In addition, the execution reliability of a DAG application is treated as a constraint. Building on the strong search capability and Pareto optimality theory of the cuckoo search (CS) algorithm and our previously proposed improved multiobjective cuckoo search (IMOCS) algorithm, we improve the initialization process and the update strategy of the external archive, and propose a reliability-constrained multiobjective cuckoo search (RCMOCS) algorithm. According to the simulation results, the proposed RCMOCS algorithm obtains better Pareto frontiers and achieves satisfactory performance while ensuring execution reliability.
{"title":"Reliability-Constrained Task Scheduling for DAG Applications in Mobile Edge Computing","authors":"Liangbin Zhu, Ying Shang, Jinglei Li, Yiming Jia, Qinghai Yang","doi":"10.1155/2024/6980514","DOIUrl":"https://doi.org/10.1155/2024/6980514","url":null,"abstract":"The development of the internet of things (IoT) and 6G has given rise to numerous computation-intensive and latency-sensitive applications, which can be represented as directed acyclic graphs (DAGs). However, achieving these applications poses a huge challenge for user equipment (UE) that are constrained in computational power and battery capacity. In this paper, considering different requirements in various task scenarios, we aim to optimize the execution latency and energy consumption of the entire mobile edge computing (MEC) system. The system consists of single UE and multiple heterogeneous MEC servers to improve the execution efficiency of a DAG application. In addition, the execution reliability of a DAG application is viewed as a constraint. Based on the strong search capability and Pareto optimality theory of the cuckoo search (CS) algorithm and our previously proposed improved multiobjective cuckoo search (IMOCS) algorithm, we improve the initialization process and the update strategy of the external archive, and propose a reliability-constrained multiobjective cuckoo search (RCMOCS) algorithm. 
According to the simulation results, our proposed RCMOCS algorithm is able to obtain better Pareto frontiers and achieve satisfactory performance while ensuring execution reliability.","PeriodicalId":501499,"journal":{"name":"Wireless Communications and Mobile Computing","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139578084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
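The Pareto-frontier machinery that the abstract above relies on (latency and energy as two minimization objectives, with an external archive of nondominated solutions) reduces to a dominance check plus a filter. A sketch of that core idea, not the paper's RCMOCS implementation, with illustrative function names:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization) if a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of (latency, energy) tuples --
    conceptually what an external-archive update keeps after each iteration."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

A reliability constraint like the paper's would simply exclude candidate solutions whose estimated reliability falls below the threshold before this filter is applied.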