Next-cell prediction with LSTM based on vehicle mobility for 5G mc-IoT slices
Pub Date: 2024-09-16, DOI: 10.1007/s11235-024-01214-6
Asma Belhadj, Karim Akilal, Siham Bouchelaghem, Mawloud Omar, Sofiane Aissani

Network slicing is a 5G enabler that can help meet the requirements of mission-critical Machine Type Communications (mcMTC) in critical IoT applications. In applications with high mobility, however, slicing is also affected by users' movement, and this dynamicity must be handled, especially for critical slices that require fast and reliable End-to-End (E2E) delivery. To fulfill the desired Quality of Service (QoS) of critical slices as their users move, this paper introduces mobility awareness for such applications through mobility prediction, whereby the network can determine which cell a user is in, in near real-time. The proposed next-cell mobility prediction framework is formulated as a multi-class classification task that exploits Long Short-Term Memory (LSTM) networks and collected historical mobility profiles of moving users to enable more accurate short- and long-term predictions of the candidate next cell. Within the scope of high-mobility mission-critical use cases, we then evaluate the effectiveness of the proposed LSTM classifier in vehicular networks, using a vehicle mobility dataset obtained from SUMO for the urban environment of Bejaia, Algeria. Finally, we conducted a set of experiments on the classifier using datasets with various history lengths; the results validate the effectiveness of the predictions for short-term mobility and show that the classifier performs better on datasets with longer histories. Compared with traditional Machine Learning (ML) classification algorithms, the proposed LSTM model delivered the most accurate predictions.
Secure positioning of wireless sensor networks against wormhole attacks
Pub Date: 2024-09-16, DOI: 10.1007/s11235-024-01213-7
Xiuwu Yu, Xun Wang, Yong Liu
In wireless sensor networks, the location of nodes is closely related to all tasks, so node localization must be both accurate and secure. The DV-HOP algorithm localizes nodes based on the number of hops between them, but wormhole attacks replay information through a wormhole link, which severely distorts this hop-count parameter. To address this security flaw, the DV-HOP algorithm is improved in three steps: first, the presence of a wormhole attack is detected from the communication characteristics between nodes; then, the affected beacon nodes use a correction formula to fix the corrupted hop-count information and retransmit the correct values; finally, the sensor nodes further evaluate and determine the location of the wormhole connection so that it can be avoided in subsequent applications. Experimental simulations show that the proposed method improves the average localization accuracy by about 51.3% and 12.7% compared with the DV-HOP and LBDV algorithms without security improvements, respectively, confirming that it is robust to wormhole attacks and reduces the localization errors they cause.
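As context for the hop-count correction above, here is a minimal numpy sketch of the classical DV-HOP ranging step the attack targets: beacons turn hop counts into an average per-hop distance, and a node's range is estimated as hops times that distance. All positions and hop counts are made-up values; the paper's detection and correction formulas are not reproduced.

```python
# Minimal DV-HOP ranging sketch (illustrative values only).
import numpy as np

beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # known beacon positions
# hops[i][j]: flooded hop count between beacon i and beacon j (assumed)
hops = np.array([[0, 4, 4],
                 [4, 0, 6],
                 [4, 6, 0]])

# Average hop size at beacon 0: sum of true distances / sum of hop counts.
d = np.linalg.norm(beacons[0] - beacons[1:], axis=1)   # distances to other beacons
hop_size = d.sum() / hops[0, 1:].sum()                  # meters per hop (~25 here)

hops_to_unknown = 3                                     # hop count heard by a sensor node
est_range = hops_to_unknown * hop_size
print(f"estimated range to beacon 0: {est_range:.1f} m")
# A wormhole deflates hops_to_unknown, so est_range collapses; the paper's
# correction restores the hop count before this ranging step.
```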
{"title":"Secure positioning of wireless sensor networks against wormhole attacks","authors":"Xiuwu Yu, Xun Wang, Yong Liu","doi":"10.1007/s11235-024-01213-7","DOIUrl":"https://doi.org/10.1007/s11235-024-01213-7","url":null,"abstract":"<p>In wireless sensor networks, the location of nodes is closely related to all tasks, so the accuracy and security of node localization are highly required. The core of the DV-HOP algorithm is based on the number of hops between nodes for localization, but wormhole attacks replay information through wormholes, and this attack greatly affects the parameter hop count. To address this security flaw, the DV-HOP algorithm is improved by first detecting the presence of wormhole attacks based on the communication characteristics between nodes, then the affected beacon nodes use a correction formula to fix the incorrect hop count information and transmit the correct information again, and finally the sensor nodes further evaluate and determine the location of the wormhole connection to prevent it in subsequent applications. Through experimental simulations, the proposed method improves the average localization accuracy by about 51.3 and 12.7<span>(%)</span>, respectively, compared with the DV-HOP and LBDV algorithms without security improvements, which confirms that the proposed method is robust to wormhole attacks and reduces the localization errors affected by wormhole attacks.</p>","PeriodicalId":51194,"journal":{"name":"Telecommunication Systems","volume":"9 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Safeguarding the Internet of Health Things: advancements, challenges, and trust-based solution
Pub Date: 2024-09-11, DOI: 10.1007/s11235-024-01211-9
Misbah Shafi, Rakesh Kumar Jha, Sanjeev Jain, Mantisha Gupta, Zeenat Zahra

Applications of the Internet of Health Things (IoHT) have attracted considerable attention as a result of recent Internet of Things (IoT) advancements. Across its many fields, such as remote medical professional assistance, health-charting history, integrated care management, cost reduction, disease management, disability management, home care management, individual healthcare assistance, health tracking, drug availability management, healthcare tracking management, and telesurgery, the IoHT network has drastically advanced the standard of living. Despite these numerous fields of application, balancing security and privacy is one of the most pressing problems where life-critical solutions are concerned. Several solutions exist for maintaining security in the IoHT network, and this paper reviews the most recent security enhancement schemes. Furthermore, the latest challenges in the IoHT network are discussed, and an extensive survey of future research directions in IoHT network security is presented. Additionally, we propose a security architecture for IoHT systems based on trust assessment, in which an artificial intelligence mechanism evaluates trust so that the security of the network is enhanced adaptively. The resulting framework offers practical solutions for applications such as telesurgery by adjusting security measures based on real-time trust assessments, setting a new standard in IoHT security and guiding future research.
Optimized task offloading for federated learning based on β-skeleton graph in edge computing
Pub Date: 2024-09-09, DOI: 10.1007/s11235-024-01216-4
Mahdi Fallah, Pedram Salehpour
Edge computing is gaining prominence as a solution for IoT data management and processing. Task offloading, which distributes the processing load across edge devices, is a key strategy to enhance the efficiency of edge computing. However, traditional methods often overlook the dynamic nature of the edge environment and the interactions between devices. While reinforcement learning-based task offloading shows promise, it can sometimes lead to an imbalance by favoring weaker servers. To address these issues, this paper presents a novel task offloading method for federated learning that leverages the β-skeleton graph in edge computing. This model takes into account spatial and temporal dynamics, optimizing task assignments based on both the processing and communication capabilities of the edge devices. The proposed method significantly outperforms five state-of-the-art methods, showcasing substantial improvements in both initial and long-term performance. Specifically, this method demonstrates a 63.46% improvement over the Binary-SPF-EC method in the initial rounds and achieves an average improvement of 76.518% after 400 rounds. Moreover, it excels in sub-rewards and total latency reduction, underscoring its effectiveness in optimizing edge computing communication and processing tasks. These results underscore the superiority of the proposed method, highlighting its potential to enhance the efficiency and scalability of edge computing systems. This approach, by effectively addressing the dynamic nature of the edge environment and optimizing task offloading, contributes to the development of more robust and efficient edge computing frameworks. This work paves the way for future advancements in federated learning and edge computing integration, promising better management and utilization of IoT data.
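For readers unfamiliar with the graph the method is named after, here is a minimal numpy sketch of the lune-based β-skeleton test for β ≥ 1: devices p and q are linked iff no third device lies inside both disks of radius βd/2 that define the lune. The coordinates and β value are illustrative, and the paper's offloading logic on top of this graph is not shown.

```python
# Minimal lune-based beta-skeleton (beta >= 1) over edge-device positions.
import numpy as np

def beta_skeleton_edges(points, beta=1.5):
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p, q = pts[i], pts[j]
            r = beta * np.linalg.norm(p - q) / 2.0
            c1 = (1 - beta / 2) * p + (beta / 2) * q   # centers of the two lune disks
            c2 = (beta / 2) * p + (1 - beta / 2) * q
            blocked = any(
                np.linalg.norm(pts[k] - c1) < r and np.linalg.norm(pts[k] - c2) < r
                for k in range(n) if k not in (i, j)
            )
            if not blocked:
                edges.append((i, j))
    return edges

devices = [(0, 0), (4, 0), (2, 1), (5, 5)]   # assumed edge-device positions
print(beta_skeleton_edges(devices))           # candidate offloading links
```

Setting beta=1 recovers the Gabriel graph and beta=2 the relative neighborhood graph, so β tunes how sparse the candidate-link graph is.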
{"title":"Optimized task offloading for federated learning based on β-skeleton graph in edge computing","authors":"Mahdi Fallah, Pedram Salehpour","doi":"10.1007/s11235-024-01216-4","DOIUrl":"https://doi.org/10.1007/s11235-024-01216-4","url":null,"abstract":"<p>Edge computing is gaining prominence as a solution for IoT data management and processing. Task offloading, which distributes the processing load across edge devices, is a key strategy to enhance the efficiency of edge computing. However, traditional methods often overlook the dynamic nature of the edge environment and the interactions between devices. While reinforcement learning-based task offloading shows promise, it can sometimes lead to an imbalance by favoring weaker servers. To address these issues, this paper presents a novel task offloading method for federated learning that leverages the β-skeleton graph in edge computing. This model takes into account spatial and temporal dynamics, optimizing task assignments based on both the processing and communication capabilities of the edge devices. The proposed method significantly outperforms five state-of-the-art methods, showcasing substantial improvements in both initial and long-term performance. Specifically, this method demonstrates a 63.46% improvement over the Binary-SPF-EC method in the initial rounds and achieves an average improvement of 76.518% after 400 rounds. Moreover, it excels in sub-rewards and total latency reduction, underscoring its effectiveness in optimizing edge computing communication and processing tasks. These results underscore the superiority of the proposed method, highlighting its potential to enhance the efficiency and scalability of edge computing systems. This approach, by effectively addressing the dynamic nature of the edge environment and optimizing task offloading, contributes to the development of more robust and efficient edge computing frameworks. This work paves the way for future advancements in federated learning and edge computing integration, promising better management and utilization of IoT data.</p>","PeriodicalId":51194,"journal":{"name":"Telecommunication Systems","volume":"6 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Noise robust automatic speaker verification systems: review and analysis
Pub Date: 2024-09-06, DOI: 10.1007/s11235-024-01212-8
Sanil Joshi, Mohit Dua
Like other biometric systems, Automatic Speaker Verification (ASV) systems are vulnerable to spoofing attacks, so it is important to develop countermeasures to handle them. Two main types of spoofing attacks are considered: logical access attacks and presentation attacks. Over the last few decades, various researchers have proposed systems for handling these kinds of attacks. However, the noise handling capability of ASV systems is a major concern, as the presence of noise may cause an ASV system to falsely evaluate an original human voice as spoofed audio. The main objective of this paper is therefore to review and analyze the noise-robust ASV systems proposed by different researchers in recent years. The paper discusses the front-end and back-end approaches used to develop these systems, with emphasis on noise handling techniques. Various kinds of noise, such as babble, white, background, pop, and channel noise, affect the development of an ASV system. The survey starts with a discussion of the components of an ASV system. It then classifies and discusses enhanced front-end feature extraction techniques, such as phase-based, deep-learning-based, and magnitude-based feature extraction, which have proven robust in handling noise. Next, it highlights the deep learning and other baseline models used in the back-end to classify audio correctly. Finally, it highlights the challenges and issues that still exist in noise handling and detection when developing noise-robust ASV systems. On the basis of this survey, the noise robustness of ASV systems remains a challenging issue, and researchers should consider robustness against noise alongside spoofing attacks.
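To make the magnitude-based front ends mentioned above concrete, here is a minimal numpy sketch of spectral subtraction: a noise floor estimated from leading frames is subtracted from the magnitude spectrum before log features are taken. The frame sizes and the leading-frames noise heuristic are textbook-style assumptions, not a specific method from the survey.

```python
# Minimal spectral-subtraction front end (illustrative parameters).
import numpy as np

def frames(signal, size=400, hop=160):
    idx = np.arange(0, len(signal) - size, hop)
    return np.stack([signal[i:i + size] * np.hanning(size) for i in idx])

def denoised_log_magnitude(signal, leading_noise_frames=5):
    mag = np.abs(np.fft.rfft(frames(signal), axis=1))    # magnitude spectrum per frame
    noise = mag[:leading_noise_frames].mean(axis=0)       # noise-floor estimate
    clean = np.maximum(mag - noise, 1e-10)                # spectral subtraction
    return np.log(clean)                                  # log-magnitude features

rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0                            # 1 s at 16 kHz
noisy_speech = np.sin(2 * np.pi * 300 * t) + 0.3 * rng.standard_normal(16000)
features = denoised_log_magnitude(noisy_speech)
print(features.shape)   # (num_frames, num_bins) -> input to a back-end classifier
```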
{"title":"Noise robust automatic speaker verification systems: review and analysis","authors":"Sanil Joshi, Mohit Dua","doi":"10.1007/s11235-024-01212-8","DOIUrl":"https://doi.org/10.1007/s11235-024-01212-8","url":null,"abstract":"<p>Like any other biometric systems, Automatic Speaker Verification (ASV) systems are also vulnerable to the spoofing attacks. Hence, it is important to develop the countermeasures in order to handle these attacks. In spoofing mainly two types of attacks are considered, logical access attacks and presentation attacks. In the last few decades, several systems have been proposed by various researchers for handling these kinds of attacks. However, noise handling capability of ASV systems is of major concern, as the presence of noise may make an ASV system to falsely evaluate the original human voice as the spoofed audio. Hence, the main objective of this paper is to review and analyze the various noise robust ASV systems proposed by different researchers in recent years. The paper discusses the various front end and back-end approaches that have been used to develop these systems with putting emphasis on the noise handling techniques. Various kinds of noises such as babble, white, background noises, pop noise, channel noises etc. affect the development of an ASV system. This survey starts with discussion about the various components of ASV system. Then, the paper classifies and discusses various enhanced front end feature extraction techniques like phase based, deep learning based, magnitude-based feature extraction techniques etc., which have been proven to be robust in handling noise. Secondly, the survey highlights the various deep learning and other baseline models that are used in backend, for classification of the audio correctly. Finally, it highlights the challenges and issues that still exist in noise handling and detection, while developing noise robust ASV systems. Therefore, on the basis of the proposed survey it can be interpreted that the noise robustness of ASV system is the challenging issue. Hence the researchers should consider the robustness of ASV against noise along with spoofing attacks.</p>","PeriodicalId":51194,"journal":{"name":"Telecommunication Systems","volume":"12 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Studying data loss, nonlinearity, and modulation effects in drone swarm channels with artificial intelligence
Pub Date: 2024-08-22, DOI: 10.1007/s11235-024-01210-w
Volodymyr Kharchenko, Andrii Grekhov, Vasyl Kondratiuk

Drones can be used to create wireless communication networks in swarms using Artificial Intelligence (AI). Their mobility and line-of-sight capability have made them key solutions for civil and military applications. AI itself is developing rapidly and is being applied successfully thanks to the huge amount of data now available, which has led to its integration into networks and its use in solving problems associated with drone swarms. Since AI systems have to process huge amounts of information in real time, data packet loss increases and communication with the control center may be lost. This article is devoted to calculating packet losses and the impact of traffic parameters on data exchange in swarms. Original swarm models were created with the MATLAB and NetCracker packages. The dependence of data packet losses on transaction size is calculated for different numbers of drones in a swarm using NetCracker software. Data traffic with different parameters and statistical distribution laws was considered, and the effect of different distances to drones on the base station workload was simulated. Data transmission in a swarm was studied using MATLAB software as a function of the signal-to-noise ratio, base station amplifier nonlinearity levels, signal modulation types, base station antenna diameters, and signal phase offsets. The data obtained make it possible to foresee the operation of drone communication channels in swarms.
Optimizing cell association in 5G and beyond networks: a modified load-aware biased technique
Pub Date: 2024-08-18, DOI: 10.1007/s11235-024-01202-w
Mohammed Jaber Alam, Ritesh Chugh, Salahuddin Azad, Md Rahat Hossain
Cellular networks are becoming increasingly heterogeneous as more small cells are deployed within macro base station (MBS) coverage to meet rapidly growing traffic demands. To leverage these small cells, mobile users should be offloaded onto small base stations (BSs), which are typically lightly loaded and can offer higher data rates by presenting users with many more channels than the MBS. Likewise, a more balanced cell association lessens the pressure on the MBS, allowing it to serve its remaining users more effectively. This paper addresses the cell association challenge for Quality of Service (QoS) provisioning, in terms of throughput and load-balancing, in 5G and future-generation networks. The problem is challenging because BSs have varying backhaul capacities and users have varying QoS needs. Most previous studies are based on Reference Signal Received Power (RSRP), Signal-to-Interference-plus-Noise Ratio (SINR), or their variants, and, most importantly, the majority are not load-aware. Therefore, a modified load-aware biased cell association scheme based on distance is proposed to attain better QoS provisioning in terms of throughput and load-balancing. Simulation results show that the proposed load-aware method outperforms conventional cell association schemes based on RSRP and its variants in terms of throughput and load-balancing. Furthermore, the algorithm’s complexity has been assessed through a comparison and analysis of computational time, demonstrating better performance than state-of-the-art techniques.
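A minimal numpy sketch of the general idea of load-aware biased association follows: each user scores base stations by received power plus a small-cell bias minus a load penalty, rather than by raw RSRP alone. The path-loss exponent, bias values, and penalty weight are assumptions for illustration; the paper's own distance-based scheme is not reproduced.

```python
# Load-aware biased association sketch (all parameters assumed).
import numpy as np

def associate(user_xy, bs_xy, tx_dbm, bias_db, load, load_weight_db=3.0):
    d = np.linalg.norm(bs_xy - user_xy, axis=1)
    rsrp = tx_dbm - 10 * 3.5 * np.log10(d)            # toy path loss, exponent 3.5
    score = rsrp + bias_db - load_weight_db * load     # load-aware biased metric
    return int(np.argmax(score))

bs_xy   = np.array([[0.0, 0.0], [120.0, 40.0], [200.0, -30.0]])
tx_dbm  = np.array([46.0, 30.0, 30.0])                 # one macro, two small cells
bias_db = np.array([0.0, 7.0, 7.0])                    # bias steers users to small cells
load    = np.array([0.9, 0.2, 0.4])                    # fraction of occupied resources
print(associate(np.array([100.0, 30.0]), bs_xy, tx_dbm, bias_db, load))
```

With bias and load terms zeroed, this degenerates to plain max-RSRP association, the baseline the paper argues against.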
{"title":"Optimizing cell association in 5G and beyond networks: a modified load-aware biased technique","authors":"Mohammed Jaber Alam, Ritesh Chugh, Salahuddin Azad, Md Rahat Hossain","doi":"10.1007/s11235-024-01202-w","DOIUrl":"https://doi.org/10.1007/s11235-024-01202-w","url":null,"abstract":"<p>Cellular networks are moving towards increasing heterogeneity by deploying more small cells into macro base station (MBS) to meet rapidly growing traffic demands. To leverage the advantages of these small cells, mobile users should be offloaded onto small base stations (BSs), which will typically be lightly populated and can give a higher data rate by presenting the mobile users with many more channels than the MBS. Likewise, a more balanced cell association will lessen the pressure on the MBS, allowing it to serve its remaining users more effectively. This paper addresses the cell association challenge for Quality of Service (QoS) provisioning in terms of throughput and load-balancing for 5G and future generation networks. This problem is quite challenging because BSs have varying backhaul capacities and users have varying QoS needs. Most of the previous studies are based on reference signal received power (RSRP), signal to interference and noise ratio (SINR) or its variants and most importantly majority of them are not load-aware. Therefore, a modified load-aware biased cell association scheme based on distance is proposed to attain better QoS provisioning in terms of throughput and load-balancing. Simulation results depict that the proposed load-aware-based method outperforms conventional cell association schemes based on RSRP and its variants, and in terms of throughput and load-balancing. Furthermore, the algorithm’s complexity has been assessed through a comparison and analysis of computational time, demonstrating better performance compared to state-of-the-art techniques.</p>","PeriodicalId":51194,"journal":{"name":"Telecommunication Systems","volume":"8 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lightweight omnidirectional visual-inertial odometry for MAVs based on improved keyframe tracking and marginalization
Pub Date: 2024-08-12, DOI: 10.1007/s11235-024-01208-4
Bo Gao, Baowang Lian, Chengkai Tang
Because of the limited onboard resources of Micro Aerial Vehicles (MAVs), poor real-time performance has long been an urgent problem in practical applications of visual-inertial odometry (VIO). We therefore propose a lightweight omnidirectional visual-inertial odometry (LOVIO) for MAVs based on improved keyframe tracking and marginalization. In the front-end processing of LOVIO, wide field-of-view (FOV) images are captured by an omnidirectional camera, and frames are tracked by a semi-direct method that combines the speed of the direct method with the accuracy of the feature-based method. In the back-end optimization, the Hessian matrix of the error optimization equation is marginalized stepwise, so the high-dimensional matrix is decomposed and operating efficiency is improved. Experimental results on the TUM-VI dataset show that LOVIO significantly reduces running time without loss of precision or robustness, meaning LOVIO offers better real-time performance and practicality for MAVs.
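The marginalization described above rests on the Schur complement. Here is a minimal numpy sketch, under assumed block sizes, showing that eliminating a block of old states from H x = b preserves the exact solution for the remaining states; it illustrates the linear algebra only, not LOVIO's stepwise implementation.

```python
# Schur-complement marginalization of old states from H x = b.
import numpy as np

def marginalize(H, b, m):
    """Marginalize the first m states out of the system H x = b."""
    Hmm, Hmr, Hrm, Hrr = H[:m, :m], H[:m, m:], H[m:, :m], H[m:, m:]
    Hmm_inv = np.linalg.inv(Hmm)
    H_new = Hrr - Hrm @ Hmm_inv @ Hmr          # Schur complement of Hmm
    b_new = b[m:] - Hrm @ Hmm_inv @ b[:m]
    return H_new, b_new

rng = np.random.default_rng(1)
A = rng.standard_normal((12, 12))
H = A @ A.T + 12 * np.eye(12)                   # symmetric positive-definite Hessian
b = rng.standard_normal(12)
H_red, b_red = marginalize(H, b, m=6)           # drop 6 old keyframe states
# The reduced solve matches the corresponding block of the full solution:
print(np.allclose(np.linalg.solve(H_red, b_red), np.linalg.solve(H, b)[6:]))
```

Doing this stepwise keeps the optimization window small, which is where the running-time savings come from.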
{"title":"Lightweight omnidirectional visual-inertial odometry for MAVs based on improved keyframe tracking and marginalization","authors":"Bo Gao, Baowang Lian, Chengkai Tang","doi":"10.1007/s11235-024-01208-4","DOIUrl":"https://doi.org/10.1007/s11235-024-01208-4","url":null,"abstract":"<p>Due to the limited onboard resources on Micro Aerial Vehicles (MAVs), the poor real-time performance has always been an urgent problem to be solved in the practical applications of visual inertial odometry (VIO). Therefore, a lightweight omnidirectional visual-inertial odometry (LOVIO) for MAVs based on improved keyframe tracking and marginalization was proposed. In the front-end processing of LOVIO, wide field-of-view (FOV) images are captured by an omnidirectional camera, frames are tracked by semi-direct method combining of direct method with rapidity and feature-based method with accuracy. In the back-end optimization, the Hessian matrix corresponding to the error optimization equation is stepwise marginalized, so the high-dimensional matrix is decomposed and the operating efficiency is improved. Experimental results on the dataset TUM-VI show that LOVIO can significantly reduce running time consumption without loss of precision and robustness, that means LOVIO has better real-time and practicability for MAVs.</p>","PeriodicalId":51194,"journal":{"name":"Telecommunication Systems","volume":"7 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lightweight signal recognition based on hybrid model in wireless networks
Pub Date: 2024-08-06, DOI: 10.1007/s11235-024-01204-8
Mingjun Tang, Rui Gao, Lan Guo
Signal recognition is a key technology in wireless networks, with broad applications in both military and civilian fields. Accurately recognizing the modulation scheme of an incoming unknown signal can significantly enhance the performance of communication systems. As global digitization and intelligence advance, the rapid development of wireless communication imposes higher standards for signal recognition: (1) Accurate and efficient recognition of various modulation modes, and (2) Lightweight recognition compatible with intelligent hardware. To meet these demands, we have designed a hybrid signal recognition model based on a convolutional neural network and a gated recurrent unit (CnGr). By integrating spatial and temporal modules, we enhance the multi-dimensional extraction of the original signal, significantly improving recognition accuracy. Additionally, we propose a lightweight signal recognition method that combines pruning and depthwise separable convolution. This approach effectively reduces the network size while maintaining recognition accuracy, facilitating deployment and implementation on edge devices. Extensive experiments demonstrate that our proposed method significantly improves recognition accuracy and reduces the model size without compromising performance.
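A minimal PyTorch sketch of the depthwise separable convolution named above: a per-channel depthwise convolution followed by a 1x1 pointwise convolution, with a parameter-count comparison against a standard convolution. The channel counts and the 1-D layout (e.g. for I/Q sample sequences) are assumptions, not the paper's CnGr architecture.

```python
# Depthwise separable convolution vs. standard convolution (assumed sizes).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel=3):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel, padding=kernel // 2,
                                   groups=in_ch)        # one filter per channel
        self.pointwise = nn.Conv1d(in_ch, out_ch, 1)    # 1x1 channel mixing

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

std = nn.Conv1d(64, 128, 3, padding=1)
sep = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))    # ~24.7k vs ~8.6k parameters here
x = torch.randn(2, 64, 256)      # (batch, channels, sequence length)
print(sep(x).shape)              # torch.Size([2, 128, 256])
```

Combined with pruning, this kind of factorized convolution is what makes the model small enough for edge hardware while keeping the receptive field of the standard layer.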
{"title":"Lightweight signal recognition based on hybrid model in wireless networks","authors":"Mingjun Tang, Rui Gao, Lan Guo","doi":"10.1007/s11235-024-01204-8","DOIUrl":"https://doi.org/10.1007/s11235-024-01204-8","url":null,"abstract":"<p>Signal recognition is a key technology in wireless networks, with broad applications in both military and civilian fields. Accurately recognizing the modulation scheme of an incoming unknown signal can significantly enhance the performance of communication systems. As global digitization and intelligence advance, the rapid development of wireless communication imposes higher standards for signal recognition: (1) Accurate and efficient recognition of various modulation modes, and (2) Lightweight recognition compatible with intelligent hardware. To meet these demands, we have designed a hybrid signal recognition model based on a convolutional neural network and a gated recurrent unit (CnGr). By integrating spatial and temporal modules, we enhance the multi-dimensional extraction of the original signal, significantly improving recognition accuracy. Additionally, we propose a lightweight signal recognition method that combines pruning and depthwise separable convolution. This approach effectively reduces the network size while maintaining recognition accuracy, facilitating deployment and implementation on edge devices. Extensive experiments demonstrate that our proposed method significantly improves recognition accuracy and reduces the model size without compromising performance.</p>","PeriodicalId":51194,"journal":{"name":"Telecommunication Systems","volume":"19 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141969443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-iterative semi-blind receiver for multi-way relay (MWR) systems
Pub Date: 2024-08-04, DOI: 10.1007/s11235-024-01209-3
Leandro R. Ximenes
The growing number of mobile stations requires relaying protocols that can efficiently transmit large volumes of data with minimal computational complexity. Systems that combine joint symbol and channel estimation with Amplify-and-Forward Multiway Relay (MWR) systems provide a highly effective solution to this problem. Thus, this study introduces a new Nested PARAFAC-based MWR system model as its primary contribution. Then, a non-iterative semi-blind receiver is designed to allow simultaneous estimation of symbols and channels. This computationally efficient approach is validated using Monte Carlo computational simulations, showing that the proposed receiver can achieve lower bit error rate values at lower computational complexity than some of its state-of-the-art competitors.
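As background on the algebra such receivers rely on, here is a minimal numpy sketch of the rank-R PARAFAC identity: the mode-1 unfolding factors as X1 = (C kr B) A^T, where "kr" is the Khatri-Rao product, so with two factors known the third (e.g. the symbol matrix) follows by least squares. The dimensions are illustrative, and the nested, non-iterative structure of the proposed receiver is not reproduced.

```python
# PARAFAC unfolding and least-squares factor recovery (illustrative dimensions).
import numpy as np

def khatri_rao(C, B):
    """Column-wise Kronecker product: (K*J) x R."""
    return np.einsum('kr,jr->kjr', C, B).reshape(-1, C.shape[1])

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3
A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))  # e.g. symbols, channels, codes
X1 = khatri_rao(C, B) @ A.T                                  # mode-1 unfolding of the tensor

A_hat, *_ = np.linalg.lstsq(khatri_rao(C, B), X1, rcond=None)  # recover A given B, C
print(np.allclose(A_hat.T, A))                                  # True in the noiseless case
```

Closed-form least-squares steps like this, rather than alternating iterations, are what keeps such semi-blind receivers computationally cheap.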
{"title":"Non-iterative sem-blind receiver for multi-way relay (MWR) systems","authors":"Leandro R. Ximenes","doi":"10.1007/s11235-024-01209-3","DOIUrl":"https://doi.org/10.1007/s11235-024-01209-3","url":null,"abstract":"<p>The growing number of mobile stations requires relaying protocols that can efficiently transmit large volumes of data with minimal computational complexity. Systems that combine joint symbol and channel estimation with Amplify-and-Forward Multiway Relay (MWR) systems provide a highly effective solution to this problem. Thus, this study introduces a new Nested PARAFAC-based MWR system model as its primary contribution. Then, a non-iterative semi-blind receiver is designed to allow simultaneous estimation of symbols and channels. This computationally efficient approach is validated using Monte Carlo computational simulations, showing that the proposed receiver can achieve lower bit error rate values at lower computational complexity than some of its state-of-the-art competitors.\u0000</p>","PeriodicalId":51194,"journal":{"name":"Telecommunication Systems","volume":"6 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2024-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141945457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}