Zahra Mohammadi, Mohammad Ali Amirabadi, Mohammad Hossein Kahaei
In this study, an innovative architecture is proposed to enhance the performance of semantic communication networks by leveraging deep learning and joint source-channel coding. A fundamental challenge in this field is the strong dependence of conventional networks on a fixed signal-to-noise ratio (SNR) during training, which leads to performance degradation under varying channel conditions. To address this limitation, we introduce a novel attention-based approach that enables dynamic adaptation to different SNR levels, ensuring more stable and optimized communication performance. The proposed model learns more generalized features that exhibit greater resilience to channel variations. To evaluate its effectiveness, extensive simulations were conducted, comparing the performance of the proposed architecture with DeepSC, a state-of-the-art benchmark model in the field. While the baseline model, trained at a single SNR, experiences performance drops under mismatched conditions, the proposed model, trained across a range of SNRs, achieves improvements of 16.2%, 30.8%, 42.8%, and 53.8% in 1-, 2-, 3-, and 4-gram bilingual evaluation understudy (BLEU) precision, respectively, and an 11.4% increase in sentence similarity under challenging low-SNR conditions. Furthermore, the model maintains robust performance with 48% less training data, highlighting its data efficiency under practical constraints. These gains confirm the model's superior adaptability and high-quality data reconstruction under diverse conditions. The results of this study underscore the significant benefits of attention-based architectures in semantic communication, particularly in environments with unpredictable channel variations, and highlight their potential for reliable deployment in real-world applications.
{"title":"Deep Learning-Driven Semantic Communication With Attention Modules","authors":"Zahra Mohammadi, Mohammad Ali Amirabadi, Mohammad Hossein Kahaei","doi":"10.1049/cmu2.70090","DOIUrl":"https://doi.org/10.1049/cmu2.70090","url":null,"abstract":"<p>In this study, an innovative architecture is proposed to enhance the performance of semantic communication networks by leveraging deep learning and joint source-channel coding. A fundamental challenge in this field is the strong dependence of conventional networks on a fixed signal-to-noise ratio (SNR) during training, which leads to performance degradation under varying channel conditions. To address this limitation, we introduce a novel attention-based approach that enables dynamic adaptation to different SNR levels, ensuring more stable and optimized communication performance. The proposed model learns more generalized features that exhibit greater resilience to channel variations. To evaluate its effectiveness, extensive simulations were conducted, comparing the performance of the proposed architecture with DeepSC, a state-of-the-art benchmark model in the field. While the baseline model, trained at a single SNR, experiences performance drops under mismatched conditions, the proposed model, trained across a range of SNRs, achieves improvement of 16.2%, 30.8%, 42.8%, and 53.8% for 1, 2, 3, and 4-gram precisions, respectively, in bilingual evaluation understudy score and an 11.4% increase in sentence similarity across challenging low-SNR conditions. Furthermore, the model maintains robust performance with 48% less training data, highlighting its efficiency and data efficiency under practical constraints. These gains confirm the model's superior adaptability and high-quality data reconstruction under diverse conditions. The results of this study underscore the significant benefits of attention-based architectures in semantic communication, particularly in environments with unpredictable channel variations, and highlight their potential for reliable deployment in real-world applications.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70090","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145272177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sangeetha S., T. Aruldoss Albert Victoire, C. Kumar, Sourav Barua
A wireless sensor network is a collection of spatially distributed sensor nodes that communicate wirelessly to collect and transmit data. Static sink-based routing sends data from sensor nodes to a fixed base station (sink node). Sensor nodes on the data path can run out of energy quickly, especially those closer to the sink, and uneven energy consumption may lead to the premature failure of nodes. Mobile sink path planning moves the sink node to different locations to collect data, balancing energy consumption, extending network lifetime, improving scalability, and enhancing fault tolerance. Still, energy-efficient routing is a challenging task. Thus, this research introduces mobile sink path planning (MSPP) that predicts energy holes using double deep reinforcement learning (DDRL), wherein optimal action selection considering the remaining energy and distance is performed by the Aquila pelican optimization (AqP) algorithm. The proposed AqP algorithm integrates the Pelican optimization-based solution update with the Aquila optimization algorithm to enhance the convergence rate. The proposed AqP-DDRL MSPP achieved a delay of 1.61 ms, network lifetime of 99.98%, packet delivery ratio of 99.30%, residual energy of 0.99 J, and throughput of 255.31 kbps.
{"title":"DDRL-AqP MSPP: Double Deep Reinforcement Learning With Aquila Pelican Optimization Based Energy Hole Prediction for Mobile Sink Path Planning","authors":"Sangeetha S., T. Aruldoss Albert Victoire, C. Kumar, Sourav Barua","doi":"10.1049/cmu2.70093","DOIUrl":"https://doi.org/10.1049/cmu2.70093","url":null,"abstract":"<p>A wireless sensor network is a collection of spatially distributed sensor nodes that wirelessly communicate to collect and transmit data. Static sink-based routing involves sending data from sensor nodes to a fixed base station (sink node). Sensor nodes on the data path can run out of energy quickly, especially those closer to the sink, and uneven energy consumption may lead to the premature failure of nodes. Mobile sink path planning involves moving the sink node to different locations to collect data that balances energy consumption, extending network lifetime, improved scalability, and enhanced fault tolerance. Still, energy-efficient routing is a challenging task. Thus, the mobile sink path planning (MSPP) by predicting the energy hole using the double deep reinforcement learning (DDRL) is introduced in this research, wherein the optimal action selection by considering the remaining energy and distance is employed using the Aquila pelican optimization (AqP) algorithm. The proposed AqP algorithm is designed by integrating the Pelican optimization-based solution updation with the Aquila Optimization algorithm for enhancing the convergence rate. The proposed AqP-DDRL MSPP accomplished delay, network lifetime, packet delivery ratio, residual energy, and throughput with 1.61 ms, 99.98%, 99.30%, 0.99 J, and 255.31 kbps respectively.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70093","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145223932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hao Liu, Yan Zhen, Libin Zheng, Chao Huo, Yu Zhang
Mobile edge computing (MEC) serves as a feasible architecture that brings computation closer to the edge, enabling rapid response to user demands. However, most research on task offloading (TO) overlooks repetitive requests for the same computing tasks over long time slots and the spatiotemporal disparities in user demands. To address this gap, in this paper we first introduce edge caching into TO and then divide base stations (BSs) into communities based on the regional characteristics of user demands and activity areas, enabling collaborative caching among BSs within the same community. Subsequently, we design a dual timescale to update task popularity over both short- and long-term time slots. To maximize cache benefits, we construct a model that transforms the caching issue into a 0–1 knapsack problem and employ dynamic programming to obtain offloading strategies. Simulation results confirm the efficiency of the proposed task caching policy algorithm: it effectively reduces the offloading cost and improves cache resource utilization compared to three baseline algorithms.
{"title":"Cache-Assisted Offloading Optimization for Edge Computing Tasks","authors":"Hao Liu, Yan Zhen, Libin Zheng, Chao Huo, Yu Zhang","doi":"10.1049/cmu2.70089","DOIUrl":"10.1049/cmu2.70089","url":null,"abstract":"<p>Mobile edge computing (MEC) serves as a feasible architecture that brings computation closer to the edge, enabling rapid response to user demands. However, most research on task offloading (TO) overlooks the scenario of repetitive requests for the same computing tasks during long time slots, and the spatiotemporal disparities in user demands. To address this gap, in this paper, we first introduce edge caching into TO and then divide base stations (BSs) into different communities based on the regional characteristics of user demands and activity areas, enabling collaborative caching among BSs within the same community. Subsequently, we design a dual timescale to update task popularity within both short and long-term time slots. To maximize cache benefits, we construct a model that transforms the caching issue into a 0–1 knapsack problem, and employ dynamic programming to obtain offloading strategies. Simulation results confirm the efficiency of the proposed task caching policy algorithm, and it effectively reduces the offloading cost and improves cache resource utilization compared to the other three baseline algorithms.In this paper, we first introduce edge caching into TO and then divide BSs into different communities based on the regional characteristics of user demands and activity areas, enabling collaborative caching among BSs within the same community. Subsequently, we design a dual timescale to update task popularity within both short and long-term time slots. To maximize cache benefits, we construct a model that transforms the caching issue into a 0–1 knapsack problem and employ dynamic programming to obtain offloading strategies.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70089","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145111350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lei Hang, Chun Chen, Yifei Zhang, Jun Yang, Linchao Zhang
The transaction processing capacity of blockchain systems remains a critical barrier to adoption in real-time applications. Recent studies have explored different optimization techniques, including sharding, off-chain processing, and hybrid consensus algorithms. However, most of these techniques change the original architecture or process of the blockchain and may raise compatibility issues. Resolving these challenges calls for creative methods that can effectively balance transaction throughput with latency without compromising the blockchain's core infrastructure. This paper proposes a learning-to-prediction framework that combines a Kalman filter and an artificial neural network for transaction throughput forecasting, integrated with a fuzzy logic controller embedded in smart contracts. The approach dynamically optimizes transaction traffic flow based on the predicted throughput and the observed transaction latency, thereby improving blockchain performance in real time. Deployed on a Hyperledger Fabric healthcare testbed and evaluated through a series of ablation experiments, the approach demonstrates a significant improvement over the baseline, illustrating its potential for improving blockchain performance in practical applications.
{"title":"A Learning to Prediction Based Transaction Traffic Management Approach to Enhance Healthcare Blockchain Performance","authors":"Lei Hang, Chun Chen, Yifei Zhang, Jun Yang, Linchao Zhang","doi":"10.1049/cmu2.70076","DOIUrl":"10.1049/cmu2.70076","url":null,"abstract":"<p>The transaction processing capacity of blockchain systems remains a critical barrier to adoption in real-time applications. Recent studies have explored different optimization techniques, including sharding, off-chain processing, and hybrid consensus algorithms. However, most of those techniques change the original architecture or process of the blockchain and may raise compatibility issues. Resolving these challenges calls for creative methods that can effectively balance transaction throughput with latency without compromising blockchains' core infrastructures. This paper proposes a learning to prediction framework combining a Kalman filter and artificial neural network for transaction throughput forecasting, integrated with a fuzzy logic controller embedded in smart contracts. The approach can dynamically optimize transaction traffic flow based on the predicted throughput and the observed transaction latency, thus improving blockchain performance in real-time. Deployed on a hyperledger fabric healthcare testbed and evaluated through a series of ablation experiments, the results demonstrate a significant improvement over the baseline and therefore illustrate the potential of the proposed approach in improving blockchain performance for practical applications.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70076","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145102300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Sangeetha, T. Aruldoss Albert Victoire, C. Kumar, Sourav Barua
This research focuses on wireless sensor networks (WSNs) and proposes a three-phase approach to achieve energy-efficient routing: node deployment using Voronoi diagrams; clustering with cluster head (CH) selection using energy-efficient game theory; and a routing strategy based on improved pelican optimisation (ImPe) segment routing. Random deployment of sensor nodes in WSNs can lead to coverage issues; to address this, Voronoi-based node deployment is employed to ensure uniform and balanced coverage of the monitoring area. An energy-efficient game theory-based approach is used for CH selection, considering factors such as network conditions and energy levels, which prevents specific nodes from becoming overburdened and ensures smoother data collection. The proposed routing mechanism utilises segment routing, which provides deterministic routing paths from CHs to the sink and eliminates the need for route discovery and maintenance, making it energy-efficient. The ImPe algorithm, which builds on the search behaviour of pelican agents, is employed to choose the optimal segment path for information sharing. The proposed ImPe segment routing achieved a delay of 2.1494 ms, network lifetime of 98.9685%, packet delivery ratio of 99.1109%, residual energy of 0.9856 J, and throughput of 250.7044 kbps.
{"title":"Segment Routing for WSN Using Hybrid Optimisation With Energy Efficient Game Theory Based Clustering Technique","authors":"S. Sangeetha, T. Aruldoss Albert Victoire, C. Kumar, Sourav Barua","doi":"10.1049/cmu2.70088","DOIUrl":"10.1049/cmu2.70088","url":null,"abstract":"<p>This research focuses on wireless sensor networks (WSNs) and proposes a three-phase approach to achieve energy-efficient routing. The approach consists of node deployment using Voronoi diagrams, clustering, cluster head (CH) selection using energy-efficient game theory, and a routing strategy based on improved pelican Optimisation (ImPe) segment routing. Random deployment of sensor nodes in WSNs can lead to coverage issues, and hence, in order to address this, Voronoi-based node deployment is employed to ensure uniform and balanced coverage of the monitoring area. An energy-efficient game theory-based approach is used for CH selection, which considers factors such as network conditions and energy levels to select CHs, preventing specific nodes from becoming overburdened and ensuring smoother data collection. The proposed routing mechanism utilises segment routing, which provides deterministic routing paths from CHs to the sink. Segment routing eliminates the need for route discovery and maintenance, making it energy-efficient. The ImPe algorithm that works on the characteristics of pelican search agents is employed to choose the optimal segment path for information sharing. The assessment based on delay, network lifetime, packet delivery ratio, residual energy and throughput by the proposed ImPe segment routing acquired the values of 2.1494, 98.9685, 99.1109, 0.9856, and 250.7044, respectively.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70088","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145062380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Huizhu Han, Can Li, Wei Liu, Ziliang Zuo, JinKun Zhu, Jing Lei
Clock synchronization is a critical technology in wireless sensor networks (WSNs), providing an essential foundation for data fusion, event scheduling, and collaborative operations among network nodes. However, existing clock synchronization methods face the challenge of random delays in practical applications. To address this issue, this paper proposes a pulse-based physical layer clock synchronization method that achieves synchronization directly at the physical layer, thereby avoiding the random delays introduced by the upper-layer network. Specifically, high-precision phase offset estimation between the reference node and the slave nodes is first accomplished under a one-way dissemination mechanism by leveraging the correlation property of pulse sequences. On this basis, the estimated clock information is quantized and encoded into pulse signals for transmission using pulse position modulation (PPM), enabling synchronization and communication between any two slave nodes at the physical layer. Simulation results demonstrate that the proposed pulse-based method significantly improves estimation accuracy, achieving a mean square error (MSE) of approximately −45 dB in synchronization precision. Compared to the benchmark schemes, the proposed physical layer scheme exhibits notable advantages: it reduces the MSE by approximately 16 dB relative to Benchmark Scheme 1 and by approximately 10 dB relative to Benchmark Scheme 2.
{"title":"Pulse-Based Clock Synchronization and Physical Layer Communication in Wireless Sensor Networks","authors":"Huizhu Han, Can Li, Wei Liu, Ziliang Zuo, JinKun Zhu, Jing Lei","doi":"10.1049/cmu2.70065","DOIUrl":"10.1049/cmu2.70065","url":null,"abstract":"<p>Clock synchronization is a critical technology in wireless sensor networks (WSNs), providing an essential foundation for data fusion, event scheduling, and collaborative operations among network nodes. However, existing clock synchronization methods face challenge of random delay in practical applications. To address these issues, this paper first proposes a pulse-based physical layer clock synchronization method, which can achieve the synchronization directly at the physical layer, thereby avoiding the random delays introduced by the upper-layer network. Specifically, the high-precision phase offset estimation between the reference node and the slave nodes is first accomplished under the one-way dissemination mechanism by leveraging the correlation property of pulse sequences. On this basis, the estimated clock information is quantized and encoded into pulse signals for transmission using pulse position modulation (PPM) technology, thereby enabling synchronization and communication between any two slave nodes at the physical layer. Simulation results demonstrate that our proposed pulse-based method significantly improves estimation accuracy, achieving the mean square error (MSE) of approximately −45 dB in synchronization precision. Compared to the benchmark schemes, the proposed pulse-based physical layer scheme exhibits notable advantages in system performance: it can reduce the MSE by approximately 16 dB compared to Benchmark Scheme 1 and by approximately 10 dB compared to Benchmark Scheme 2.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70065","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145038345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christian Fragoas F. Rodrigues, Lisandro Lovisolo, Luiz Alencar Reis da Silva Mello
The different use-case scenarios for mobile networks make radio access network (RAN) assessment using only coverage (radio link) prediction inadequate, since channel capacity and latency play vital roles in some scenarios. To correctly evaluate RAN performance, the flexible assignment of the spatial (beamforming) and time-frequency resources of the physical layer frame must be accounted for. This paper presents a RAN simulator for 5G mobile networks that can evaluate different performance indicators of the base station (BS) arrangement supporting a user equipment (UE) distribution in the region where the mobile network operates. In the simulator, each BS may have multiple sectors and antenna arrays for beamforming. The simulator supports both uplink and downlink. Each simulation round considers a physical layer frame, during which the UEs' positions are assumed static for the assignment between BS beams and UEs. The tool also encompasses standard schedulers for the radio resources. Besides the UE–BS assignment and scheduling, which depend on the BS arrangement, the distribution of UEs, and the scheduler, the simulator returns performance indicators such as capacity, throughput, and latency for each UE. These indicators account for the interference in the radio environment. Consequently, the presented simulation tool helps with system design and evaluation, and its resources can be configured for many different scenarios. We exemplify the simulator usage by comparing the RAN's performance for different network usages under various network configurations and resource schedulers.
{"title":"Simulation and Analysis of Mobile Access (SAMA): Cellular Radio Access Network Simulator and Performance Evaluator Applied to 5G","authors":"Christian Fragoas F. Rodrigues, Lisandro Lovisolo, Luiz Alencar Reis da Silva Mello","doi":"10.1049/cmu2.70077","DOIUrl":"10.1049/cmu2.70077","url":null,"abstract":"<p>The different use-case scenarios for mobile networks make radio access network (RAN) assessment using only coverage (radio link) prediction inadequate since channel capacity and latency play vital roles in some scenarios. To correctly evaluate the RAN performance, the flexible assignment of the spatial (beamforming) and time-frequency resources of the physical layer frame must be accounted for. This paper presents a RAN simulator for 5G mobile networks that can evaluate different performance indicators of the base stations (BS) arrangement supporting a user equipment (UE) distribution in the region where the mobile network operates. The BS may have multiple sectors and antenna arrays for beamforming in the simulator. The simulator supports both uplink and downlink. Each simulation round considers a physical layer frame when the UEs' positions are assumed static for the assignment between BS beams and UEs. The tool also encompasses some standard schedulers for the radio resources. Besides the UEs–BSs assignment and scheduling, which depend on the BS arrangement and the distribution of UEs and the scheduler, the simulator returns performance indicators as the capacity, throughput, and latency for each UE. The performance accounts for the interference in the radio environment. Consequently, the presented simulation tool helps with system design and evaluation. The many resources encompassed in the simulator can be configured for many different scenarios. We exemplify the simulator usage by comparing the RAN's performance for different network usages under various network configurations and resource schedulers.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70077","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145012742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prudence Munyaradzi Mavhemwa, Marco Zennaro, Philibert Nsengiyumva, Frederic Nzanywayingoma
The increasing use of the Internet of Medical Things (IoMT) in healthcare highlights privacy and security concerns surrounding sensitive health data. This research focuses on enhancing the security and usability of IoMT for young users through a robust, adaptive continuous authentication model using physiological biometrics on Android devices and heart rate data from smartwatches. By integrating user behaviour, environmental context, and health conditions, the model dynamically determines risk, trust, and authorisation decisions. Machine learning techniques analyse data related to devices, networks, locations, and user habits while considering demographics such as age and medical conditions to assign suitable authenticators. The model balances accuracy and usability, favouring correct positive predictions, but faces limitations such as class imbalance, feature selection, and overfitting, with a false rejection rate (FRR) of 19%. Behavioural biometrics, personalised authentication, and continuous authentication enhance security and accessibility; however, moderate sensitivity limits the model's ability to capture all positive cases. Age-group analysis reveals varying engagement with technology, emphasising tailored authentication flows. Future work will explore explainable AI, context-aware analytics, and advanced risk assessments, integrating complementary smartwatch data such as step count for improved accuracy. This research demonstrates the potential of risk-based adaptive authentication to deliver secure, user-friendly solutions in complex healthcare environments.
{"title":"Naïve Bayes Based Android Adaptive User Authentication Prototype for Young Internet of Medical Things Users","authors":"Prudence Munyaradzi Mavhemwa, Marco Zennaro, Philibert Nsengiyumva, Frederic Nzanywayingoma","doi":"10.1049/cmu2.70082","DOIUrl":"10.1049/cmu2.70082","url":null,"abstract":"<p>The increasing use of the Internet of Medical Things (IoMT) in healthcare highlights privacy and security concerns surrounding sensitive health data. This research focuses on enhancing the security and usability of IoMT for young users through a robust, adaptive continuous authentication model using physiological biometrics on Android devices and heart rate data from smartwatches. By integrating user behavior, environmental context, and health conditions, the model dynamically determines risk, trust, and authorization decisions. Machine learning techniques analyse data related to devices, networks, locations, and user habits while considering demographics like age and medical conditions to assign suitable authenticators. The model balances accuracy and usability, favouring correct positive predictions, but faces limitations such as class imbalance, feature selection, and overfitting, with a false rejection rate (FRR) of 19%. Behavioral biometrics, personalized authentication, and continuous authentication enhance security and accessibility. However, moderate sensitivity affects its ability to capture all positive cases. Age-group analysis reveals varying engagement with technology, emphasising tailored authentication flows. Future work will explore explainable AI, context-aware analytics, and advanced risk assessments, integrating complementary smartwatch data like step count for improved accuracy. This research demonstrates the potential of risk-based adaptive authentication to deliver secure, user-friendly solutions in complex healthcare environments.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70082","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145012882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}