Texture mapping of weft-knitted fabrics plays a crucial role in virtual try-on and digital textile design due to its computational efficiency and real-time performance. However, traditional texture mapping techniques typically adapt pre-generated textures to deformed surfaces through geometric transformations. These methods overlook the complex variations in yarn length, thickness, and loop morphology during stretching, often resulting in visual distortions. To overcome these limitations, we propose Knit-Pix2Pix, a dedicated framework for generating realistic weft-knitted fabric textures directly from knitted unit mesh maps. These maps provide grid-based representations in which each cell corresponds to a physical loop region, capturing its deformation state. Knit-Pix2Pix is an integrated architecture that combines a multi-scale feature extraction module, a grid-guided attention mechanism, and a multi-scale discriminator. Together, these components address the multi-scale and deformation-aware requirements of this task. To validate our approach, we constructed a dataset of over 2000 pairs of fabric stretching images and corresponding knitted unit mesh maps, with further testing using spring-mass fabric simulation. Experiments show that, compared with traditional texture mapping methods, our approach improves SSIM by 21.8% and PSNR by 20.9%, and reduces LPIPS by 24.3%. This integrated approach provides a practical solution for meeting the requirements of digital textile design.
Xin Ru, Yingjie Huang, Laihu Peng, Yongchao Hou. Knit-Pix2Pix: An Enhanced Pix2Pix Network for Weft-Knitted Fabric Texture Generation. Sensors 26(2), 20 January 2026. doi:10.3390/s26020682. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12846091/pdf/
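The reported gains are in standard full-reference image metrics. As a point of reference for how such numbers are computed, below is a minimal PSNR sketch over toy flat-list grayscale images (the textbook definition, not the paper's evaluation code):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized grayscale
    images, given here as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 2x2 "images": a one-level perturbation per pixel gives MSE = 1.
reference = [100, 120, 130, 140]
generated = [101, 119, 131, 139]
print(round(psnr(reference, generated), 2))
```

SSIM and LPIPS follow the same pairwise-comparison pattern but require windowed local statistics and a pretrained network, respectively.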
Ionic polymer-metal composite (IPMC) sensors generate voltages or currents when subjected to deformation. The magnitude and time constant of the electrical response vary significantly with ambient humidity and water content. However, most conventional physical models focus solely on cation dynamics and do not consider water dynamics. In addition to cation dynamics, Zhu's model explicitly incorporates the dynamics of water. Consequently, Zhu's model is considered one of the most promising approaches for physical modeling of IPMC sensors. This paper presents exact analytical solutions to Zhu's model of IPMC sensors for the first time. The derivation method transforms Zhu's model into the frequency domain using Laplace transform-based analysis together with linear approximation, and subsequently solves it as a boundary value problem of a set of linear ordinary differential equations. The resulting solution is expressed as a transfer function. The input variable is the applied bending deformation, and the output variables include the open-circuit voltage or short-circuit current at the sensor terminals, as well as the distributions of cations, water molecules, and electric potential within the polymer. The obtained transfer functions are represented by irrational functions, which typically arise as solutions to a system of partial differential equations. Furthermore, this paper presents analytical approximations of the step response of the sensor voltage or current by approximating the obtained transfer functions. The steady-state and maximum values of the time response are derived from these analytical approximations. Additionally, the relaxation behavior of the sensor voltage is characterized by a key parameter newly derived from the analytical approximation presented in this paper.
Kosetsu Ishikawa, Kinji Asaka, Zicai Zhu, Toshiki Hiruta, Kentaro Takagi. The Analytical Solutions to a Cation-Water Coupled Multiphysics Model of IPMC Sensors. Sensors 26(2), 20 January 2026. doi:10.3390/s26020695. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12846017/pdf/
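The abstract derives steady-state values of the step response from the approximated transfer functions. The standard tool for that quantity is the final value theorem; assuming a stable transfer function G(s) from bending input to sensor output and a unit-step input U(s) = 1/s (the paper's exact derivation route is not reproduced here):

```latex
y(\infty) \;=\; \lim_{t\to\infty} y(t)
          \;=\; \lim_{s\to 0} s \, G(s) \, U(s)
          \;=\; \lim_{s\to 0} G(s)
```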
Violeta Lazic, Biljana Stankov, Fabrizio Andreoli, Marco Pistilli, Ivano Menicucci, Christian Ulrich, Frank Schnürer, Roberto Chirico, Pasqualino Gaudio
In this work we analyze the acoustic signal generated by the interaction of a nanosecond laser pulse (30 mJ, 1064 nm) with various residues placed on a silica wafer. The signal was captured by a unidirectional microphone placed 30 mm from the laser-generated plasma. The examined sample classes, other than the clean wafer, included particles from soils and rocks, carbonates, nitro precursors, ash, coal, smeared diesel, and particles of explosives. We tested three types of explosives, namely PETN, RDX, and HMX, having different origins. For the explosives, the acoustic signal showed a faster rise, a larger amplitude, and a different width and attenuation compared with the other sample classes. By subtracting the acoustic signal from the wafer at the same position, obtained after four cleaning laser pulses, the contribution of echoes was eliminated and the true differences between residue and substrate became evident. Using four features of the subtracted signal, explosives could be classified without false positives; the estimated limits of detection were 15 ng, 9.6 ng, and 18 ng for PETN, RDX, and HMX, respectively, where the mass was extrapolated from nano-printed samples and simultaneously acquired LIBS spectra. Furthermore, HMX was distinguished from the other two explosives in 90% of cases; diesel and coal were also recognized. We also found that explosives deposited through wet transfer behaved as inert substances for the tested masses up to 30 ng.
Acoustic Signatures in Laser-Induced Plasmas for Detection of Explosives in Traces. Sensors 26(2), 20 January 2026. doi:10.3390/s26020672. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12846130/pdf/
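The echo-removal step described above is, at its core, a sample-wise subtraction of a reference waveform. A minimal sketch with hypothetical toy waveforms (the paper's actual preprocessing likely includes alignment and averaging not shown here; the function names and amplitude feature are illustrative):

```python
def subtract_reference(residue, substrate):
    """Sample-wise subtraction of the clean-wafer acoustic waveform,
    mirroring the echo-removal step described in the abstract
    (hypothetical toy waveforms; real signals would first be aligned)."""
    return [r - s for r, s in zip(residue, substrate)]

def peak_amplitude(signal):
    """Largest absolute excursion, one candidate amplitude feature."""
    return max(abs(x) for x in signal)

residue = [0.0, 0.8, 1.5, 0.9, 0.2]
substrate = [0.0, 0.3, 0.4, 0.3, 0.1]
diff = subtract_reference(residue, substrate)
print(peak_amplitude(diff))
```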
Gabriel García-Gutiérrez, Elena Aparicio-Esteve, Jesús Ureña, José Manuel Villadangos-Carrizo, Ana Jiménez-Martín, Juan Jesús García-Domínguez
Population aging is driving the need for unobtrusive, continuous monitoring solutions in residential care environments. Radio-frequency (RF)-based technologies such as Ultra-Wideband (UWB) and millimeter-wave (mmWave) radar are particularly attractive for providing detailed information on presence and movement while preserving privacy. Building on a UWB-mmWave localization system deployed in a senior living residence, this paper focuses on the data-processing methodology for extracting quantitative mobility indicators from long-term indoor monitoring data. The system combines a device-free mmWave radar setup in bedrooms and bathrooms with a tag-based UWB positioning system in common areas. For mmWave data, an adaptive short-term average/long-term average (STA/LTA) detector operating on an aggregated, normalized radar energy signal is used to classify micro- and macromovements into bedroom occupancy and non-sedentary activity episodes. For UWB data, a partially constrained Kalman filter with a nearly constant velocity dynamics model and floor-plan information yields smoothed trajectories, from which daily gait- and mobility-related metrics are derived. The approach is illustrated using one-day samples from three users as a proof of concept. The proposed methodology provides individualized indicators of bedroom occupancy, sedentary behavior, and mobility in shared spaces, supporting the feasibility of combined UWB and mmWave radar sensing for longitudinal routine analysis in real-world elderly care environments.
A Practical Case of Monitoring Older Adults Using mmWave Radar and UWB. Sensors 26(2), 20 January 2026. doi:10.3390/s26020681. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12845560/pdf/
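The STA/LTA detector used in the mmWave branch is a classic trigger borrowed from seismology: a short moving average divided by a long one, compared against a threshold. A stdlib sketch on a 1-D energy sequence (window lengths and threshold are illustrative, not the paper's tuned values):

```python
from collections import deque

def sta_lta_trigger(energy, n_sta=3, n_lta=10, threshold=2.0):
    """Short-term-average / long-term-average trigger on a 1-D energy
    sequence: returns the indices where the STA/LTA ratio exceeds
    the threshold."""
    triggers = []
    sta_win = deque(maxlen=n_sta)
    lta_win = deque(maxlen=n_lta)
    for i, e in enumerate(energy):
        sta_win.append(e)
        lta_win.append(e)
        if len(lta_win) == n_lta:  # wait until the long window is full
            sta = sum(sta_win) / len(sta_win)
            lta = sum(lta_win) / len(lta_win)
            if lta > 0 and sta / lta > threshold:
                triggers.append(i)
    return triggers

# Quiet baseline followed by a short burst of motion energy.
signal = [1.0] * 10 + [1.0, 8.0, 9.0, 8.5, 1.2, 1.0]
print(sta_lta_trigger(signal))
```

Contiguous triggered indices would then be merged into activity episodes, with everything else counted as sedentary time.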
Transcranial Magnetic Stimulation (TMS) is a non-invasive technique for neurological research and therapy, but its effectiveness depends on accurate and stable coil placement. Manual localization based on anatomical landmarks is time-consuming and operator-dependent, while state-of-the-art robotic and neuronavigation systems achieve high accuracy using optical tracking with head-mounted markers and infrared cameras, at the cost of increased system complexity and setup burden. This study presents a cost-effective, markerless robotic-assisted TMS system that combines a 3D depth camera and textile capacitive sensors to assist coil localization and contact control. Facial landmarks detected by the depth camera are used to estimate the motor cortex (C3) location without external tracking markers, while a dual textile-sensor suspension provides compliant "soft-landing" behavior, contact confirmation, and coil-tilt estimation. Experimental evaluation with five participants showed reliable C3 targeting with valid motor evoked potentials (MEPs) obtained in most trials after initial calibration, and tilt-verification experiments revealed that peak MEP amplitudes occurred near balanced sensor readings in 12 of 15 trials (80%). The system employs a collaborative robot designed in accordance with international human-robot interaction safety standards, including force-limited actuation and monitored stopping. These results suggest that the proposed approach can improve the accessibility, safety, and consistency of TMS procedures while avoiding the complexity of conventional optical tracking systems.
Czaryn Diane Salazar Ompico, Julius Noel Banayo, Yamato Mashio, Masato Odagaki, Yutaka Kikuchi, Armyn Chang Sy, Hirofumi Kurosaki. Development of a Robot-Assisted TMS Localization System Using Dual Capacitive Sensors for Coil Tilt Detection. Sensors 26(2), 20 January 2026. doi:10.3390/s26020693. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12845610/pdf/
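The dual-sensor tilt idea can be caricatured as comparing two normalized capacitive readings: the abstract reports that peak MEP amplitudes tended to occur near balanced readings. The sketch below is hypothetical (function name, tolerance, and sign convention are all assumptions, not the paper's implementation):

```python
def coil_tilt_state(left, right, tol=0.05):
    """Classify coil contact from two normalized capacitive readings.
    Hypothetical sketch: near-equal readings mean balanced (flat)
    contact; the tolerance and sign convention are assumptions."""
    diff = left - right
    if abs(diff) <= tol:
        return "balanced"
    return "tilted-left" if diff > 0 else "tilted-right"

print(coil_tilt_state(0.52, 0.50))
```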
Accurate prediction of marine visibility is critical for ensuring safe and efficient maritime operations, particularly in dynamic and data-sparse ocean environments. Although visibility reduction is a natural and unavoidable atmospheric phenomenon, improved short-term prediction can substantially enhance navigational safety and operational planning. While deep learning methods have demonstrated strong performance in land-based visibility prediction, their effectiveness in marine environments remains constrained by the lack of fixed observation stations, rapidly changing meteorological conditions, and pronounced spatiotemporal variability. This paper introduces SeADL, a self-adaptive deep learning framework for real-time marine visibility forecasting using multi-source time-series data from onboard sensors and drone-borne atmospheric measurements. SeADL incorporates a continuous online learning mechanism that updates model parameters in real time, enabling robust adaptation to both short-term weather fluctuations and long-term environmental trends. Case studies, including a realistic storm simulation, demonstrate that SeADL achieves high prediction accuracy and maintains robust performance under diverse and extreme conditions. These results highlight the potential of combining self-adaptive deep learning with real-time sensor streams to enhance marine situational awareness and improve operational safety in dynamic ocean environments.
William Girard, Haiping Xu, Donghui Yan. SeADL: Self-Adaptive Deep Learning for Real-Time Marine Visibility Forecasting Using Multi-Source Sensor Data. Sensors 26(2), 20 January 2026. doi:10.3390/s26020676. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12845894/pdf/
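The abstract describes continuous online learning only as "updates model parameters in real time". A generic online-gradient sketch for a linear predictor shows the pattern (illustrative only, not SeADL's architecture or update rule):

```python
def sgd_step(w, x, y, lr=0.05):
    """One online gradient step for a linear visibility predictor on a
    newly arrived sample (squared-error loss). Generic sketch only."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    err = pred - y
    return [wi - lr * err * xi for wi, xi in zip(w, x)]

# Parameters are refined as each new (features, visibility) pair streams in.
w = [0.0, 0.0]
for x, y in [([1.0, 2.0], 5.0), ([1.0, 3.0], 7.0), ([1.0, 1.0], 3.0)]:
    w = sgd_step(w, x, y)
```

The same loop structure lets the model track both short-term fluctuations (recent samples dominate recent updates) and slow drift.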
Semantic Communication (SC), driven by a deep learning (DL)-based "understand-before-transmit" paradigm, transmits lightweight semantic information (SI) instead of raw data. This approach significantly reduces data volume and communication overhead while maintaining performance, making it particularly suitable for UAV communications, where the platform is constrained by size, weight, and power (SWAP) limitations. To alleviate the computational burden of semantic extraction (SE) on the UAV, this paper introduces federated learning (FL) as a distributed training framework. By establishing a collaborative architecture with edge users, computationally intensive tasks are offloaded to the edge devices, while the UAV serves as a central coordinator. We first demonstrate the feasibility of integrating FL into SC systems and then propose a novel solution based on Proximal Policy Optimization (PPO) to address the critical challenge of ensuring service fairness in UAV-assisted semantic communications. Specifically, we formulate a joint optimization problem that simultaneously designs the UAV's flight trajectory and bandwidth allocation strategy. Experimental results validate that our FL-based training framework significantly reduces computational resource consumption, while the PPO-based algorithm effectively minimizes both energy consumption and task completion time and ensures equitable quality-of-service (QoS) across all edge users.
Shuang Du, Yue Zhang, Zhen Tao, Han Li, Haibo Mei. Federated Learning Semantic Communication in UAV Systems: PPO-Based Joint Trajectory and Resource Allocation Optimization. Sensors 26(2), 20 January 2026. doi:10.3390/s26020675. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12845564/pdf/
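The UAV-as-coordinator training loop described above follows the standard federated averaging pattern: each edge user trains locally, and the server combines parameters weighted by local dataset size. A minimal sketch (plain FedAvg; the paper's exact aggregation is not specified in the abstract):

```python
def fed_avg(client_weights, client_sizes):
    """Server-side FedAvg: average the edge users' parameter vectors,
    weighted by local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two edge users with dataset sizes 100 and 300; the larger dominates.
global_w = fed_avg([[1.0, 2.0], [3.0, 6.0]], [100, 300])
print(global_w)
```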
Wireless Sensor Networks (WSNs) are pivotal for data acquisition, yet their reliability is severely constrained by routing voids induced by sparsity, uneven energy, and high dynamicity. To address these challenges, the Hybrid Acoustic-Optical Adaptive Void-handling Protocol (HAO-AVP) is proposed to satisfy the requirements for highly reliable communication in complex underwater environments. First, to counter uneven energy consumption, a reinforcement learning mechanism utilizing the Gini coefficient and entropy is adopted. By optimizing energy distribution, voids are proactively avoided. Second, to address routing interruptions caused by the high dynamicity of the topology, a collaborative mechanism for active prediction and real-time identification is constructed. Specifically, this mechanism integrates a Markov chain energy prediction model with on-demand hop discovery technology. Through this integration, precise anticipation and rapid localization of potential void risks are achieved. Finally, to recover damaged links at minimum cost, a four-level progressive recovery strategy, comprising intra-medium adjustment, cross-medium hopping, path backtracking, and Autonomous Underwater Vehicle (AUV)-assisted recovery, is designed. This strategy adaptively selects recovery measures based on the severity of the void. Simulation results demonstrate that, compared with existing mainstream protocols, the void identification rate of the proposed protocol is improved by approximately 7.6%, 8.4%, 13.8%, 19.5%, and 25.3%, respectively, and the void recovery rate is increased by approximately 4.3%, 9.6%, 12.0%, 18.4%, and 24.2%, respectively. In particular, enhanced robustness and a prolonged network life cycle are exhibited in sparse and dynamic networks.
{"title":"HAO-AVP: An Entropy-Gini Reinforcement Learning Assisted Hierarchical Void Repair Protocol for Underwater Wireless Sensor Networks.","authors":"Lijun Hao, Chunbo Ma, Jun Ao","doi":"10.3390/s26020684","DOIUrl":"10.3390/s26020684","url":null,"abstract":"<p><p>Wireless Sensor Networks (WSNs) are pivotal for data acquisition, yet reliability is severely constrained by routing voids induced by sparsity, uneven energy, and high dynamicity. To address these challenges, the Hybrid Acoustic-Optical Adaptive Void-handling Protocol (HAO-AVP) is proposed to satisfy the requirements for highly reliable communication in complex underwater environments. First, targeting uneven energy, a reinforcement learning mechanism utilizing Gini coefficient and entropy is adopted. By optimizing energy distribution, voids are proactively avoided. Second, to address routing interruptions caused by the high dynamicity of topology, a collaborative mechanism for active prediction and real-time identification is constructed. Specifically, this mechanism integrates a Markov chain energy prediction model with on-demand hop discovery technology. Through this integration, precise anticipation and rapid localization of potential void risks are achieved. Finally, to recover damaged links at the minimum cost, a four-level progressive recovery strategy, comprising intra-medium adjustment, cross-medium hopping, path backtracking, and Autonomous Underwater Vehicle (AUV)-assisted recovery, is designed. This strategy is capable of adaptively selecting recovery measures based on the severity of the void. Simulation results demonstrate that, compared with existing mainstream protocols, the void identification rate of the proposed protocol is improved by approximately 7.6%, 8.4%, 13.8%, 19.5%, and 25.3%, respectively, and the void recovery rate is increased by approximately 4.3%, 9.6%, 12.0%, 18.4%, and 24.2%, respectively. 
In particular, enhanced robustness and a prolonged network life cycle are exhibited in sparse and dynamic networks.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12846151/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
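The two dispersion measures the HAO-AVP abstract names for gauging energy imbalance have standard discrete forms. Below is a minimal sketch over a vector of residual node energies; the function names are illustrative, and how the protocol folds these quantities into its reinforcement-learning reward is not specified in the abstract.

```python
import numpy as np

def gini(energy):
    """Gini coefficient of residual node energies: 0 = perfectly even, -> 1 = concentrated."""
    e = np.sort(np.asarray(energy, dtype=float))
    n = e.size
    i = np.arange(1, n + 1)
    # Standard closed form: G = sum_i (2i - n - 1) * e_i / (n * sum(e)), e sorted ascending
    return np.sum((2 * i - n - 1) * e) / (n * np.sum(e))

def normalized_entropy(energy):
    """Shannon entropy of the energy-share distribution, normalized to [0, 1]."""
    p = np.asarray(energy, dtype=float)
    n = p.size
    p = p / p.sum()
    p = p[p > 0]                       # exhausted nodes contribute 0 (0 * log 0 -> 0)
    return -np.sum(p * np.log(p)) / np.log(n)
```

A perfectly balanced network gives a Gini coefficient of 0 and normalized entropy of 1; energy concentrated on few nodes pushes the two measures in opposite directions, which is presumably why both are used.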
The rapid advancement of communication systems has heightened the demand for efficient and robust modulation recognition. Conventional deep learning-based methods, however, often struggle in practical few-shot scenarios where acquiring sufficient labeled training data is prohibitively expensive. To bridge this gap, this paper proposes a hybrid transfer learning (HTL) approach that synergistically combines the representational power of deep feature extraction with the flexibility and stability of traditional machine learning (ML) classifiers. The proposed method capitalizes on knowledge transferred from large-scale auxiliary datasets through pre-training, followed by few-shot adaptation using simple ML classifiers. Multiple classical ML classifiers are incorporated and evaluated within the HTL framework for few-shot modulation recognition (FSMR). Comprehensive experiments demonstrate that HTL consistently outperforms existing baseline methods in such data-scarce settings. Furthermore, a detailed analysis of several key parameters is conducted to assess their impact on performance and to inform deployment in practical environments. Notably, the results indicate that the K-nearest neighbor classifier, owing to its instance-based and non-parametric nature, delivers the most robust and generalizable performance within the HTL paradigm, offering a promising solution for reliable FSMR in real-world applications.
{"title":"Leveraging Machine Learning Classifiers in Transfer Learning for Few-Shot Modulation Recognition.","authors":"Song Li, Yong Wang, Jun Xiong, Xia Wang","doi":"10.3390/s26020674","DOIUrl":"10.3390/s26020674","url":null,"abstract":"<p><p>The rapid advancement of communication systems has heightened the demand for efficient and robust modulation recognition. Conventional deep learning-based methods, however, often struggle in practical few-shot scenarios where acquiring sufficient labeled training data is prohibitive. To bridge this gap, this paper proposes a hybrid transfer learning (HTL) approach that synergistically combines the representation power of deep feature extraction with the flexibility and stability of traditional machine learning (ML) classifiers. The proposed method capitalizes on knowledge transferred from large-scale auxiliary datasets through pre-training, followed by few-shot adaptation using simple ML classifiers. Multiple classical ML classifiers are incorporated and evaluated within the HTL framework for few-shot modulation recognition (FSMR). Comprehensive experiments demonstrate that HTL consistently outperforms existing baseline methods in such data-scarce settings. Furthermore, a detailed analysis of several key parameters is conducted to assess their impact on performance and to inform deployment in practical environments. 
Notably, the results indicate that the K-nearest neighbor classifier, owing to its instance-based and non-parametric nature, delivers the most robust and generalizable performance within the HTL paradigm, offering a promising solution for reliable FSMR in real-world applications.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12846240/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
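The final HTL stage, fitting an instance-based classifier on embeddings from a frozen pre-trained extractor, can be sketched with a minimal NumPy k-nearest-neighbor vote. The feature vectors here are placeholders for backbone outputs; the actual extractor, datasets, and distance metric used in the paper are not given in the abstract.

```python
import numpy as np

def knn_predict(support_feats, support_labels, query_feats, k=1):
    """Classify each query embedding by majority vote among its k nearest
    support embeddings (Euclidean distance)."""
    support_feats = np.asarray(support_feats, dtype=float)
    query_feats = np.asarray(query_feats, dtype=float)
    support_labels = np.asarray(support_labels)
    # Pairwise distance matrix of shape (n_query, n_support)
    d = np.linalg.norm(query_feats[:, None, :] - support_feats[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]        # indices of the k closest supports
    return np.array([np.bincount(support_labels[row]).argmax() for row in nearest])
```

In an N-way K-shot setting, `support_feats` would hold the N*K labeled embeddings produced by the pre-trained backbone; being non-parametric, the classifier needs no further training, which matches the stability the abstract attributes to KNN.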
Background: This study aimed to explore the effects of neuromuscular electrical stimulation (NMES) combined with water-based resistance training on muscle activation and coordination during freestyle kicking. Methods: Thirty national-level male freestyle swimmers were randomly assigned to an experimental group (NMES + water-based training) or a control group (water-based training only) for a 12-week intervention. The experimental group received NMES pretreatment before each session. Underwater surface electromyography (sEMG) synchronized with high-speed video was used to collect muscle activation data and corresponding kinematic information during the freestyle kick. The sEMG signals were then processed using time-domain analysis, including integrated electromyography (iEMG), which reflects the cumulative electrical activity of muscles, and root mean square amplitude (RMS), which indicates the intensity of muscle activation. Non-negative matrix factorization (NMF) was further applied to extract and characterize muscle synergy patterns. Results: The experimental group showed significantly higher iEMG and RMS values in key muscles during both kicking phases. Within the core propulsion synergy, the muscle weightings of the vastus medialis and biceps femoris increased significantly, while the activation duration of the postural adjustment synergy was shortened. The number of synergies showed no significant difference between groups.
{"title":"Effects of NMES Combined with Water-Based Resistance Training on Muscle Coordination in Freestyle Kick Movement.","authors":"Yaohao Guo, Tingyan Gao, Jun Liu","doi":"10.3390/s26020673","DOIUrl":"10.3390/s26020673","url":null,"abstract":"<p><p><b>Background:</b> This study aimed to explore the effects of neuromuscular electrical stimulation (NMES) combined with water-based resistance training on muscle activation and coordination during freestyle kicking. <b>Methods:</b> Thirty National Level male freestyle swimmers were randomly assigned to an experimental group (NMES + water-based training) or a control group (water-based training only) for a 12-week intervention. The experimental group received NMES pretreatment before each session. Underwater surface electromyography (sEMG) synchronized with high-speed video was used to collect muscle activation data and corresponding kinematic information during the freestyle kick. The sEMG signals were then processed using time-domain analysis, including integrated electromyography (iEMG), which reflects the cumulative electrical activity of muscles, and root mean square amplitude (RMS), which indicates the intensity of muscle activation. Non-negative matrix factorization (NMF) was further applied to extract and characterize muscle synergy patterns. <b>Results:</b> The experimental group showed significantly higher iEMG and RMS values in key muscles during both kicking phases. Within the core propulsion synergy, muscle weighting of vastus medialis and biceps femoris increased significantly, while activation duration of the postural adjustment synergy was shortened. The number of synergies showed no significant difference. 
<b>Conclusions:</b> NMES combined with water-based resistance training enhances muscle activation and optimizes neuromuscular coordination strategies, offering a novel approach to improving sport-specific performance.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12845746/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
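The two time-domain sEMG features defined in the abstract (iEMG as cumulative electrical activity, RMS as activation intensity) have standard discrete forms. A minimal sketch over a single windowed signal follows, using rectangular integration for iEMG; the study's windowing, filtering, and normalization choices are not specified in the abstract and are omitted here.

```python
import numpy as np

def rms(x):
    """Root mean square amplitude: intensity of muscle activation in the window."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean(x ** 2))

def iemg(x, fs):
    """Integrated EMG: area under the rectified signal over the window,
    approximated as sum(|x|) / fs (units: amplitude * seconds)."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x)) / fs
```

Both are computed per muscle and per kicking phase; the NMF synergy analysis then operates on envelopes derived from such rectified signals, not on these scalar summaries.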