Jiancheng Jin, Gang Wang, Yuanhang Qiu, Siyuan Gong, Bo Ren
Accurate picking of microseismic P-wave arrival times is essential for the localization and monitoring of mining-induced seismic events. Conventional Short-Term Average/Long-Term Average (STA/LTA) detectors, while computationally efficient, are highly susceptible to noise interference. Conversely, deep learning approaches exhibit superior noise robustness but often involve substantial computational redundancy and compromised real-time performance. To address these limitations, we propose a novel two-stage picking framework that integrates STA/LTA with a lightweight U-Net, enabling rapid preliminary detection followed by fine-grained refinement. In the first stage, STA/LTA rapidly scans continuous waveforms to identify candidate windows potentially containing P-wave arrivals. In the second stage, a lightweight U-Net performs sample-level regression within each candidate window to refine arrival-time estimates with high precision. This coarse-to-fine paradigm effectively balances computational efficiency and picking accuracy. Experimental validation on 500 Hz microseismic data acquired from a coal mine in Gansu Province demonstrates that the proposed method achieves a hit rate of 63.21% within a tolerance window of ±0.01 s. This represents performance improvements of 25.42% and 40.47% over convolutional neural network (CNN) and STA/LTA methods, respectively, while reducing the mean absolute error to 0.0130 s. Furthermore, the model exhibits consistent performance on independent test sets, confirming its generalization capability and noise robustness. By combining the computational efficiency of STA/LTA with the representational power of deep learning, the proposed approach demonstrates significant potential for real-time industrial deployment.
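The coarse detection stage described in this abstract can be sketched as follows. This is an illustrative implementation of the classic STA/LTA trigger on a synthetic 500 Hz trace, not the authors' code; the window lengths, trigger threshold, and candidate half-width are assumed values for demonstration only.

```python
import numpy as np

def sta_lta(x, fs, sta_s=0.05, lta_s=0.5):
    """Causal STA/LTA: short-term energy average over the most recent
    sta_s seconds divided by the long-term average over the lta_s
    seconds immediately preceding it."""
    x2 = np.asarray(x, dtype=float) ** 2
    n_s, n_l = int(sta_s * fs), int(lta_s * fs)
    c = np.concatenate(([0.0], np.cumsum(x2)))
    e = np.arange(n_s + n_l, len(x2) + 1)            # window end indices
    sta = (c[e] - c[e - n_s]) / n_s
    lta = (c[e - n_s] - c[e - n_s - n_l]) / n_l
    ratio = np.ones(len(x2))
    ratio[e - 1] = sta / np.maximum(lta, 1e-12)
    return ratio

# Toy trace at the paper's 500 Hz rate: noise, then a 40 Hz "arrival" at 1.0 s.
fs = 500
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(2 * fs)
x[fs:] += np.sin(2 * np.pi * 40.0 * np.arange(fs) / fs)

ratio = sta_lta(x, fs)
trigger = int(np.argmax(ratio > 3.0))    # first sample above the trigger level
pick_time = trigger / fs                 # coarse pick handed to stage 2
half = int(0.2 * fs)                     # hypothetical ±0.2 s candidate window
window = (trigger - half, trigger + half)
print(f"coarse pick ~ {pick_time:.3f} s, candidate window = {window}")
```

In the proposed framework, each such candidate window would then be passed to the lightweight U-Net for sample-level refinement of the arrival time.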
Two-Stage Microseismic P-Wave Arrival Picking via STA/LTA-Guided Lightweight U-Net. Sensors 26(5); published 7 March 2026; DOI: 10.3390/s26051693; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987285/pdf/
Fernando Daniel Farfán, Ana Lía Albarracín, Leonardo Ariel Cano, Eduardo Fernández
Time-frequency (TF) characterization of electromyographic (EMG) bursts is essential for accurately assessing muscle function, particularly when the signals exhibit a high degree of nonstationarity. In this exploratory study, we investigated the temporal dynamics of the spectral components associated with short-latency EMG bursts using several TF analysis techniques. Specifically, we compared the performance and interpretability of spectrograms obtained via the short-time Fourier transform (STFT), the continuous wavelet transform (CWT), and noise-assisted multivariate empirical mode decomposition (NA-MEMD), applied to EMG signals recorded from the biceps femoris muscle of freely moving rats in an animal model of Parkinson's disease, acquired using chronically implanted bipolar electrodes during treadmill locomotion. For each method, we evaluated its effectiveness in capturing transient variations in frequency content, the stability of extracted features across bursts, and the extent to which these features reflect physiologically meaningful aspects of muscle activation. The results show that TF approaches reveal complementary information about burst structure; NA-MEMD provides greater adaptability to nonlinear and nonstationary components, whereas STFT- and CWT-based representations offer more controlled and comparable analyses. Overall, these findings highlight the value of TF analysis as a methodological tool for evaluating muscle function and provide a solid foundation for selecting analytical strategies in studies where EMG bursts exhibit complex and highly variable spectral profiles.
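A minimal sketch of the STFT-based branch of such an analysis is shown below, applied to a synthetic nonstationary burst whose dominant frequency shifts mid-signal. The window and hop lengths and the 80/200 Hz test frequencies are illustrative assumptions, not values from the study.

```python
import numpy as np

def stft_spectrogram(x, fs, win_s=0.064, hop_s=0.016):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    n_win, n_hop = int(win_s * fs), int(hop_s * fs)
    win = np.hanning(n_win)
    starts = np.arange(0, len(x) - n_win + 1, n_hop)
    frames = np.stack([x[s:s + n_win] * win for s in starts])
    spec = np.abs(np.fft.rfft(frames, axis=1)).T     # shape (freq, time)
    freqs = np.fft.rfftfreq(n_win, 1.0 / fs)
    times = (starts + n_win // 2) / fs               # frame-center times
    return freqs, times, spec

# Synthetic "burst": 80 Hz in the first half, 200 Hz in the second,
# mimicking a within-burst spectral shift (illustrative only).
fs = 1000
t = np.arange(fs) / fs
x = np.where(t < 0.5, np.sin(2 * np.pi * 80 * t), np.sin(2 * np.pi * 200 * t))

freqs, times, spec = stft_spectrogram(x, fs)
peak_early = freqs[spec[:, times < 0.4].mean(axis=1).argmax()]
peak_late = freqs[spec[:, times > 0.6].mean(axis=1).argmax()]
print(f"dominant frequency: {peak_early:.0f} Hz early, {peak_late:.0f} Hz late")
```

The fixed window length here is exactly the STFT trade-off the study probes: shorter windows sharpen time localization of transient bursts at the cost of frequency resolution, which is what motivates comparing STFT against CWT and NA-MEMD.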
Assessing Time-Frequency Analysis Methods for Non-Stationary EMG Bursts: Application to an Animal Model of Parkinson's Disease. Sensors 26(5); published 7 March 2026; DOI: 10.3390/s26051688; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12986578/pdf/
Mouhamed Aghiad Raslan, Martin Schmidhammer, Ibrahim Rashdan, Fabian de Ponte Müller, Tobias Uhlich, Andreas Becker
The increasing risk to Vulnerable Road Users (VRUs) at urban intersections necessitates advanced safety mechanisms capable of operating effectively under diverse conditions, including adverse weather like heavy rain. While optical sensors such as cameras and LiDAR often degrade in poor visibility, Radio Frequency (RF)-based systems offer resilient, all-weather tracking. This paper presents a novel approach to enhancing VRU protection by fusing two RF modalities: radar sensors and Ultra-Wideband (UWB) technology, a strong candidate for Joint Communication and Sensing (JCS). The research, conducted as part of the VIDETEC-2 project, addresses the limitations of existing vehicle-based and infrastructure-based systems, particularly in scenarios involving occlusions and blind spots. By leveraging radar's environmental robustness alongside UWB's precise, cost-effective short-range communication and localization, the proposed system delivers the framework for continuous vehicle and VRU tracking. The fusion of these sensor modalities, managed through a hybrid Kalman filter approach integrating an Unscented Kalman Filter (UKF) and an Extended Kalman Filter (EKF), allows reliable VRU tracking even in challenging urban scenarios. The experimental results demonstrate a reduction in tracking uncertainty and highlight the system's potential to serve as a more accurate and responsive safety mechanism for VRUs at intersections. This work contributes to the development of intelligent road infrastructures, laying the foundation for future advancements in urban traffic safety.
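The covariance-tightening effect of fusing the two RF modalities can be illustrated with a minimal linear Kalman update. The paper's system uses a UKF/EKF hybrid on a full vehicle/VRU state; this 1-D position-velocity stand-in with assumed noise values only shows why a second, more precise measurement (UWB) reduces uncertainty beyond the radar-only update.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard linear Kalman measurement update."""
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State: [position (m), velocity (m/s)]; prior after a prediction step.
x = np.array([10.0, 1.2])
P = np.diag([4.0, 1.0])
H = np.array([[1.0, 0.0]])                    # both sensors observe position

# Sequential fusion: radar first (coarser), then UWB (more precise).
x, P = kf_update(x, P, np.array([10.8]), H, np.array([[1.0]]))   # radar
var_after_radar = P[0, 0]
x, P = kf_update(x, P, np.array([10.5]), H, np.array([[0.04]]))  # UWB
var_after_uwb = P[0, 0]
print(f"position {x[0]:.2f} m, variance {var_after_radar:.3f} -> {var_after_uwb:.3f}")
```

Each update shrinks the position variance, which is the "reduction in tracking uncertainty" the experimental results report, here reproduced in toy form.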
Robust Localization and Tracking of VRUs with Radar and Ultra-Wideband Sensors for Traffic Safety. Sensors 26(5); published 7 March 2026; DOI: 10.3390/s26051690; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987293/pdf/
Jinyu Zhang, Xiaopeng Yan, Xinhong Hao, Tai An, Erwa Dong, Jian Dai
The automatic modulation recognition (AMR) of low probability of intercept (LPI) signals has attracted considerable interest in electronic reconnaissance research. This recognition technology aims to design a classifier that enables the identification of signals with different modulation types. In deep learning models such as convolutional neural networks (CNNs), time-frequency images (TFIs) of the signal serve as input from which features are extracted for classification. To improve recognition accuracy, especially under low signal-to-noise ratios (SNRs), we propose an AMR method for radio frequency proximity sensor signals based on a TFI enhancement network. The TFIs are denoised by a per-pixel kernel prediction network (KPN), which improves TFI quality and achieves denoising performance comparable to traditional TFI reconstruction methods (e.g., sparse-representation and low-rank-approximation methods) at significantly lower computational cost. The denoised TFIs, with enhanced signal quality and reduced noise, are then fed into the RetinalNet-based classifier as high-quality input features. This enhancement is crucial for the subsequent recognition stage, as it significantly improves modulation recognition accuracy, particularly under challenging low-SNR conditions. Simulation results show that the proposed method can accurately identify the modulation types of different radio frequency proximity sensors that are aliased in the time-frequency domain under low SNRs, and the average recognition accuracy remains above 97% when the SNR is above -10 dB.
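The core KPN operation, applying a separately predicted kernel at every pixel, can be sketched as below. This is a hypothetical illustration: the network that predicts the kernels is replaced by uniform 3x3 kernels, whereas the paper's KPN would predict spatially varying, content-adaptive weights.

```python
import numpy as np

def apply_per_pixel_kernels(img, kernels):
    """Denoise by taking, at each pixel, a weighted average of its KxK
    neighborhood using that pixel's own predicted kernel."""
    H, W = img.shape
    K = kernels.shape[-1]
    pad = K // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + K, j:j + K]
            w = kernels[i, j]
            out[i, j] = (patch * w).sum() / w.sum()
    return out

# Toy TFI-like image: smooth 2-D bump plus additive noise.
rng = np.random.default_rng(1)
clean = np.outer(np.sin(np.linspace(0, np.pi, 32)),
                 np.sin(np.linspace(0, np.pi, 32)))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
kernels = np.ones((32, 32, 3, 3))     # stand-in for the network's output
denoised = apply_per_pixel_kernels(noisy, kernels)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(f"MSE noisy={mse_noisy:.4f}, denoised={mse_denoised:.4f}")
```

Even with these trivial uniform kernels the weighted-neighborhood average cuts the noise energy; the learned, per-pixel kernels are what let a KPN do so while preserving sharp time-frequency ridges.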
Automatic Modulation Recognition for Radio Mixed Proximity Sensor Signals Based on a Time-Frequency Image Enhancement Network. Sensors 26(5); published 6 March 2026; DOI: 10.3390/s26051677; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987066/pdf/
Marcin Słomiany, Jacek Dybała, Grzegorz Gawdzik, Mateusz Maciaś, Arkadiusz Orłowski
This paper presents a method for material classification of objects detected by a laser scanner (LiDAR) used in autonomous mobile robot navigation. The proposed approach operates on a single-frame LiDAR scan composed of single-beam echoes and addresses materials with different reflective properties, including transparent glass surfaces. Material classification is performed by comparing measured reflection intensity profiles, defined as functions of distance and beam incidence angle, with reference profiles constructed for selected material classes. In addition to normalized reflection intensity, the gradient of the intensity profile is used to support discrimination in regions where material-dependent characteristics overlap. Experimental results obtained in indoor environments containing glass surfaces demonstrate that the proposed method enables reliable material type classification without multi-scan data accumulation or multi-sensor fusion.
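The profile-matching idea can be sketched as a nearest-reference classifier over intensity-vs-angle profiles. The two reference shapes below are assumptions based on general reflection physics (a broad, roughly Lambertian fall-off for a diffuse painted wall versus a narrow specular spike for glass), not the paper's measured reference profiles.

```python
import numpy as np

angles = np.linspace(0, 60, 13)     # beam incidence angle, degrees
reference = {
    # Diffuse surface: broad cosine-like intensity fall-off.
    "painted_wall": 0.80 * np.cos(np.radians(angles)),
    # Glass: strong near-normal specular return, rapid decay off-normal.
    "glass": 0.60 * np.exp(-(angles / 8.0) ** 2),
}

def classify(profile):
    """Assign the material whose reference profile is closest in L2 norm."""
    return min(reference, key=lambda m: np.linalg.norm(profile - reference[m]))

# Simulated single-scan measurement of a glass pane with sensor noise.
rng = np.random.default_rng(2)
measured = 0.60 * np.exp(-(angles / 8.0) ** 2) + 0.02 * rng.standard_normal(13)
label = classify(measured)
print(label)
```

The gradient-of-intensity feature mentioned in the abstract would be a natural extension here: comparing `np.gradient(profile)` against reference gradients to separate classes whose raw profiles overlap.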
Material Identification of Scanned Objects Based on the Classification of the Laser Reflection Intensity Profile. Sensors 26(5); published 6 March 2026; DOI: 10.3390/s26051666; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12986717/pdf/
The increasing use of telerehabilitation has intensified the need for validated smartphone sensor-based tools capable of accurately capturing joint range of motion (ROM). This study examined the criterion validity of the PhysioMaster application compared with a universal goniometer during in-person assessments and evaluated the inter-method reliability between in-person and online PhysioMaster measurements. Thirty healthy young adults underwent standardized hip, knee, and ankle ROM testing using both approaches. Criterion validity was limited for most joints: ankle plantarflexion showed the strongest validity and dorsiflexion a moderate association, whereas hip and knee ROM agreed poorly with goniometric values. Despite limited absolute agreement, PhysioMaster demonstrated moderate to good inter-method reliability for hip and knee ROM, indicating consistency across assessment modes. These findings suggest that while PhysioMaster may not serve as a direct substitute for in-person goniometry, it shows potential as a consistent tool for tracking ROM changes remotely, particularly for hip and knee movements.
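The distinction between absolute agreement and consistency that this abstract draws can be made concrete with a Bland-Altman style comparison. The data below are synthetic (an app measurement with an assumed constant bias and random error relative to the goniometer), not the study's measurements; the point is that a method can correlate strongly while still disagreeing in absolute terms.

```python
import numpy as np

# Synthetic paired ROM measurements for 30 participants (degrees).
rng = np.random.default_rng(3)
goniometer = rng.uniform(30, 130, 30)
app = goniometer + rng.normal(2.0, 4.0, 30)   # assumed bias ~2 deg, SD ~4 deg

# Bland-Altman: mean bias and 95% limits of agreement.
diff = app - goniometer
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

# Consistency: Pearson correlation between the two methods.
r = np.corrcoef(goniometer, app)[0, 1]
print(f"bias={bias:.1f} deg, LoA=[{loa_low:.1f}, {loa_high:.1f}], r={r:.2f}")
```

Here `r` is high (consistent tracking of between-subject differences) even though the limits of agreement span several degrees, mirroring the paper's finding of good inter-method reliability alongside limited absolute validity.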
Rehab Aljuhni, Zainab Aldarwish, Shroug Almutairi. Criterion Validity and Inter-Method Reliability of a Smartphone Sensor-Based Application for Lower-Limb Range of Motion: In-Person vs. Tele-Assessment. Sensors 26(5); published 6 March 2026; DOI: 10.3390/s26051661; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987158/pdf/
By 2050, the global population is expected to reach approximately 10 billion, leading to a projected 50% increase in food demand relative to 2013 levels. If not adequately anticipated, this growing demand will place significant strain on agri-food systems worldwide, with disproportionate impacts on low- and middle-income countries. Moreover, current projections may underestimate the accelerating effects of climate change, political instability, and civil unrest, which continue to disrupt food production and distribution systems. In this context, technological advancements offer a promising pathway to enhance efficiency, improve transparency, and mitigate risks related to food safety, adulteration, and counterfeiting. Emerging innovations can decouple food production from environmental degradation while strengthening monitoring, verification, and accountability across supply chains. This review examines state-of-the-art technologies developed to support traceability and anti-counterfeiting in agri-food supply chains, considering their application across the full spectrum of stakeholders. To provide a system-level perspective, the review adopts a five-layer socio-technical traceability and anti-counterfeiting framework, comprising identity, sensing, intelligence, integrity, and interaction layers, which is used to map enabling technologies and reinterpret the evolution of traceability systems (TS 1.0-TS 4.0) as a progression of functional capabilities rather than isolated technological upgrades. Using this framework, the review analyzes the advantages and limitations of current solutions and clarifies how traceability and anti-counterfeiting functions emerge through technology integration. It further identifies gaps that hinder large-scale and equitable adoption. 
Finally, future research directions are outlined to address current technical, economic, and governance challenges and to guide the development of more resilient, trustworthy, and sustainable agri-food traceability systems.
Mohamed Riad Sebti, Ultan McCarthy, Anastasia Ktenioudaki, Mariateresa Russo, Massimo Merenda. Traceability and Anti-Counterfeiting in Agri-Food Supply Chains: A Review of RFID, IoT, Blockchain, and AI Technologies. Sensors 26(5); published 6 March 2026; DOI: 10.3390/s26051685; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987207/pdf/
Nicholaus Zilinski, Ash M Parameswaran, Bonnie L Gray, Teresa Cheung
Optically pumped magnetometers (OPMs) provide a non-cryogenic alternative to superconducting quantum interference devices (SQUIDs) for detecting weak biomagnetic fields. We report the design, construction, and characterization of a single-cell intrinsic OPM gradiometer. The gradiometer employs a rubidium-87 vapor cell in an orthogonal pump and probe beam configuration. The pump beam was split to illuminate two parallel sensing regions of the cell, separated by a baseline of 3 cm, with opposing circular polarization. A linearly polarized probe beam propagated through both regions and was captured by a balanced polarimeter whose output directly measured the spatial magnetic gradient. This prototype achieved a common-mode rejection ratio exceeding 50 dB and a sensitivity of 267 pT/cm/√Hz without passive magnetic shielding, using active ambient-field coils. As a proof of concept, we recorded preliminary cardiac-synchronous magnetic measurements using an optical pulse sensor for beat segmentation. After bandpass filtering and ensemble averaging, a cardiac-synchronous waveform was observed, consistent with cardiac timing. Unlike many multi-cell gradiometers that require complex calibration, modulation, and passive shielding, this single-cell design reduces cost and complexity.
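The arithmetic behind an intrinsic gradiometer's common-mode rejection can be illustrated as follows. All numbers here are assumed for illustration (a 3 cm baseline as in the paper, but an invented ambient field and gain mismatch); the sketch only shows how subtracting the two regions' responses cancels a uniform field while preserving the gradient.

```python
import numpy as np

baseline_cm = 3.0                        # sensing-region separation (paper's value)
b1, b2 = 50_000.0, 50_000.3              # assumed field at each region, pT

# Ideal differential output: a uniform field cancels, the gradient survives.
gradient = (b2 - b1) / baseline_cm       # pT/cm

# A small residual gain mismatch between the two regions lets a fraction of
# the common-mode field leak into the differential channel; CMRR quantifies it.
gain_mismatch = 1e-3                     # assumed 0.1% imbalance
common_mode_leak = gain_mismatch * (b1 + b2) / 2     # pT leaking through
cmrr_db = 20 * np.log10(1.0 / gain_mismatch)
print(f"gradient={gradient:.3f} pT/cm, leak={common_mode_leak:.0f} pT, "
      f"CMRR={cmrr_db:.0f} dB")
```

With a 0.1% mismatch the model gives 60 dB of rejection; the reported >50 dB without passive shielding corresponds to keeping such imbalances below roughly the 0.3% level.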
A Single-Cell Optically Pumped Intrinsic Gradiometer. Sensors 26(5); published 6 March 2026; DOI: 10.3390/s26051678; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987060/pdf/
A Study on Autonomous Driving Motion Sickness from the Perspective of Multimodal Human Signals
Su Young Kim, Yoon Sang Kim
Sensors 26(5), 2026-03-06. DOI: 10.3390/s26051675. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987010/pdf/
In autonomous driving, motion sickness (MS) arises from physical or visual stimuli, or a combination of both. However, objective quantification of MS level (MSL) remains limited beyond questionnaire-based assessments. Using multimodal human signals (physiological and behavioral) collected in an autonomous driving simulator, this study addresses the association between these signals and MSL across both MS types by (i) screening and curating a decade of human-signal MS studies (HS-Set) to establish a data-driven foundation for selecting target sensor domains and features, (ii) constructing a dataset with subjective measures of MSL (fast motion sickness scale and simulator sickness questionnaire (SSQ)) alongside human signals (electroencephalogram (EEG), photoplethysmogram (PPG), electrodermal activity (EDA), skin temperature, and head/eye movement), (iii) conducting a correlation analysis between MSL and the features identified from HS-Set, and (iv) quantifying multivariable contributions at the feature and sensor-domain levels through an explainable boosting machine (EBM). Key correlations include head amplitude/energy (pitch/surge) with SSQ total/oculomotor, eye entropy with nausea/oculomotor (positive), and EDA with nausea (negative). The EBM-based contribution analysis highlights EEG connectivity and head kinematics as dominant contributors; excluding EEG, the interpretability of single-domain models remains limited. Additionally, a combination of Head, PPG, and EDA domains retains over 80% of the full model's interpretability.
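Step (iii), correlating each extracted human-signal feature with the subjective MSL score, can be sketched as a simple rank-correlation screen. The feature names and data below are hypothetical placeholders, not the paper's dataset; Spearman correlation is used here only as one common choice for ordinal sickness scores.

```python
import numpy as np
from scipy.stats import spearmanr

def correlate_features(features, msl, names):
    """Rank-correlate each feature column with the motion-sickness level.
    Returns (name, rho, p-value) tuples, strongest correlation first."""
    results = []
    for j, name in enumerate(names):
        rho, p = spearmanr(features[:, j], msl)
        results.append((name, rho, p))
    return sorted(results, key=lambda r: abs(r[1]), reverse=True)

# Hypothetical features inspired by the abstract's findings.
rng = np.random.default_rng(1)
n = 120
msl = rng.uniform(0, 10, n)                    # subjective sickness scores
feats = np.column_stack([
    msl + rng.normal(0, 1.5, n),               # positively correlated feature
    rng.normal(0, 1.0, n),                     # unrelated feature
    -0.5 * msl + rng.normal(0, 2.0, n),        # negatively correlated feature
])
names = ["head_pitch_energy", "eye_entropy", "eda_level"]
ranked = correlate_features(feats, msl, names)
```

Sorting by absolute correlation surfaces the strongest candidate features regardless of sign, matching how the abstract reports both positive (eye entropy) and negative (EDA) associations.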
SGE-Flow: 4D mmWave Radar 3D Object Detection via Spatiotemporal Geometric Enhancement and Inter-Frame Flow
Huajun Meng, Zijie Yu, Cheng Li, Chao Li, Xiaojun Liu
Sensors 26(5), 2026-03-06. DOI: 10.3390/s26051679. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12986809/pdf/
4D millimeter-wave radar provides a promising solution for robust perception in adverse weather. Existing detectors still struggle with sparse and noisy point clouds, and maintaining real-time inference while achieving competitive accuracy remains challenging. We propose SGE-Flow, a streamlined PointPillars-based 4D radar 3D detector that embeds lightweight spatiotemporal geometric enhancements into the voxelization front-end. Velocity Displacement Compensation (VDC) leverages compensated radial velocity to align accumulated points in physical space and improve geometric consistency. Distribution-Aware Density (DAD) enables fast density feature extraction by estimating per-pillar density from simple statistical moments, which also restores vertical distribution cues lost during pillarization. To compensate for the absence of tangential velocity measurements, a Transformer-based Inter-frame Flow (IFF) module infers latent motion from frame-to-frame pillar occupancy changes. Evaluations on the View-of-Delft (VoD) dataset show that SGE-Flow achieves 53.23% 3D mean Average Precision (mAP) while running at 72 frames per second (FPS) on an NVIDIA RTX 3090. The proposed modules are plug-and-play and can also improve strong baselines such as MAFF-Net.
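The DAD idea of recovering density and vertical-distribution cues from simple statistical moments per pillar might be sketched as follows. The exact features used in SGE-Flow are not given in the abstract, so the choice of moments (count, mean height, height variance) is an illustrative assumption.

```python
import numpy as np

def pillar_density_features(points, pillar_ids, num_pillars):
    """For each pillar (one vertical column of the BEV grid), compute cheap
    statistical moments of the points' z-coordinates: point count, mean
    height, and height variance. The variance restores a vertical-
    distribution cue that plain pillarization discards."""
    z = points[:, 2]
    count = np.bincount(pillar_ids, minlength=num_pillars).astype(float)
    safe = np.maximum(count, 1.0)                      # avoid divide-by-zero
    mean_z = np.bincount(pillar_ids, weights=z, minlength=num_pillars) / safe
    mean_z2 = np.bincount(pillar_ids, weights=z * z, minlength=num_pillars) / safe
    var_z = mean_z2 - mean_z ** 2                      # second central moment
    return np.stack([count, mean_z, var_z], axis=1)    # (num_pillars, 3)

# Toy example: 5 points falling into 2 of 4 pillars.
pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0], [0.0, 0.0, 4.0],  # pillar 0
                [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]])                   # pillar 3
ids = np.array([0, 0, 0, 3, 3])
feats = pillar_density_features(pts, ids, 4)
```

Because everything reduces to `np.bincount` sums, the features cost a single pass over the points, which is consistent with the abstract's emphasis on keeping the voxelization front-end lightweight.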