Krzysztof Gierłowski, Michał Hoeft, Andrzej Bęben, Maciej Sosnowski
The popularity of unmanned vehicles in numerous areas of employment, combined with the diversity and continuing evolution of their payloads, makes the communication solutions utilized by such vehicles an element of particular importance. In our previous publication, we confirmed the general applicability of wireless local area network (WLAN) technologies as solutions suitable for providing control loop communication for unmanned surface vehicles (USVs). At the same time, our research indicated that WLAN technologies provide communication resources in excess of what is required for that task. In this paper, we aim to verify whether a WLAN-based USV communication solution can be reliably utilized for both time-sensitive control loop and high-throughput payload communication simultaneously, which could provide significant advantages during USV construction and operation. For this purpose, we analyzed the traffic parameters of popular USV payloads, designed a test system to monitor the impact of such traffic sharing a WLAN link with USV control loop communication, and conducted laboratory and field experiments. As initial results indicated a significant impact of payload traffic on the quality of control communication, we also propose a method of employing Commercial Off-The-Shelf (COTS) hardware in a manner that allows such link sharing to operate reliably under changing real-world conditions. Subsequent verification, first in the laboratory and then during a real-world USV field deployment, confirmed the effectiveness of the proposed method.
{"title":"Wireless Local Area Network Link Sharing in Unmanned Surface Vehicle Control Scenarios.","authors":"Krzysztof Gierłowski, Michał Hoeft, Andrzej Bęben, Maciej Sosnowski","doi":"10.3390/s26020751","DOIUrl":"https://doi.org/10.3390/s26020751","url":null,"abstract":"<p><p>The popularity of unmanned vehicles in numerous areas of employment, combined with the diversity and continuing evolution of their payloads, make the communication solutions utilized by such vehicles the element of a particular importance. In our previous publication, we confirmed a general applicability of wireless local area network (WLAN) technologies as solutions suitable to provide a control loop communication of unmanned surface vehicles (USVs). At the same time, our research indicated that WLAN technologies provide communication resources in excess of what is required for the above task. In this paper, we aim to verify if a WLAN-based USV communication solution can be reliably utilized for both time-sensitive control loop and high-throughput payload communication simultaneously, which could provide significant advantages during USV construction and operation. For this purpose, we analyzed traffic parameters of popular USV payloads, designed a test system to monitor the impact of such traffic sharing a WLAN link with a USV control loop communication and conducted laboratory and field experiments. As initial results indicated the significant impact of payload traffic on the quality of control communication, we have also proposed a method of employing Commercial Off The Shelf (COTS) hardware for this purpose, in a manner which allows the above-mentioned link sharing to operate reliably in changing real-world conditions. The subsequent verification, first in the laboratory and then during a real-world USV field deployment, confirmed the effectiveness of the proposed method.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Annamarie Guth, Marissa Dauner, Evan R Coffey, Michael Hannigan
Prescribed burning is a highly effective way to reduce wildfire risk; however, prescribed fires release harmful pollutants. Quantifying emissions from prescribed fires is valuable for atmospheric modeling and for understanding impacts on nearby communities. Emissions are commonly reported as emission factors, which are traditionally calculated cumulatively over an entire combustion event. However, cumulative emission factors do not capture variability in emissions throughout a combustion event. Reliable emission factor calculations require knowledge of the state of the plume, which is unavailable when equipment is deployed for multiple days. In this study, we evaluated two methods for detecting prescribed fire plumes: an event detection algorithm and a random forest model. Results show that the random forest model outperformed the event detection algorithm, with a detection accuracy of 61% and a 3% false positive rate, compared to 51% accuracy and a 31% false positive rate for the event detection algorithm. Overall, the random forest model provides more robust emission factor calculations and a promising framework for plume detection during future prescribed fires. This work offers a unique approach to fenceline monitoring, as it is, to our knowledge, one of the few projects to use fenceline monitoring to measure emissions from prescribed fire plumes.
{"title":"Using Low-Cost Sensors for Fenceline Monitoring to Measure Emissions from Prescribed Fires.","authors":"Annamarie Guth, Marissa Dauner, Evan R Coffey, Michael Hannigan","doi":"10.3390/s26020745","DOIUrl":"https://doi.org/10.3390/s26020745","url":null,"abstract":"<p><p>Prescribed burning is a highly effective way to reduce wildfire risk; however, prescribed fires release harmful pollutants. Quantifying emissions from prescribed fires is valuable for atmospheric modeling and understanding impacts on nearby communities. Emissions are commonly reported as emission factors, which are traditionally calculated cumulatively over an entire combustion event. However, cumulative emission factors do not capture variability in emissions throughout a combustion event. Reliable emission factor calculations require knowledge of the state of the plume, which is unavailable when equipment is deployed for multiple days. In this study, we evaluated two different methods used to detect prescribed fire plumes: the event detection algorithm and a random forest model. Results show that the random forest model outperformed the event detection algorithm, with a detection accuracy of 61% and a 3% false positive rate, compared to 51% accuracy and a 31% false positive rate for the event detection algorithm. Overall, the random forest model provides more robust emission factor calculations and a promising framework for plume detection on future prescribed fires. This work provides a unique approach to fenceline monitoring, as it is one of the only projects to our knowledge using fenceline monitoring to measure emissions from prescribed fire plumes.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yanping Cui, Xiaoxu He, Zhe Wu, Qiang Zhang, Yachao Cao
Non-stationary, multi-component vibration signals in rotating machinery are easily contaminated by strong background noise, which masks weak fault features and degrades diagnostic reliability. This paper proposes a joint denoising method that combines an improved cordyceps fungus optimization algorithm (ICFO), successive variational mode decomposition (SVMD), and an improved wavelet thresholding scheme. ICFO, enhanced by Chebyshev chaotic initialization, a longitudinal-transverse crossover fusion mutation operator, and a thinking innovation strategy, is used to adaptively optimize the SVMD penalty factor and number of modes. The optimized SVMD decomposes the noisy signal into intrinsic mode functions, which are classified into effective and noise-dominated components via the Pearson correlation coefficient. An improved wavelet threshold function, whose threshold is modulated by the sub-band signal-to-noise ratio, is then applied to the effective components, and the denoised signal is reconstructed. Simulation experiments on nonlinear, non-stationary signals with different noise levels (SNR = 1-20 dB) show that the proposed method consistently achieves the highest SNR and lowest RMSE compared to VMD, SVMD, VMD-WTD, CFO-SVMD, and WTD. Tests on CWRU bearing data and gearbox vibration signals with added -2 dB Gaussian white noise further confirm that the method yields the lowest residual variance ratio and highest signal energy ratio while preserving key fault characteristic frequencies.
{"title":"Vibration Signal Denoising Method Based on ICFO-SVMD and Improved Wavelet Thresholding.","authors":"Yanping Cui, Xiaoxu He, Zhe Wu, Qiang Zhang, Yachao Cao","doi":"10.3390/s26020750","DOIUrl":"https://doi.org/10.3390/s26020750","url":null,"abstract":"<p><p>Non-stationary, multi-component vibration signals in rotating machinery are easily contaminated by strong background noise, which masks weak fault features and degrades diagnostic reliability. This paper proposes a joint denoising method that combines an improved cordyceps fungus optimization algorithm (ICFO), successive variational mode decomposition (SVMD), and an improved wavelet thresholding scheme. ICFO, enhanced by Chebyshev chaotic initialization, a longitudinal-transverse crossover fusion mutation operator, and a thinking innovation strategy, is used to adaptively optimize the SVMD penalty factor and number of modes. The optimized SVMD decomposes the noisy signal into intrinsic mode functions, which are classified into effective and noise-dominated components via the Pearson correlation coefficient. An improved wavelet threshold function, whose threshold is modulated by the sub-band signal-to-noise ratio, is then applied to the effective components, and the denoised signal is reconstructed. Simulation experiments on nonlinear, non-stationary signals with different noise levels (SNR = 1-20 dB) show that the proposed method consistently achieves the highest SNR and lowest RMSE compared to VMD, SVMD, VMD-WTD, CFO-SVMD, and WTD. Tests on CWRU bearing data and gearbox vibration signals with added -2 dB Gaussian white noise further confirm that the method yields the lowest residual variance ratio and highest signal energy ratio while preserving key fault characteristic frequencies.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate, subject-specific estimation of cervical muscle forces is a critical prerequisite for advancing spinal biomechanics and clinical diagnostics. However, this task remains challenging due to substantial inter-individual anatomical variability and the invasiveness of direct measurement techniques. In this study, we propose a novel data-driven biomechanical framework that addresses these limitations by integrating massive-scale personalized musculoskeletal simulations with an efficient Feedforward Neural Network (FNN) model. We generated an unprecedented dataset comprising one million personalized OpenSim cervical models, systematically varying key anthropometric parameters (neck length, shoulder width, head mass) to robustly capture human morphological diversity. A random subset was selected for inverse dynamics simulations to establish a comprehensive, physics-based training dataset. Subsequently, an FNN was trained to learn a robust, nonlinear mapping from non-invasive kinematic and anthropometric inputs to the forces of 72 cervical muscles. The model's accuracy was validated on a test set, achieving a coefficient of determination (R²) exceeding 0.95 for all 72 muscle forces. This approach effectively transforms a computationally intensive biomechanical problem into a rapid estimation tool. Additionally, the framework incorporates a functional assessment module that evaluates motion deficits by comparing observed head trajectories against a simulated idealized motion envelope. Validation using data from a healthy subject and a patient with restricted mobility demonstrated the framework's ability to accurately track muscle force trends and precisely identify regions of functional limitation. This methodology offers a scalable and clinically translatable solution for personalized cervical muscle evaluation, supporting targeted rehabilitation and injury risk assessment based on readily obtainable sensor data.
{"title":"From Simplified Markers to Muscle Function: A Deep Learning Approach for Personalized Cervical Biomechanics Assessment Powered by Massive Musculoskeletal Simulation.","authors":"Yuanyuan He, Siyu Liu, Miao Li","doi":"10.3390/s26020752","DOIUrl":"https://doi.org/10.3390/s26020752","url":null,"abstract":"<p><p>Accurate, subject-specific estimation of cervical muscle forces is a critical prerequisite for advancing spinal biomechanics and clinical diagnostics. However, this task remains challenging due to substantial inter-individual anatomical variability and the invasiveness of direct measurement techniques. In this study, we propose a novel data-driven biomechanical framework that addresses these limitations by integrating massive-scale personalized musculoskeletal simulations with an efficient Feedforward Neural Network (FNN) model. We generated an unprecedented dataset comprising one million personalized OpenSim cervical models, systematically varying key anthropometric parameters (neck length, shoulder width, head mass) to robustly capture human morphological diversity. A random subset was selected for inverse dynamics simulations to establish a comprehensive, physics-based training dataset. Subsequently, an FNN was trained to learn a robust, nonlinear mapping from non-invasive kinematic and anthropometric inputs to the forces of 72 cervical muscles. The model's accuracy was validated on a test set, achieving a coefficient of determination (R<sup>2</sup>) exceeding 0.95 for all 72 muscle forces. This approach effectively transforms a computationally intensive biomechanical problem into a rapid tool. Additionally, the framework incorporates a functional assessment module that evaluates motion deficits by comparing observed head trajectories against a simulated idealized motion envelope. Validation using data from a healthy subject and a patient with restricted mobility demonstrated the framework's ability to accurately track muscle force trends and precisely identify regions of functional limitations. This methodology offers a scalable and clinically translatable solution for personalized cervical muscle evaluation, supporting targeted rehabilitation and injury risk assessment based on readily obtainable sensor data.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tianxiong Gao, Shuyan Zhang, Wutao Yao, Erping Shang, Jin Yang, Yong Ma, Yan Ma
To address the scarcity of sub-meter remote sensing samples and structural inconsistencies such as edge blur and contour distortion in super-resolution reconstruction, this paper proposes SRCT, a super-resolution method tailored for sub-meter remote sensing imagery. The method consists of two parts: external structure guidance and internal structure optimization. External structure guidance is jointly realized by the structure encoder (SE) and structure guidance module (SGM): the SE extracts key structural features from high-resolution images, and the SGM injects these structural features into the super-resolution network layer by layer, achieving structural transfer from external priors to the reconstruction network. Internal structure optimization is handled by the backbone network SGCT, which introduces a dual-branch residual dense group (DBRDG): one branch uses window-based multi-head self-attention to model global geometric structures, and the other branch uses lightweight convolutions to model local texture features, enabling the network to adaptively balance structure and texture reconstruction internally. Experimental results show that SRCT clearly outperforms existing methods on structure-related metrics, with DISTS reduced by 8.7% and LPIPS reduced by 7.2%, and significantly improves reconstruction quality in structure-sensitive regions such as building contours and road continuity, providing a new technical route for sub-meter remote sensing image super-resolution reconstruction.
{"title":"SRCT: Structure-Preserving Method for Sub-Meter Remote Sensing Image Super-Resolution.","authors":"Tianxiong Gao, Shuyan Zhang, Wutao Yao, Erping Shang, Jin Yang, Yong Ma, Yan Ma","doi":"10.3390/s26020733","DOIUrl":"https://doi.org/10.3390/s26020733","url":null,"abstract":"<p><p>To address the scarcity of sub-meter remote sensing samples and structural inconsistencies such as edge blur and contour distortion in super-resolution reconstruction, this paper proposes SRCT, a super-resolution method tailored for sub-meter remote sensing imagery. The method consists of two parts: external structure guidance and internal structure optimization. External structure guidance is jointly realized by the structure encoder (SE) and structure guidance module (SGM): the SE extracts key structural features from high-resolution images, and the SGM injects these structural features into the super-resolution network layer by layer, achieving structural transfer from external priors to the reconstruction network. Internal structure optimization is handled by the backbone network SGCT, which introduces a dual-branch residual dense group (DBRDG): one branch uses window-based multi-head self-attention to model global geometric structures, and the other branch uses lightweight convolutions to model local texture features, enabling the network to adaptively balance structure and texture reconstruction internally. Experimental results show that SRCT clearly outperforms existing methods on structure-related metrics, with DISTS reduced by 8.7% and LPIPS reduced by 7.2%, and significantly improves reconstruction quality in structure-sensitive regions such as building contours and road continuity, providing a new technical route for sub-meter remote sensing image super-resolution reconstruction.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decisions made by pilots and drivers suffering from depression can endanger the lives of hundreds of people, as demonstrated by the tragedies of Germanwings flight 9525 and Air India flight 171. Since the detection of depression is currently based largely on subjective self-reporting, there is an urgent need for fast, objective, and reliable detection methods. In our study, we present an artificial intelligence-based system that combines iris-based identification with the analysis of pupillometric and eye movement biomarkers, enabling the real-time detection of physiological signs of depression before driving or flying. The two-module model was evaluated based on data from 242 participants: the iris identification module operated with an Equal Error Rate of less than 0.5%, while the depression-detecting CNN-LSTM network achieved 89% accuracy and an AUC value of 0.94. Compared to the neutral state, depressed individuals responded to negative news with significantly greater pupil dilation (+27.9% vs. +18.4%), while showing a reduced or minimal response to positive stimuli (-1.3% vs. +6.2%). This was complemented by slower saccadic movement and longer fixation time, which is consistent with the cognitive distortions characteristic of depression. Our results indicate that pupillometric deviations relative to individual baselines can be reliably detected and used with high accuracy for depression screening. The presented system offers a preventive safety solution that could reduce the number of accidents caused by human error related to depression in road and air traffic in the future.
{"title":"Artificial Intelligence-Based Depression Detection.","authors":"Gabor Kiss, Patrik Viktor","doi":"10.3390/s26020748","DOIUrl":"https://doi.org/10.3390/s26020748","url":null,"abstract":"<p><p>Decisions made by pilots and drivers suffering from depression can endanger the lives of hundreds of people, as demonstrated by the tragedies of Germanwings flight 9525 and Air India flight 171. Since the detection of depression is currently based largely on subjective self-reporting, there is an urgent need for fast, objective, and reliable detection methods. In our study, we present an artificial intelligence-based system that combines iris-based identification with the analysis of pupillometric and eye movement biomarkers, enabling the real-time detection of physiological signs of depression before driving or flying. The two-module model was evaluated based on data from 242 participants: the iris identification module operated with an Equal Error Rate of less than 0.5%, while the depression-detecting CNN-LSTM network achieved 89% accuracy and an AUC value of 0.94. Compared to the neutral state, depressed individuals responded to negative news with significantly greater pupil dilation (+27.9% vs. +18.4%), while showing a reduced or minimal response to positive stimuli (-1.3% vs. +6.2%). This was complemented by slower saccadic movement and longer fixation time, which is consistent with the cognitive distortions characteristic of depression. Our results indicate that pupillometric deviations relative to individual baselines can be reliably detected and used with high accuracy for depression screening. The presented system offers a preventive safety solution that could reduce the number of accidents caused by human error related to depression in road and air traffic in the future.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zelong Yao, Xiuping Ran, Chenbo Yang, Ping Li, Rutian Bi
Accurate determination of soil organic carbon (SOC), which underpins soil health and safeguards ecological and food security, is crucial for local agricultural production. We aimed to investigate the influence of soil texture on hyperspectral models for predicting SOC content and to evaluate the role of different preprocessing methods and feature band selection algorithms in improving modeling efficiency. Laboratory-determined SOC content and hyperspectral reflectance data were obtained using soil samples from daylily cultivation areas in Yunzhou District, Datong City. Mathematical transformations, including Savitzky-Golay smoothing (SG), First Derivative (FD), Second Derivative (SD), Multiplicative Scatter Correction (MSC), and Standard Normal Variate (SNV), were applied to the spectral reflectance data. Feature bands extracted using the successive projection algorithm (SPA) and Competitive Adaptive Reweighted Sampling (CARS) were used to establish SOC content inversion models employing four algorithms: partial least-squares regression (PLSR), Random Forest (RF), Backpropagation Neural Network (BP), and Convolutional Neural Network (CNN). The results indicate the following: (1) Preprocessing effectively increased the correlation between soil spectral reflectance and SOC content. (2) SPA and CARS effectively screened the characteristic bands of SOC in daylily-cultivated soil from the spectral curves; SPA selected 4-11 bands and CARS selected 9-122 bands, and both algorithms facilitated model construction. (3) Among all the constructed models, the FD-CARS-PLSR model performed best, with coefficients of determination (R²) for the training and validation sets reaching 0.93 and 0.83, respectively, demonstrating high model stability and reliability. (4) Incorporating soil texture as an auxiliary variable into the PLSR inversion model improved the inversion accuracy, with accuracy gains ranging between 0.01 and 0.05.
{"title":"Hyperspectral Inversion of Soil Organic Carbon in Daylily Cultivation Areas of Yunzhou District.","authors":"Zelong Yao, Xiuping Ran, Chenbo Yang, Ping Li, Rutian Bi","doi":"10.3390/s26020740","DOIUrl":"https://doi.org/10.3390/s26020740","url":null,"abstract":"<p><p>Accurate determination of Soil Organic Carbon (SOC), which is the foundation of soil health and safeguards ecological and food security, is crucial in local agricultural production. We aimed to investigate the influence of soil texture on hyperspectral models for predicting SOC content and to evaluate the role of different preprocessing methods and feature band selection algorithms in improving modeling efficiency. Laboratory-determined SOC content and hyperspectral reflectance data were obtained using soil samples from daylily cultivation areas in Yunzhou District, Datong City. Mathematical transformations, including Savitzky-Golay smoothing (SG), First Derivative (FD), Second Derivative (SD), Multiplicative Scatter Correction (MSC), and Standard Normal Variate (SNV), were applied to the spectral reflectance data. Feature bands extracted based on the successive projection algorithm (SPA) and Competitive Adaptive Reweighted Sampling (CARS) were used to establish SOC content inversion models employing four algorithms: partial least-squares regression (PLSR), Random Forest (RF), Backpropagation Neural Network (BP), and Convolutional Neural Network (CNN). The results indicate the following: (1) Preprocessing can effectively increase the correlation between the soil spectral reflectance process and SOC content. (2) SPA and CARS effectively screened the characteristic bands of SOC in daylily cultivated soil from the spectral curves. The SPA algorithm and CARS selected 4-11 and 9-122 bands, respectively, and both algorithms facilitated model construction. (3) Among all the constructed models, the FD-CARS-PLSR performed most prominently, with coefficients of determination (R<sup>2</sup>) for the training and validation sets reaching 0.93 and 0.83, respectively, demonstrating high model stability and reliability. (4) Incorporating soil texture as an auxiliary variable into the PLSR inversion model improved the inversion accuracy, with accuracy gains ranging between 0.01 and 0.05.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Magnetic resonance imaging (MRI) super-resolution (SR) enables high-resolution reconstruction from low-resolution acquisitions, reducing scan time and easing hardware demands. However, most deep learning-based SR models are large and computationally heavy, limiting deployment in clinical workstations, real-time pipelines, and resource-restricted platforms such as low-field and portable MRI. We introduce CHARMS, a lightweight convolutional-Transformer hybrid with attention regularization optimized for MRI SR. CHARMS employs a Reverse Residual Attention Fusion backbone for hierarchical local feature extraction, Pixel-Channel and Enhanced Spatial Attention for fine-grained feature calibration, and a Multi-Depthwise Dilated Transformer Attention block for efficient long-range dependency modeling. Novel attention regularization suppresses redundant activations, stabilizes training, and enhances generalization across contrasts and field strengths. Across IXI, Human Connectome Project Young Adult, and paired 3T/7T datasets, CHARMS (~1.9M parameters; ~30 GFLOPs for a 256 × 256 input) surpasses leading lightweight and hybrid baselines (EDSR, PAN, W2AMSN-S, and FMEN) by 0.1-0.6 dB PSNR and up to 1% SSIM at ×2/×4 upscaling, while reducing inference time by ~40%. Cross-field fine-tuning yields 7T-like reconstructions from 3T inputs with ~6 dB PSNR and 0.12 SSIM gains over native 3T. With near-real-time performance (~11 ms/slice, ~1.6-1.9 s per 3D volume on an RTX 4090), CHARMS offers a compelling fidelity-efficiency balance for clinical workflows, accelerated protocols, and portable MRI.
{"title":"CHARMS: A CNN-Transformer Hybrid with Attention Regularization for MRI Super-Resolution.","authors":"Xia Li, Haicheng Sun, Tie-Qiang Li","doi":"10.3390/s26020738","DOIUrl":"https://doi.org/10.3390/s26020738","url":null,"abstract":"<p><p>Magnetic resonance imaging (MRI) super-resolution (SR) enables high-resolution reconstruction from low-resolution acquisitions, reducing scan time and easing hardware demands. However, most deep learning-based SR models are large and computationally heavy, limiting deployment in clinical workstations, real-time pipelines, and resource-restricted platforms such as low-field and portable MRI. We introduce CHARMS, a lightweight convolutional-Transformer hybrid with attention regularization optimized for MRI SR. CHARMS employs a Reverse Residual Attention Fusion backbone for hierarchical local feature extraction, Pixel-Channel and Enhanced Spatial Attention for fine-grained feature calibration, and a Multi-Depthwise Dilated Transformer Attention block for efficient long-range dependency modeling. Novel attention regularization suppresses redundant activations, stabilizes training, and enhances generalization across contrasts and field strengths. Across IXI, Human Connectome Project Young Adult, and paired 3T/7T datasets, CHARMS (~1.9M parameters; ~30 GFLOPs for 256 × 256) surpasses leading lightweight and hybrid baselines (EDSR, PAN, W2AMSN-S, and FMEN) by 0.1-0.6 dB PSNR and up to 1% SSIM at ×2/×4 upscaling, while reducing inference time ~40%. Cross-field fine-tuning yields 7T-like reconstructions from 3T inputs with ~6 dB PSNR and 0.12 SSIM gains over native 3T. With near-real-time performance (~11 ms/slice, ~1.6-1.9 s per 3D volume on RTX 4090), CHARMS offers a compelling fidelity-efficiency balance for clinical workflows, accelerated protocols, and portable MRI.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robotic total stations are multi-sensor integrated instruments widely used in displacement monitoring. Monitoring results are usually calculated using the polar coordinate or forward intersection principle. However, the polar coordinate method lacks redundant observations, which can lead to unreliable results. Forward intersection requires two instruments for automated monitoring, doubling the cost. To address this, this paper proposes a novel automated displacement monitoring method using a robotic total station assisted by a fixed-length track. By setting up two station points at the ends of a fixed-length track, the robotic total station is driven back and forth along the track and obtains observations at both station points. Automated monitoring based on the principle of forward intersection is thereby achieved with a single robotic total station. Simulation and feasibility tests show that the overall accuracy of forward intersection becomes better than that of the polar coordinate method as the monitoring distance increases. At the same time, regardless of whether a prism is being tracked, the robotic total station is able to automatically find and aim at the targets when moving between station points on the track. Further practical tests show that the reliability of the monitoring results of the proposed method is superior to that of the polar coordinate method, offering a way to ensure result reliability while reducing cost in actual monitoring tasks.
{"title":"Reliable Automated Displacement Monitoring Using Robotic Total Station Assisted by a Fixed-Length Track.","authors":"Yunhui Jiang, He Gao, Jianguo Zhou","doi":"10.3390/s26020746","DOIUrl":"https://doi.org/10.3390/s26020746","url":null,"abstract":"<p><p>Robotic total stations are multi-sensor integrated instruments widely used in displacement monitoring. The principles of polar coordinate or forward intersection systems are usually utilized for calculating monitoring results. However, the polar coordinate method lacks redundant observations, leading to unreliable results sometimes. Forward intersection requires two instruments for automated monitoring, doubling the cost. In this regard, this paper proposes a novel automated displacement monitoring method using the robotic total station assisted by a fixed-length track. By setting up two station points at both ends of a fixed-length track, the robotic total station is driven to move back and forth on the track and obtain observations at both station points. Then, automated monitoring based on the principle of forward intersection with a single robotic total station is achieved. Simulation and feasibility tests show that the overall accuracy of forward intersection is better than that of polar coordinate system as the monitoring distance increases. At the same time, regardless of tracking a prism or not, the robotic total station is able to automatically find and aim at the targets when moving between station points on the track. Further practical tests show that the reliability of the monitoring results of the proposed method is superior to the polar coordinate method, which provides new ideas for ensuring the reliability of results while reducing cost in actual monitoring tasks.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In precision oncology research, achieving joint modeling of tumor grading and treatment response, together with interpretable mechanism analysis, based on multimodal medical imaging and clinical data remains a challenging and critical problem. From a sensing perspective, these imaging and clinical data can be regarded as heterogeneous sensor-derived signals acquired by medical imaging sensors and clinical monitoring systems, providing continuous and structured observations of tumor characteristics and patient states. Existing approaches typically rely on invasive pathological grading, while grading prediction and treatment response modeling are often conducted independently. Moreover, multimodal fusion procedures generally lack explicit structural constraints, which limits their practical utility in clinical decision-making. To address these issues, a grade-guided multimodal collaborative modeling framework is proposed. Built upon mature deep learning models, including 3D ResNet-18, an MLP, and a CNN-Transformer, the framework incorporates tumor grading as a weakly supervised prior in multimodal feature fusion and treatment response modeling, thereby enabling an integrated solution for non-invasive grading prediction, treatment response subtype discovery, and intrinsic mechanism interpretation. Through a grade-guided feature fusion mechanism, discriminative information that is highly correlated with tumor malignancy and treatment sensitivity is emphasized in the multimodal joint representation, while irrelevant features are suppressed to prevent interference with model learning. Within a unified framework, grading prediction and grade-conditioned treatment response modeling are jointly realized. Experimental results on real-world clinical datasets demonstrate that the proposed method achieved an accuracy of 84.6% and a kappa coefficient of 0.81 in the tumor-grading prediction task, indicating a high level of consistency with pathological grading. In the treatment response prediction task, the proposed model attained an AUC of 0.85, a precision of 0.81, and a recall of 0.79, significantly outperforming single-modality models, conventional early-fusion models, and multimodal CNN-Transformer models without grading constraints. In addition, treatment-sensitive and treatment-resistant subtypes identified under grading conditions exhibited stable and significant stratification differences in clustering consistency and survival analysis, validating the potential value of the proposed approach for clinical risk assessment and individualized treatment decision-making.
{"title":"A Sensor-Oriented Multimodal Medical Data Acquisition and Modeling Framework for Tumor Grading and Treatment Response Analysis.","authors":"Linfeng Xie, Shanhe Xiao, Bihong Ming, Zhe Xiang, Zibo Rui, Xinyi Liu, Yan Zhan","doi":"10.3390/s26020737","DOIUrl":"https://doi.org/10.3390/s26020737","url":null,"abstract":"<p><p>In precision oncology research, achieving joint modeling of tumor grading and treatment response, together with interpretable mechanism analysis, based on multimodal medical imaging and clinical data remains a challenging and critical problem. From a sensing perspective, these imaging and clinical data can be regarded as heterogeneous sensor-derived signals acquired by medical imaging sensors and clinical monitoring systems, providing continuous and structured observations of tumor characteristics and patient states. Existing approaches typically rely on invasive pathological grading, while grading prediction and treatment response modeling are often conducted independently. Moreover, multimodal fusion procedures generally lack explicit structural constraints, which limits their practical utility in clinical decision-making. To address these issues, a grade-guided multimodal collaborative modeling framework was proposed. Built upon mature deep learning models, including 3D ResNet-18, MLP, and CNN-Transformer, tumor grading was incorporated as a weakly supervised prior into the processes of multimodal feature fusion and treatment response modeling, thereby enabling an integrated solution for non-invasive grading prediction, treatment response subtype discovery, and intrinsic mechanism interpretation. Through a grade-guided feature fusion mechanism, discriminative information that is highly correlated with tumor malignancy and treatment sensitivity is emphasized in the multimodal joint representation, while irrelevant features are suppressed to prevent interference with model learning. Within a unified framework, grading prediction and grade-conditioned treatment response modeling are jointly realized. Experimental results on real-world clinical datasets demonstrate that the proposed method achieved an accuracy of 84.6% and a kappa coefficient of 0.81 in the tumor-grading prediction task, indicating a high level of consistency with pathological grading. In the treatment response prediction task, the proposed model attained an AUC of 0.85, a precision of 0.81, and a recall of 0.79, significantly outperforming single-modality models, conventional early-fusion models, and multimodal CNN-Transformer models without grading constraints. In addition, treatment-sensitive and treatment-resistant subtypes identified under grading conditions exhibited stable and significant stratification differences in clustering consistency and survival analysis, validating the potential value of the proposed approach for clinical risk assessment and individualized treatment decision-making.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146066036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}