Trends in Vibrational Spectroscopy: NIRS and Raman Techniques for Health and Food Safety Control
Candela Melendreras, Jesús Montero, José M Costa-Fernández, Ana Soldado, Francisco Ferrero, Francisco Fernández Linera, Marta Valledor, Juan Carlos Campo
There is an increasing need to establish reliable safety controls in the food industry and to protect public health. Consequently, there are numerous efforts to develop sensitive, robust, and selective analytical strategies. As regulatory requirements for food and the concentrations of target biomarkers in clinical analysis evolve, the food and health sectors are showing a growing interest in developing non-destructive, rapid, on-site, and environmentally safe methodologies. One alternative that meets these requirements is non-destructive spectroscopic sensors, such as those based on vibrational spectroscopy (Raman, surface-enhanced Raman (SERS), mid- and near-infrared spectroscopy, and hyperspectral imaging built on those techniques). The use of vibrational spectroscopy in food safety and health applications is expanding rapidly, moving beyond the laboratory bench to on-the-go and in-line deployment. The dominant trends include the following: (1) the miniaturisation and portability of instruments; (2) SERS with nanostructured substrates for the detection of trace contaminants; (3) hyperspectral imaging (HSI) and deep learning for the spatial screening of quality and contamination; (4) the stronger integration of chemometrics and machine learning for robust classification and quantification; and (5) growing attention to calibration transfer, validation, and regulatory readiness. Together, these advances combine a variety of tools into real-time decision-making systems for safety control. This review aims to highlight the trends in vibrational spectroscopy tools for health and food safety control, with a particular focus on handheld and miniaturised instruments.
{"title":"Trends in Vibrational Spectroscopy: NIRS and Raman Techniques for Health and Food Safety Control.","authors":"Candela Melendreras, Jesús Montero, José M Costa-Fernández, Ana Soldado, Francisco Ferrero, Francisco Fernández Linera, Marta Valledor, Juan Carlos Campo","doi":"10.3390/s26030989","DOIUrl":"10.3390/s26030989","url":null,"abstract":"<p><p>There is an increasing need to establish reliable safety controls in the food industry and to protect public health. Consequently, there are numerous efforts to develop sensitive, robust, and selective analytical strategies. As regulatory requirements for food and the concentration for target biomarkers in clinical analysis evolve, the food and health sectors are showing a growing interest in developing non-destructive, rapid, on-site, and environmentally safe methodologies. One alternative that meets the conditions is non-destructive spectroscopic sensors, such as those based on vibrational spectroscopy (Raman, surface-enhanced Raman-SERS, mid- and near-infrared spectroscopy, and hyperspectral imaging built on those techniques). The use of vibrational spectroscopy in food safety and health applications is expanding rapidly, moving beyond the laboratory bench to include on-the-go and in-line deployment. The dominant trends include the following: (1) the miniaturisation and portability of instruments; (2) surface-enhanced Raman spectroscopy (SERS) and nanostructured substrates for the detection of trace contaminants; (3) hyperspectral imaging (HSI) and deep learning for the spatial screening of quality and contamination; (4) the stronger integration of chemometrics and machine learning for robust classification and quantification; (5) growing attention to calibration transfer, validation, and regulatory readiness. These advances will bring together a variety of tools to create a real-time decision-making system that will address the issue in question. This article review aims to highlight the trends in vibrational spectroscopy tools for health and food safety control, with a particular focus on handheld and miniaturised instruments.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 3","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12900100/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146182367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Hybrid Hash-Encryption Scheme for Secure Transmission and Verification of Marine Scientific Research Data
Hanyu Wang, Mo Chen, Maoxu Wang, Min Yang
Marine scientific observation missions operate over disrupted, high-loss links and must keep heterogeneous sensor, image, and log data confidential and verifiable under fragmented, out-of-order delivery. This paper proposes an end-to-end encryption-verification co-design that integrates HMR integrity structuring with EMR hybrid encapsulation. By externalizing block boundaries and maintaining a minimal receiver-side verification state, the framework supports block-level integrity/provenance verification and selective recovery without continuous sessions, enabling multi-hop and intermittent connectivity. Experiments on a synthetic multimodal ocean dataset show reduced storage/encapsulation overhead (10.4% vs. 12.8% for SHA-256 + RSA + AES), lower hashing latency (6.8 ms vs. 12.5 ms), and 80.1 ms end-to-end encryption-decryption latency (21.2% lower than RSA + AES). Under fragmentation, verification latency scales near-linearly with block count (R² = 0.998) while throughput drops only slightly (11.8 → 11.3 KB/ms). With 100 KB blocks, transmission latency stays below 1.024 s in extreme channels and around 0.08-0.10 s in typical ranges, with expected retransmissions < 0.25. On Raspberry Pi 4, runtime slowdown remains stable at ~3.40× versus a PC baseline, supporting deployability on resource-constrained nodes.
{"title":"A Hybrid Hash-Encryption Scheme for Secure Transmission and Verification of Marine Scientific Research Data.","authors":"Hanyu Wang, Mo Chen, Maoxu Wang, Min Yang","doi":"10.3390/s26030994","DOIUrl":"10.3390/s26030994","url":null,"abstract":"<p><p>Marine scientific observation missions operate over disrupted, high-loss links and must keep heterogeneous sensor, image, and log data confidential and verifiable under fragmented, out-of-order delivery. This paper proposes an end-to-end encryption-verification co-design that integrates HMR integrity structuring with EMR hybrid encapsulation. By externalizing block boundaries and maintaining a minimal receiver-side verification state, the framework supports block-level integrity/provenance verification and selective recovery without continuous sessions, enabling multi-hop and intermittent connectivity. Experiments on a synthetic multimodal ocean dataset show reduced storage/encapsulation overhead (10.4% vs. 12.8% for SHA-256 + RSA + AES), lower hashing latency (6.8 ms vs. 12.5 ms), and 80.1 ms end-to-end encryption-decryption latency (21.2% lower than RSA + AES). Under fragmentation, verification latency scales near-linearly with block count (R<sup>2</sup> = 0.998) while throughput drops only slightly (11.8 → 11.3 KB/ms). With 100 KB blocks, transmission latency stays below 1.024 s in extreme channels and around 0.08-0.10 s in typical ranges, with expected retransmissions < 0.25. On Raspberry Pi 4, runtime slowdown remains stable at ~3.40× versus a PC baseline, supporting deployability on resource-constrained nodes.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 3","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12899856/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146182092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward Realistic Autonomous Driving Dataset Augmentation: A Real-Virtual Fusion Approach with Inconsistency Mitigation
Sukwoo Jung, Myeongseop Kim, Jean Oh, Jonghwa Kim, Kyung-Taek Lee
Autonomous driving systems rely on vast and diverse datasets for robust object recognition. However, acquiring real-world data, especially for rare and hazardous scenarios, is prohibitively expensive and risky. While purely synthetic data offers flexibility, it often suffers from a significant reality gap due to discrepancies in visual fidelity and physics. To address these challenges, this paper proposes a novel real-virtual fusion framework for efficiently generating highly realistic augmented image datasets for autonomous driving. Our methodology leverages real-world driving data from South Korea's K-City, synchronizing it with a digital twin environment in Morai Sim (v24.R2) through a robust look-up table and fine-tuned localization approach. We then seamlessly inject diverse virtual objects (e.g., pedestrians, vehicles, traffic lights) into real image backgrounds. A critical contribution is our focus on inconsistency mitigation, employing advanced techniques such as illumination matching during virtual object injection to minimize visual discrepancies. We evaluate the proposed approach through experiments. Our results show that this real-virtual fusion strategy significantly bridges the reality gap, providing a cost-effective and safe solution for enriching autonomous driving datasets and improving the generalization capabilities of perception models.
{"title":"Toward Realistic Autonomous Driving Dataset Augmentation: A Real-Virtual Fusion Approach with Inconsistency Mitigation.","authors":"Sukwoo Jung, Myeongseop Kim, Jean Oh, Jonghwa Kim, Kyung-Taek Lee","doi":"10.3390/s26030987","DOIUrl":"10.3390/s26030987","url":null,"abstract":"<p><p>Autonomous driving systems rely on vast and diverse datasets for robust object recognition. However, acquiring real-world data, especially for rare and hazardous scenarios, is prohibitively expensive and risky. While purely synthetic data offers flexibility, it often suffers from a significant reality gap due to discrepancies in visual fidelity and physics. To address these challenges, this paper proposes a novel real-virtual fusion framework for efficiently generating highly realistic augmented image datasets for autonomous driving. Our methodology leverages real-world driving data from South Korea's K-City, synchronizing it with a digital twin environment in Morai Sim (v24.R2) through a robust look-up table and fine-tuned localization approach. We then seamlessly inject diverse virtual objects (e.g., pedestrians, vehicles, traffic lights) into real image backgrounds. A critical contribution is our focus on inconsistency mitigation, employing advanced techniques such as illumination matching during virtual object injection to minimize visual discrepancies. We evaluate the proposed approach through experiments. Our results show that this real-virtual fusion strategy significantly bridges the reality gap, providing a cost-effective and safe solution for enriching autonomous driving datasets and improving the generalization capabilities of perception models.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 3","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12899562/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146182408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Open-Source QAM MODEM for Visible Light Communication in FPGA for Real-Time Applications
Stefano Ricci
Visible Light Communication (VLC) is a transformative paradigm poised to revolutionize the automotive and numerous other sectors. As the demand for high data rates and low-latency applications grows, the limited bandwidth of standard white-LED lamps (typically restricted to a few MHz) presents a significant bottleneck. While high-order modulation schemes like Quadrature Amplitude Modulation (QAM) offer superior spectral efficiency, their computational complexity often hinders real-time implementation. Consequently, the existing literature lacks experimental validation of low-latency real-time VLC links. This work addresses this challenge by proposing a modified algorithm implemented in a resource-efficient QAM modulator/demodulator (MODEM) for an FPGA. The algorithm includes the synchronization loop. The proposed MODEM is available as open-source code and provides a scalable foundation for researchers to explore low-latency real-time VLC links. Experimental results demonstrate successful 2, 4, and 6 Mb/s links using 4-, 16-, and 64-QAM constellations, respectively, over a white phosphor power LED. We measured a latency of less than 1.3 μs.
{"title":"An Open-Source QAM MODEM for Visible Light Communication in FPGA for Real-Time Applications.","authors":"Stefano Ricci","doi":"10.3390/s26030992","DOIUrl":"10.3390/s26030992","url":null,"abstract":"<p><p>Visible Light Communication (VLC) is a transformative paradigm poised to revolutionize the automotive and numerous other sectors. As the demand for high data rates and low latency applications grows, the limited bandwidth of standard white LED-based lamps-typically restricted to a few MHz-presents a significant bottleneck. While high-order modulation schemes like Quadrature Amplitude Modulation (QAM) offer superior spectral efficiency, their computational complexity often hinders real-time implementation. Consequently, the existing literature lacks experimental validation of low-latency real-time VLC links. This work addresses this challenge by proposing a modified algorithm that is implemented in a resource-efficient QAM modulator/demodulator (MODEM) for an FPGA. The algorithm includes the synchronization loop. The proposed MODEM is available as open-source code and provides a scalable foundation for researchers to explore low-latency real-time VLC links. Experimental results demonstrate successful 2, 4, and 6 Mb/s links using 4-, 16-, and 64-QAM constellations, respectively, over a white-phosphor-power LED. We measured a latency of less than 1.3 μs.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 3","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12900124/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146181661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D Local Feature Learning and Analysis on Point Cloud Parts via Momentum Contrast
Xuanmeng Sha, Tomohiro Mashita, Naoya Chiba, Liyun Zhang
Self-supervised contrastive learning has demonstrated remarkable effectiveness in learning visual representations without labeled data, yet its application to 3D local feature learning from point clouds remains underexplored. Existing methods predominantly focus on complete object shapes, neglecting the critical challenge of recognizing partial observations commonly encountered in real-world 3D perception. We propose a momentum contrastive learning framework specifically designed to learn discriminative local features from randomly sampled point cloud regions. By adapting the MoCo architecture with PointNet++ as the feature backbone, our method treats local parts of a point cloud as fundamental contrastive learning units, combined with carefully designed augmentation strategies including random dropout and translation. Experiments on ShapeNet demonstrate that our approach effectively learns transferable local features; empirically, approximately 30% of an object's local part represents a practical threshold for effective learning when simulating real-world occlusion scenarios. The approach achieves comparable downstream classification accuracy while reducing training time by 16%.
{"title":"3D Local Feature Learning and Analysis on Point Cloud Parts via Momentum Contrast.","authors":"Xuanmeng Sha, Tomohiro Mashita, Naoya Chiba, Liyun Zhang","doi":"10.3390/s26031007","DOIUrl":"10.3390/s26031007","url":null,"abstract":"<p><p>Self-supervised contrastive learning has demonstrated remarkable effectiveness in learning visual representations without labeled data, yet its application to 3D local feature learning from point clouds remains underexplored. Existing methods predominantly focus on complete object shapes, neglecting the critical challenge of recognizing partial observations commonly encountered in real-world 3D perception. We propose a momentum contrastive learning framework specifically designed to learn discriminative local features from randomly sampled point cloud regions. By adapting the MoCo architecture with PointNet++ as the feature backbone, our method treats local parts of point cloud as fundamental contrastive learning units, combined with carefully designed augmentation strategies including random dropout and translation. Experiments on ShapeNet demonstrate that our approach effectively learns transferable local features and the empirical observation that approximately 30% object local part represents a practical threshold for effective learning when simulating real-world occlusion scenarios, and achieves comparable downstream classification accuracy while reducing training time by 16%.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 3","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12900106/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146182174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving S-Curve Bias Through Joint Compensation of HPA and Filter Distortions
Longyu Chen, Yi Yang, Tulin Xiong, Lin Chen, Yuqi Liu
Navigation signals are simultaneously affected by nonlinear distortion from the high-power amplifier (HPA) and linear distortion from the filter in the navigation signal transmission channel, which reduce signal quality and degrade performance in high-precision positioning services. To address the limitations of traditional compensation methods under nonlinear conditions, this paper proposes a joint compensation approach. The approach first employs an iterative piecewise optimization method to design a predistortion filter that enhances the compensation of linear distortion. Then, a QR-decomposition recursive least squares parameter extraction algorithm is used to extract the actual HPA model and construct a lookup table, enabling adaptive compensation of nonlinear distortion. With S-curve bias (SCB) as the performance evaluation index, the results show that this method can significantly reduce the SCB and effectively compensate for the distortion. The findings indicate that the proposed method improves navigation signal quality and provides reliable support for high-precision positioning services.
{"title":"Improving S-Curve Bias Through Joint Compensation of HPA and Filter Distortions.","authors":"Longyu Chen, Yi Yang, Tulin Xiong, Lin Chen, Yuqi Liu","doi":"10.3390/s26030981","DOIUrl":"10.3390/s26030981","url":null,"abstract":"<p><p>Navigation signals are simultaneously affected by nonlinear distortion from the high-power amplifier (HPA) and linear distortion from the filter in the navigation signal transmission channel, which reduce the signal quality and degrade the performance in high-precision positioning services. To address the limitation of traditional compensation methods under nonlinear conditions, this proposes a joint compensation approach. The approach first employs an iterative piecewise optimization method to design a predistortion filter to enhance the compensation ability for linear distortion. Then a QR-decomposition recursive least squares parameter extraction algorithm is used to extract the actual HPA model and construct a lookup table, enabling adaptive compensation of nonlinear distortion. With S-curve bias (SCB) as the performance evaluation index, the results show that this method can significantly reduce the SCB and effectively compensate for the distortion. The findings indicate that the proposed method improves navigation signal quality and provides reliable support for high-precision positioning services.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 3","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12899956/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146182182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development and Field Validation of a Smartphone-Based Web Application for Diagnosing Optimal Timing of Mid-Season Drainage in Rice Cultivation via Canopy Image-Derived Tiller Estimation
Yusaku Aoki, Atsushi Mochizuki, Mitsuaki Nakamura, Chikara Kuwata
In recent years, excessive tillering caused by high temperatures during early growth has contributed to rice quality deterioration in warm regions of Japan. Accurate determination of midseason drainage timing is essential but remains difficult due to year- and cultivar-dependent variability. In this study, we developed a smartphone-based web application that estimates rice tiller number from canopy images and diagnoses the optimal timing of midseason drainage by comparing estimated tiller numbers with cultivar-specific target values. The system operates entirely on a smartphone using HTML5 canvas-based pixel extraction, JavaScript computation, and Google Apps Script-based backend processing. Field experiments conducted in Chiba Prefecture using three rice cultivars showed a strong linear relationship between estimated and observed tiller numbers (R² = 0.9439). The root mean square error (RMSE) was 42.6 tillers m⁻², with a consistent negative bias (-34.6 tillers m⁻²), indicating systematic underestimation. Considering typical tiller increase rates near midseason drainage (12.0-24.3 tillers m⁻² day⁻¹), these errors correspond to approximately 1-3 days of growth progression, which is acceptable for timing-based decision-making. Although the system does not aim to provide precise absolute tiller counts, it reliably captures relative growth-stage dynamics and supports threshold-based diagnosis. The proposed approach enables rapid, on-site decision support using only a smartphone, contributing to labor savings and improved water management in rice production.
{"title":"Development and Field Validation of a Smartphone-Based Web Application for Diagnosing Optimal Timing of Mid-Season Drainage in Rice Cultivation via Canopy Image-Derived Tiller Estimation.","authors":"Yusaku Aoki, Atsushi Mochizuki, Mitsuaki Nakamura, Chikara Kuwata","doi":"10.3390/s26031000","DOIUrl":"10.3390/s26031000","url":null,"abstract":"<p><p>In recent years, excessive tillering caused by high temperatures during early growth has contributed to rice quality deterioration in warm regions of Japan. Accurate determination of midseason drainage timing is essential but remains difficult due to year- and cultivar-dependent variability. In this study, we developed a smartphone-based web application that estimates rice tiller number from canopy images and diagnoses the optimal timing of midseason drainage by comparing estimated tiller numbers with cultivar-specific target values. The system operates entirely on a smartphone using HTML5 canvas-based pixel extraction, JavaScript computation, and Google Apps Script-based backend processing. Field experiments conducted in Chiba Prefecture using three rice cultivars showed a strong linear relationship between estimated and observed tiller numbers (R<sup>2</sup> = 0.9439). The root mean square error (RMSE) was 42.6 tillers m<sup>-2</sup>, with a consistent negative bias (-34.6 tillers m<sup>-2</sup>), indicating systematic underestimation. Considering typical tiller increase rates near midseason drainage (12.0-24.3 tillers m<sup>-2</sup> day<sup>-1</sup>), these errors correspond to approximately 1-3 days of growth progression, which is acceptable for timing-based decision-making. Although the system does not aim to provide precise absolute tiller counts, it reliably captures relative growth-stage dynamics and supports threshold-based diagnosis. The proposed approach enables rapid, on-site decision support using only a smartphone, contributing to labor-saving and improved water management in rice production.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 3","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12900086/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146182157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI-Driven Real-Time Phase Optimization for Energy Harvesting-Enabled Dual-IRS Cooperative NOMA Under Non-Line-of-Sight Conditions
Yasir Al-Ghafri, Hafiz M Asif, Zia Nadir, Naser Tarhuni
In this paper, a wireless network architecture is considered that combines double intelligent reflecting surfaces (IRSs), energy harvesting (EH), and non-orthogonal multiple access (NOMA) with cooperative relaying (C-NOMA), primarily to improve the performance of non-line-of-sight (NLoS) communication and to incorporate energy efficiency into next-generation networks. To optimize the phase shifts of both IRSs, we employ a machine learning model that offers a low-complexity alternative to traditional optimization methods. This lightweight learning-based approach predicts effective IRS phase-shift configurations without relying on solver-generated labels or repeated iterations. The model learns from channel behavior and system observations, which allows it to react rapidly under dynamic channel conditions. Numerical analysis demonstrates the validity of the proposed architecture in providing considerable improvements in spectral efficiency and service reliability through the integration of energy harvesting and relay-based communication compared with conventional systems, thereby facilitating green communication systems.
{"title":"AI-Driven Real-Time Phase Optimization for Energy Harvesting-Enabled Dual-IRS Cooperative NOMA Under Non-Line-of-Sight Conditions.","authors":"Yasir Al-Ghafri, Hafiz M Asif, Zia Nadir, Naser Tarhuni","doi":"10.3390/s26030980","DOIUrl":"10.3390/s26030980","url":null,"abstract":"<p><p>In this paper, a wireless network architecture is considered that combines double intelligent reflecting surfaces (IRSs), energy harvesting (EH), and non-orthogonal multiple access (NOMA) with cooperative relaying (C-NOMA) to leverage the performance of non-line-of-sight (NLoS) communication mainly and incorporate energy efficiency in next-generation networks. To optimize the phase shifts of both IRSs, we employ a machine learning model that offers a low-complexity alternative to traditional optimization methods. This lightweight learning-based approach is introduced to predict effective IRS phase shift configurations without relying on solver-generated labels or repeated iterations. The model learns from channel behavior and system observations, which allows it to react rapidly under dynamic channel conditions. Numerical analysis demonstrates the validity of the proposed architecture in providing considerable improvements in spectral efficiency and service reliability through the integration of energy harvesting and relay-based communication compared with conventional systems, thereby facilitating green communication systems.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 3","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12900113/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146182175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Target Tracking with Collaborative Roadside Units Under Foggy Conditions
Tao Shi, Xuan Wang, Wei Jiang, Xiansheng Huang, Ming Cen, Shuai Cao, Hao Zhou
The Intelligent Road Side Unit (RSU) is a crucial component of Intelligent Transportation Systems (ITSs), where roadside LiDARs are widely utilized for their high precision and resolution. However, water droplets and atmospheric particles in fog significantly attenuate and scatter LiDAR beams, posing a challenge to multi-target tracking and ITS safety. To enhance the accuracy and reliability of RSU-based tracking, a collaborative RSU method that integrates denoising and tracking for multi-target tracking is proposed. The proposed approach first dynamically adjusts the filtering kernel scale based on local noise levels to effectively remove noisy point clouds using a modified bilateral filter. Subsequently, a multi-RSU cooperative tracking framework is designed, which employs a particle Probability Hypothesis Density (PHD) filter to estimate target states via measurement fusion. A multi-target tracking system for intelligent RSUs in foggy scenarios was designed and implemented. Extensive experiments were conducted using an intelligent roadside platform in real-world fog-affected traffic environments to validate the accuracy and real-time performance of the proposed algorithm. Experimental results demonstrate that, after fog-noise removal, the proposed method improves target detection accuracy by 8% and 29% under thin and thick fog conditions, respectively, compared to statistical filtering methods. The method also performs well in tracking multi-class targets, surpassing existing state-of-the-art methods, especially on high-order evaluation metrics such as HOTA, MOTA, and ID switches (IDs).
{"title":"Multi-Target Tracking with Collaborative Roadside Units Under Foggy Conditions.","authors":"Tao Shi, Xuan Wang, Wei Jiang, Xiansheng Huang, Ming Cen, Shuai Cao, Hao Zhou","doi":"10.3390/s26030998","DOIUrl":"10.3390/s26030998","url":null,"abstract":"<p><p>The Intelligent Road Side Unit (RSU) is a crucial component of Intelligent Transportation Systems (ITSs), where roadside LiDAR are widely utilized for their high precision and resolution. However, water droplets and atmospheric particles in fog significantly attenuate and scatter LiDAR beams, posing a challenge to multi-target tracking and ITS safety. To enhance the accuracy and reliability of RSU-based tracking, a collaborative RSU method that integrates denoising and tracking for multi-target tracking is proposed. The proposed approach first dynamically adjusts the filtering kernel scale based on local noise levels to effectively remove noisy point clouds using a modified bilateral filter. Subsequently, a multi-RSU cooperative tracking framework is designed, which employs a particle Probability Hypothesis Density (PHD) filter to estimate target states via measurement fusion. A multi-target tracking system for intelligent RSUs in Foggy scenarios was designed and implemented. Extensive experiments were conducted using an intelligent roadside platform in real-world fog-affected traffic environments to validate the accuracy and real-time performance of the proposed algorithm. Experimental results demonstrate that the proposed method improves the target detection accuracy by 8% and 29%, respectively, compared to statistical filtering methods after removing fog noise under thin and thick fog conditions. At the same time, this method performs well in tracking multi-class targets, surpassing existing state-of-the-art methods, especially in high-order evaluation indicators such as HOTA, MOTA, and IDs.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 3","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12899924/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146182263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Center Prototype Feature Distribution Reconstruction for Class-Incremental SAR Target Recognition
Ke Zhang, Bin Wu, Peng Li, Zhi Kang, Lin Zhang
In practical applications of deep learning-based Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems, new target categories emerge continuously. This requires the systems to learn incrementally, acquiring new knowledge while retaining previously learned information. To mitigate catastrophic forgetting in Class-Incremental Learning (CIL), this paper proposes a CIL method for SAR ATR named Multi-center Prototype Feature Distribution Reconstruction (MPFR). It has two core components. First, a Multi-scale Hybrid Attention feature extractor is designed. Trained via a feature-space optimization strategy, it fuses and extracts discriminative features from both SAR amplitude images and Attribute Scattering Center data, while preserving feature-space capacity for new classes. Second, each class is represented by multiple prototypes to capture complex feature distributions. Old-class knowledge is retained by modeling their feature distributions through parameterized Gaussian diffusion, alleviating feature confusion in incremental phases. Experiments on public SAR datasets show that MPFR achieves superior performance compared to existing approaches, including recent SAR-specific CIL methods. Ablation studies validate each component's contribution, confirming MPFR's effectiveness in addressing CIL for SAR ATR without storing historical raw data.
{"title":"Multi-Center Prototype Feature Distribution Reconstruction for Class-Incremental SAR Target Recognition.","authors":"Ke Zhang, Bin Wu, Peng Li, Zhi Kang, Lin Zhang","doi":"10.3390/s26030979","DOIUrl":"10.3390/s26030979","url":null,"abstract":"<p><p>In practical applications of deep learning-based Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems, new target categories emerge continuously. This requires the systems to learn incrementally-acquiring new knowledge while retaining previously learned information. To mitigate catastrophic forgetting in Class-Incremental Learning (CIL), this paper proposes a CIL method for SAR ATR named Multi-center Prototype Feature Distribution Reconstruction (MPFR). It has two core components. First, a Multi-scale Hybrid Attention feature extractor is designed. Trained via a feature space optimization strategy, it fuses and extracts discriminative features from both SAR amplitude images and Attribute Scattering Center data, while preserving feature space capacity for new classes. Second, each class is represented by multiple prototypes to capture complex feature distributions. Old class knowledge is retained by modeling their feature distributions through parameterized Gaussian diffusion, alleviating feature confusion in incremental phases. Experiments on public SAR datasets show MPFR achieves superior performance compared to existing approaches, including recent SAR-specific CIL methods. Ablation studies validate each component's contribution, confirming MPFR's effectiveness in addressing CIL for SAR ATR without storing historical raw data.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 3","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12900103/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146182354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}