SAR-PMAE: Phase-Modulation-Guided Adversarial Example Generation for SAR Images
Pub Date: 2026-01-26 | DOI: 10.1109/TRS.2026.3657792 | IEEE Transactions on Radar Systems, vol. 4, pp. 407-429
Jiahao Cui;Wang Guo;Qi Wang;Keyi Zhang;Lingquan Meng;Haifeng Li
The presence of adversarial examples can cause synthetic aperture radar (SAR) image classification systems to produce incorrect predictions, severely compromising their accuracy and robustness. However, existing adversarial example generation methods for SAR images often suffer from limited physical interpretability, low perturbation energy utilization, and insufficient perturbation stealthiness. To address the above issues, this article proposes SAR-PMAE, a phase-modulation-guided adversarial example generation method for SAR images. In particular, to tackle the lack of physical interpretability, a phase-modulation strategy based on the point-target scattering model is employed, coupling adversarial perturbations with the electromagnetic (EM) scattering mechanism to enhance their physical interpretability; to improve perturbation energy utilization, a local focusing strategy is adopted, in which foreground targets are precisely segmented and centered via a watershed algorithm combined with morphological operations, concentrating perturbations in strong response regions and avoiding redundant energy dispersion into background clutter; and to enhance perturbation stealthiness, constraints are applied separately to the phase in the complex coherent domain, thereby maximally preserving SAR speckle statistics and improving perturbation concealment. Experimental results show that the proposed method achieves an average untargeted attack success rate (ASR) of 55.53% and a targeted ASR of 20.92% across ten classifiers based on convolutional neural network (CNN) and Transformer architectures, while exhibiting strong attack transferability. The complete implementation is publicly available at https://github.com/muzhengcui/SAR-PMAE.
{"title":"SAR-PMAE: Phase-Modulation-Guided Adversarial Example Generation for SAR Images","authors":"Jiahao Cui;Wang Guo;Qi Wang;Keyi Zhang;Lingquan Meng;Haifeng Li","doi":"10.1109/TRS.2026.3657792","DOIUrl":"https://doi.org/10.1109/TRS.2026.3657792","url":null,"abstract":"The presence of adversarial examples can cause synthetic aperture radar (SAR) image classification systems to produce incorrect predictions, severely compromising their accuracy and robustness. However, existing adversarial example generation methods for SAR images often suffer from limited physical interpretability, low perturbation energy utilization, and insufficient perturbation stealthiness. To address the above issues, this article proposes SAR-PMAE, a phase-modulation-guided adversarial example generation method for SAR images. In particular, to tackle the lack of physical interpretability, a phase-modulation strategy based on the point-target scattering model is employed, coupling adversarial perturbations with the EM scattering mechanism to enhance their physical interpretability; to improve perturbation energy utilization, a local focusing strategy is adopted, in which foreground targets are precisely segmented and centered via a watershed algorithm combined with morphological operations, concentrating perturbations in strong response regions and avoiding redundant energy dispersion into background clutter; and to enhance perturbation stealthiness, constraints are applied separately to the phase in the complex coherent domain, thereby maximally preserving SAR speckle statistics and improving perturbation concealment. Experimental results show that the proposed method achieves an average untargeted attack success rate (ASR) of 55.53% and a targeted ASR of 20.92% across ten classifiers based on convolutional neural network (CNN) and Transformer architectures, while exhibiting strong attack transferability. The complete implementation is publicly available at <uri>https://github.com/muzhengcui/SAR-PMAE</uri>","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"4 ","pages":"407-429"},"PeriodicalIF":0.0,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spin Estimation and 3-D Geometry Reconstruction Using Multiperspective Space-Borne Sub-THz Interferometric Inverse Synthetic Aperture Radar
Pub Date: 2026-01-26 | DOI: 10.1109/TRS.2026.3657920 | IEEE Transactions on Radar Systems, vol. 4, pp. 473-485
Bangjie Zhang;Marina S. Gashinova;Marco Martorella
Sub-terahertz (sub-THz) radar enables high-resolution and highly detailed imaging of space objects, offering substantial advantages for space domain awareness (SDA). Accurate estimation of spin motion and 3-D geometry of noncooperative space targets is essential for SDA tasks such as target characterization, autonomous rendezvous and proximity operations (RPO), and debris removal. Interferometric ISAR (InISAR), which generates 3-D images in the form of point clouds, can also resolve the effective rotation vector—defined as the component of the total rotation vector perpendicular to the line of sight (LOS). This component is critical for 2-D image scaling and for enhancing understanding of the imaging projection plane (IPP). However, the component along the LOS remains unknown, which limits the ability to fully exploit multiperspective ISAR imaging to characterize targets. In this article, a novel spin estimation framework is proposed based on multiperspective InISAR, which leverages viewing angle diversity during on-orbit observation to resolve the total rotation vector of space targets. The proposed method performs InISAR imaging and the estimation of the effective rotation vector for each perspective, followed by the combination of effective rotation vectors to determine the total rotation vector as a unified least-squares solution. The framework requires no prior 3-D model and achieves robust spin estimation. Both simulation experiments using sub-THz radar parameters and laboratory validation using W-band radar are carried out to verify the effectiveness of the proposed method, demonstrating its potential for future SDA and on-orbit servicing applications.
{"title":"Spin Estimation and 3-D Geometry Reconstruction Using Multiperspective Space-Borne Sub-THz Interferometric Inverse Synthetic Aperture Radar","authors":"Bangjie Zhang;Marina S. Gashinova;Marco Martorella","doi":"10.1109/TRS.2026.3657920","DOIUrl":"https://doi.org/10.1109/TRS.2026.3657920","url":null,"abstract":"Sub-terahertz (sub-THz) radar enables high-resolution and highly detailed imaging of space objects, offering substantial advantages for space domain awareness (SDA). Accurate estimation of spin motion and 3-D geometry of noncooperative space targets is essential for SDA tasks such as target characterization, autonomous rendezvous, and proximity operation (RPO), and debris removal. Interferometric ISAR (InISAR), which generates 3-D images in the form of point clouds, can also resolve the effective rotation vector—defined as the component of the total rotation vector perpendicular to the line of sight (LOS). This component is critical for 2-D image scaling and for enhancing understanding of the imaging projection plane (IPP). However, the component along the LOS remains unknown, which limits the ability to fully exploit multiperspective ISAR imaging to characterize targets. In this article, a novel spin estimation framework is proposed based on multiperspective InISAR, which leverages viewing angle diversity during on-orbit observation to resolve the total rotation vector of space targets. The proposed method performs InISAR imaging and the estimation of the effective rotation vector for each perspective, followed by the combination of effective rotation vectors to determine the total rotation vector as a unified least-squares solution. The framework requires no prior 3-D model and achieves robust spin estimation. Both simulation experiments using sub-THz radar parameters and laboratory validation using W-band radar are carried out to verify the effectiveness of the proposed method, demonstrating its potential for future SDA and on-orbit servicing applications.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"4 ","pages":"473-485"},"PeriodicalIF":0.0,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximation of the Range Ambiguity Function in Near-Field Sensing Systems
Pub Date: 2026-01-22 | DOI: 10.1109/TRS.2026.3657154 | IEEE Transactions on Radar Systems, vol. 4, pp. 430-442
Marcin Wachowiak;André Bourdoux;Sofie Pollin
This article investigates the range ambiguity function of near-field (NF) systems where bandwidth and NF beamfocusing jointly determine the resolution. First, the general matched filter ambiguity function is derived and the NF array factors of different antenna array geometries are introduced. Next, the NF ambiguity function is approximated as a product of the range-dependent NF array factor and the ambiguity function due to the utilized waveform and bandwidth. An approximation criterion based on the aperture–bandwidth product is formulated, and its accuracy is examined. Finally, the improvements to the ambiguity function offered by the NF beamfocusing, as compared to the far-field case, are presented. The performance gains are evaluated in terms of the resolution, peak-to-sidelobe-ratio, and integrated-sidelobe-level improvements offered by beamfocusing for a few popular array geometries. The gains offered by the NF regime are shown to be range-dependent and substantial only in close proximity to the array.
{"title":"Approximation of the Range Ambiguity Function in Near-Field Sensing Systems","authors":"Marcin Wachowiak;André Bourdoux;Sofie Pollin","doi":"10.1109/TRS.2026.3657154","DOIUrl":"https://doi.org/10.1109/TRS.2026.3657154","url":null,"abstract":"This article investigates the range ambiguity function of near-field (NF) systems where bandwidth and NF beamfocusing jointly determine the resolution. First, the general matched filter ambiguity function is derived and the NF array factors of different antenna array geometries are introduced. Next, the NF ambiguity function is approximated as a product of the range-dependent NF array factor and the ambiguity function due to the utilized waveform and bandwidth. An approximation criterion based on the aperture–bandwidth product is formulated, and its accuracy is examined. Finally, the improvements to the ambiguity function offered by the NF beamfocusing, as compared to the far-field case, are presented. The performance gains are evaluated in terms of resolution improvement offered by beamfocusing, peak-to-sidelobe, and integrated-sidelobe-level improvement for a few popular array geometries. The gains offered by the NF regime are shown to be range-dependent and substantial only in close proximity to the array.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"4 ","pages":"430-442"},"PeriodicalIF":0.0,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Focused Observations on the Advanced Technology Demonstrator Phased-Array Radar
Pub Date: 2026-01-21 | DOI: 10.1109/TRS.2026.3656643 | IEEE Transactions on Radar Systems, vol. 4, pp. 463-472
Sebastián M. Torres;Christopher D. Curtis;Robert J. Estes;Stephen B. Gregg
The advanced technology demonstrator (ATD) is a full-scale, S-band, dual-polarization phased-array radar (PAR) developed as a proof-of-concept to evaluate the capabilities of electronically scanned radars for weather observation. Recent upgrades to the ATD have introduced a closed-loop adaptive scanning framework and an implementation of the adaptive digital signal-processing algorithm for PAR timely scans (ADAPTS) to enable adaptive focused observations. ADAPTS leverages the radar’s beam agility to dynamically concentrate measurements in regions with significant weather returns, reducing the total scan time while preserving data quality and meaningful volumetric coverage. The algorithm selectively enables (disables) beams with (without) significant weather echoes, enables beams in a neighborhood of beams with significant weather echoes to account for storm advection and growth, and periodically reactivates disabled beams to detect new storm development. Performance evaluations using archived and real-time data demonstrate substantial reductions in scan time, rapid detection of storm initiation, and high-fidelity coverage of active weather regions. These results highlight the potential of adaptive scanning strategies for improving weather observations using PAR systems.
{"title":"Adaptive Focused Observations on the Advanced Technology Demonstrator Phased-Array Radar","authors":"Sebastián M. Torres;Christopher D. Curtis;Robert J. Estes;Stephen B. Gregg","doi":"10.1109/TRS.2026.3656643","DOIUrl":"https://doi.org/10.1109/TRS.2026.3656643","url":null,"abstract":"The advanced technology demonstrator (ATD) is a full-scale, S-band, dual-polarization phased-array radar (PAR) developed as a proof-of-concept to evaluate the capabilities of electronically scanned radars for weather observation. Recent upgrades to the ATD have introduced a closed-loop adaptive scanning framework and an implementation of the adaptive digital signal-processing algorithm for PAR timely scans (ADAPTSs) to enable adaptive focused observations. ADAPTS leverages the radar’s beam agility to dynamically concentrate measurements in regions with significant weather returns, reducing the total scan time while preserving data quality and meaningful volumetric coverage. The algorithm selectively enables (disables) beams with (without) significant weather echoes, enables beams in a neighborhood of beams with significant weather echoes to account for storm advection and growth, and periodically reactivates disabled beams to detect new storm development. Performance evaluations using archived and real-time data demonstrate substantial reductions in scan time, rapid detection of storm initiation, and high-fidelity coverage of active weather regions. These results highlight the potential of adaptive scanning strategies for improving weather observations using PAR systems.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"4 ","pages":"463-472"},"PeriodicalIF":0.0,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TD-AMRKNet-Based Radar Image Processing Framework for Sea Clutter Suppression
Pub Date: 2026-01-20 | DOI: 10.1109/TRS.2026.3656433 | IEEE Transactions on Radar Systems, vol. 4, pp. 387-406
Xiaolin Du;Di Ma;Xiaolong Chen;Guolong Cui;Jibin Zheng
Sea clutter significantly impacts the radar detection of maritime targets. Existing sea clutter suppression methods often face challenges in complex, dynamic marine environments, and their generalization capabilities may be limited. This article proposes a network architecture named triplet diffusion attention multiscale Res-KAN Net (TD-AMRKNet), based on a diffusion model and triplet attention. By introducing lightweight multiscale generalized spatial convolutions (multiscale-GSConvs) and several small model networks, TD-AMRKNet effectively reduces model parameters, making it a compact and efficient network. The AMRK module, designed with a gating mechanism, captures long-range dependencies in images. It also integrates multisource knowledge through cross-resolution image fusion, thereby enhancing semantic understanding and improving the representation of details and local features. TD-AMRKNet effectively suppresses sea clutter across different radar data types, including time–frequency spectrograms from staring radar and PPI images from scanning radar. Experimental results show that the model contains only 3.46-M parameters and has an overall average (OA) processing time of approximately 0.0305 s. It achieves competitive performance on six real-world sea clutter datasets, with clutter suppression effectiveness evaluated using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), P-S, and clutter suppression ratio (CSR) metrics.
{"title":"TD-AMRKNet-Based Radar Image Processing Framework for Sea Clutter Suppression","authors":"Xiaolin Du;Di Ma;Xiaolong Chen;Guolong Cui;Jibin Zheng","doi":"10.1109/TRS.2026.3656433","DOIUrl":"https://doi.org/10.1109/TRS.2026.3656433","url":null,"abstract":"Sea clutter significantly impacts the radar detection of maritime targets. Existing sea clutter suppression methods often face challenges in complex, dynamic marine environments, and their generalization capabilities may be limited. This article proposes a network architecture named triplet diffusion attention multiscale Res-KAN Net (TD-AMRKNet), based on a diffusion model and triplet attention. By introducing lightweight multiscale generalized spatial convolutions (multiscale-GSConvs) and several small model networks, TD-AMRKNet effectively reduces model parameters, making it a compact and efficient network. The AMRK module, designed with a gating mechanism, captures long-range dependencies in images. It also integrates multisource knowledge through cross-resolution image fusion, thereby enhancing semantic understanding and improving the representation of details and local features. TD-AMRKNet effectively suppresses sea clutter across different radar data types, including time–frequency spectrograms from staring radar and PPI images from scanning radar. Experimental results show that the model contains only 3.46-M parameters and requires approximately 0.0305 s for overall average (OA) processing. It achieves competitive performance on six real-world sea clutter datasets, with clutter suppression effectiveness evaluated using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), P-S, and clutter suppression ratio (CSR) metrics.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"4 ","pages":"387-406"},"PeriodicalIF":0.0,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time and Phase Synchronization of a Broadband Multistatic Imaging Radar Network Using Non-Cooperative Signals
Pub Date: 2026-01-15 | DOI: 10.1109/TRS.2026.3654640 | IEEE Transactions on Radar Systems, vol. 4, pp. 373-386
Fabian Hochberg;Matthias Jirousek;Simon Anger;Markus Peichl;Thomas Zwick
High-resolution multistatic imaging radar systems pose significant challenges to the employed synchronization schemes, as such radar networks need to operate coherently. High-resolution systems operating at a high center frequency, in particular, push the synchronization requirements into the single-digit picosecond regime. Within the project Imaging of Satellites in Space—Next Generation (IoSiS-NG), the task at hand is further challenged by the use of far baselines, where commonly used approaches fail, as no line-of-sight (LOS) free-space propagation or wired method can be employed. In this article, a robust synchronization method is presented that improves on well-established global navigation satellite system (GNSS)-based methods by about three orders of magnitude through the coordinated reception of non-cooperative (NC) signals at all participating nodes. Exploiting the identical signal payload at all stations, the timing and phase differences of the nodes can be tracked and corrected in the post-processing stage. Here, we demonstrate our newly developed algorithm through simulative studies and real-world experiments, using satellite broadcast television (TV) signals as NC signals to synchronize a high-resolution imaging radar, achieving a timing standard deviation of less than 1.8 ps and an X-band phase coherence of less than 2°, which allows interferometric or tomographic imaging principles to be used.
{"title":"Time and Phase Synchronization of a Broadband Multistatic Imaging Radar Network Using Non-Cooperative Signals","authors":"Fabian Hochberg;Matthias Jirousek;Simon Anger;Markus Peichl;Thomas Zwick","doi":"10.1109/TRS.2026.3654640","DOIUrl":"https://doi.org/10.1109/TRS.2026.3654640","url":null,"abstract":"High-resolution multistatic imaging radar systems pose significant challenges to the employed synchronization schemes, as such radar networks need to operate coherently. Especially high-resolution systems operating at a high center frequency push the required synchronization requirements into the single-digit picosecond regime. Within the project Imaging of Satellites in Space—Next Generation (IoSiS-NG), the task at hand is further challenged by the use of far baselines, where commonly seen approaches fail, as no line-of-sight (LOS) free-space propagation or wired method can be employed. In this article, a robust synchronization method is presented that elevates well-established global navigation satellite systems (GNSSs)-based methods by about three orders of magnitude through the coordinated reception of non-cooperative (NC) signals at all participating nodes. Exploiting the identical signal payload at all stations, the timing and phase differences of the nodes can be tracked and corrected in the post-processing stage. Here, we demonstrate our newly developed algorithm, simulative studies and real-world experiments using satellite broadcast television (TV) signals as NC signals to synchronize a high-resolution imaging radar achieving a timing standard deviation of less than 1.8 ps and a phase coherence for the X-band radar of less than 2° allowing interferometric or tomographic imaging principles to be used.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"4 ","pages":"373-386"},"PeriodicalIF":0.0,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collaborative Learning of Scattering and Deep Features for SAR Target Recognition With Noisy Labels
Pub Date: 2026-01-15 | DOI: 10.1109/TRS.2026.3654779 | IEEE Transactions on Radar Systems, vol. 4, pp. 359-372
Yimin Fu;Zhunga Liu;Dongxiu Guo;Longfei Wang
The acquisition of high-quality labeled synthetic aperture radar (SAR) data is challenging due to the demanding requirement for expert knowledge. Consequently, the presence of unreliable noisy labels is unavoidable, which results in performance degradation of SAR automatic target recognition (ATR). Existing research on learning with noisy labels mainly focuses on image data. However, the nonintuitive visual characteristics of SAR data are insufficient to achieve noise-robust learning. To address this problem, we propose collaborative learning of scattering and deep features (CLSDF) for SAR ATR with noisy labels. Specifically, a multimodel feature fusion framework is designed to integrate scattering and deep features. The attributed scattering centers (ASCs) are treated as dynamic graph structure data, and the extracted physical characteristics effectively enrich the representation of deep image features. Then, the samples with clean and noisy labels are divided by modeling the loss distribution with multiple class-wise Gaussian mixture models (GMMs). Afterward, the semi-supervised learning of two divergent branches is conducted based on the data divided by each other. Moreover, a joint distribution alignment (JDA) strategy is introduced to enhance the reliability of coguessed labels. Extensive experiments have been done on the moving and stationary target acquisition and recognition (MSTAR) and SAR-ACD datasets, and the results show that the proposed method can achieve state-of-the-art performance under different operating conditions with various label noises. The code is released at https://github.com/fuyimin96/CLSDF.
{"title":"Collaborative Learning of Scattering and Deep Features for SAR Target Recognition With Noisy Labels","authors":"Yimin Fu;Zhunga Liu;Dongxiu Guo;Longfei Wang","doi":"10.1109/TRS.2026.3654779","DOIUrl":"https://doi.org/10.1109/TRS.2026.3654779","url":null,"abstract":"The acquisition of high-quality labeled synthetic aperture radar (SAR) data is challenging due to the demanding requirement for expert knowledge. Consequently, the presence of unreliable noisy labels is unavoidable, which results in performance degradation of SAR automatic target recognition (ATR). Existing research on learning with noisy labels mainly focuses on image data. However, the nonintuitive visual characteristics of SAR data are insufficient to achieve noise-robust learning. To address this problem, we propose collaborative learning of scattering and deep features (CLSDFs) for SAR ATR with noisy labels. Specifically, a multimodel feature fusion framework is designed to integrate scattering and deep features. The attributed scattering centers (ASCs) are treated as dynamic graph structure data, and the extracted physical characteristics effectively enrich the representation of deep image features. Then, the samples with clean and noisy labels are divided by modeling the loss distribution with multiple class-wise Gaussian mixture models (GMMs). Afterward, the semi-supervised learning of two divergent branches is conducted based on the data divided by each other. Moreover, a joint distribution alignment (JDA) strategy is introduced to enhance the reliability of coguessed labels. Extensive experiments have been done on the moving and stationary target acquisition and recognition (MSTAR) and SAR-ACD datasets, and the results show that the proposed method can achieve state-of-the-art performance under different operating conditions with various label noises. The code is released at <uri>https://github.com/fuyimin96/CLSDF</uri>","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"4 ","pages":"359-372"},"PeriodicalIF":0.0,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
2025 Index IEEE Transactions on Radar Systems
Pub Date: 2026-01-14 | DOI: 10.1109/TRS.2026.3654820 | IEEE Transactions on Radar Systems, vol. 3, pp. 1489-1515
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11353370
Unsupervised Federated Learning With Harmonized Foundation Models for FMCW Radar-Based Hand Gesture Recognition
Pub Date: 2026-01-14 | DOI: 10.1109/TRS.2026.3654128 | IEEE Transactions on Radar Systems, vol. 4, pp. 443-462
Tobias Sukianto;Matthias Wagner;Maximilian Strobel;Sarah Seifi;Cecilia Carbonelli;Mario Huemer
Frequency-modulated continuous wave (FMCW) radar-based hand gesture recognition (HGR) systems face deployment challenges due to variations in radar hardware, antenna layouts, and gesture classes, which cause distributional shifts across devices. These shifts limit the effectiveness of transfer learning (TL), which, while helpful for reusing knowledge, struggles with significant hardware and configuration changes and often requires substantial labeled data from the target domain. We present a radar-specific adaptation framework that enables cross-device gesture recognition with minimal labeled data. A key component is the harmonization module (HM), which performs signal-level transformations to align the range and Doppler dimensions of radar data across differing configurations. In parallel, radar-specific data augmentation techniques simulate missing antenna channels and gesture variability to improve pretraining robustness. A transformer-based foundation model is pretrained on harmonized and augmented data from a source radar and then fine-tuned using a small number of labeled samples from the target configuration. The adapted model is distilled into a lightweight architecture and deployed to clients sharing the same radar setup using an unsupervised federated learning (FL) pipeline. This enables on-device model refinement using only unlabeled data. Experiments on public datasets from Infineon and Texas Instruments radars show that our method achieves over 94% accuracy with just 20 labeled samples per class, outperforming baselines by more than 10%, and converging with four times fewer communication rounds during FL.
{"title":"Unsupervised Federated Learning With Harmonized Foundation Models for FMCW Radar-Based Hand Gesture Recognition","authors":"Tobias Sukianto;Matthias Wagner;Maximilian Strobel;Sarah Seifi;Cecilia Carbonelli;Mario Huemer","doi":"10.1109/TRS.2026.3654128","DOIUrl":"https://doi.org/10.1109/TRS.2026.3654128","url":null,"abstract":"Frequency-modulated continuous wave (FMCW) radar-based hand gesture recognition (HGR) systems face deployment challenges due to variations in radar hardware, antenna layouts, and gesture classes, which cause distributional shifts across devices. These shifts limit the effectiveness of transfer learning (TL), which, while helpful for reusing knowledge, struggles with significant hardware and configuration changes and often requires substantial labeled data from the target domain. We present a radar-specific adaptation framework that enables cross-device gesture recognition with minimal labeled data. A key component is the harmonization module (HM), which performs signal-level transformations to align the range and Doppler dimensions of radar data across differing configurations. In parallel, radar-specific data augmentation techniques simulate missing antenna channels and gesture variability to improve pretraining robustness. A transformer-based foundation model is pretrained on harmonized and augmented data from a source radar and then fine-tuned using a small number of labeled samples from the target configuration. The adapted model is distilled into a lightweight architecture and deployed to clients sharing the same radar setup using an unsupervised federated learning (FL) pipeline. This enables on-device model refinement using only unlabeled data. Experiments on public datasets from Infineon and Texas Instruments radars show that our method achieves over 94% accuracy with just 20 labeled samples per class, outperforming baselines by more than 10%, and converging with four times fewer communication rounds during FL.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"4 ","pages":"443-462"},"PeriodicalIF":0.0,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancement of Forward-Looking Imaging Based on Sparse MIMO Radar via Motion-Based Multiframe Inversion
Pub Date: 2026-01-12 | DOI: 10.1109/TRS.2026.3652673 | IEEE Transactions on Radar Systems, vol. 4, pp. 353-358
Qiqiang Zou;Hongda Guan;Chang Chen;Yingbin Chen;Yixiong Zhang;Caipin Li;Sailong Yang
Radar forward-looking (FL) imaging plays a crucial role in achieving high-resolution environmental perception. A sparse array configuration can improve the imaging resolution, but it is inevitably accompanied by severe grating lobes. Although multiframe observations can be used to mitigate the influence of grating lobes, they often introduce additional artifacts. Existing approaches remain inadequate in artifact suppression and unavoidably attenuate weak targets. To address these issues, we propose an adaptive weight strategy that leverages the statistical characteristics of multiframe data to suppress artifacts and improve the fidelity of target amplitude reconstruction. First, the energy of the multichannel signals is computed for each frame, forming a multiframe energy sequence. Second, an adaptive weight is calculated based on the mean and variance of this sequence. Finally, an exponential factor of the weight is applied to enhance the spectral reconstruction by enlarging the discrimination between true targets and artifacts. The effectiveness of the proposed method is validated through simulation and measurement experiments, demonstrating its advantages over existing methods.
{"title":"Enhancement of Forward-Looking Imaging Based on Sparse MIMO Radar via Motion-Based Multiframe Inversion","authors":"Qiqiang Zou;Hongda Guan;Chang Chen;Yingbin Chen;Yixiong Zhang;Caipin Li;Sailong Yang","doi":"10.1109/TRS.2026.3652673","DOIUrl":"https://doi.org/10.1109/TRS.2026.3652673","url":null,"abstract":"Radar forward-looking (FL) imaging plays a crucial role in achieving high-resolution environmental perception. When a sparse array configuration is used, the imaging resolution can be improved. However, it is inevitably accompanied by severe grating lobes. Although multiframe observations can be used to mitigate the influence of grating lobes, they often introduce additional artifacts. Existing approaches remain inadequate in artifact suppression and unavoidably attenuate weak targets. To address these issues, we propose an adaptive weight strategy that leverages the statistical characteristics of multiframe data to suppress artifacts and improve the fidelity of target amplitude reconstruction. First, the energy of multichannel signals is computed for each frame, forming a multiframe energy sequence. Second, an adaptive weight is calculated based on the mean and variance of this sequence. Finally, an exponential factor of the weight is applied to enhance the spectral reconstruction by enlarging the discrimination between true targets and artifacts. The effectiveness of the proposed method is validated through simulation and measurement experiments, demonstrating its advantages over existing methods.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"4 ","pages":"353-358"},"PeriodicalIF":0.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}