
Latest Publications in IEEE Transactions on Radar Systems

SAR-PMAE: Phase-Modulation-Guided Adversarial Example Generation for SAR Images
Pub Date : 2026-01-26 DOI: 10.1109/TRS.2026.3657792
Jiahao Cui;Wang Guo;Qi Wang;Keyi Zhang;Lingquan Meng;Haifeng Li
The presence of adversarial examples can cause synthetic aperture radar (SAR) image classification systems to produce incorrect predictions, severely compromising their accuracy and robustness. However, existing adversarial example generation methods for SAR images often suffer from limited physical interpretability, low perturbation energy utilization, and insufficient perturbation stealthiness. To address the above issues, this article proposes SAR-PMAE, a phase-modulation-guided adversarial example generation method for SAR images. In particular, to tackle the lack of physical interpretability, a phase-modulation strategy based on the point-target scattering model is employed, coupling adversarial perturbations with the EM scattering mechanism to enhance their physical interpretability; to improve perturbation energy utilization, a local focusing strategy is adopted, in which foreground targets are precisely segmented and centered via a watershed algorithm combined with morphological operations, concentrating perturbations in strong response regions and avoiding redundant energy dispersion into background clutter; and to enhance perturbation stealthiness, constraints are applied separately to the phase in the complex coherent domain, thereby maximally preserving SAR speckle statistics and improving perturbation concealment. Experimental results show that the proposed method achieves an average untargeted attack success rate (ASR) of 55.53% and a targeted ASR of 20.92% across ten classifiers based on convolutional neural network (CNN) and Transformer architectures, while exhibiting strong attack transferability. The complete implementation is publicly available at https://github.com/muzhengcui/SAR-PMAE
IEEE Transactions on Radar Systems, vol. 4, pp. 407–429. Citations: 0
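The local focusing idea in this abstract (concentrating perturbations in strong-response regions rather than background clutter) can be illustrated with a toy mask-building step. Note this is not the authors' SAR-PMAE code: the paper uses a watershed algorithm plus morphological operations, while this sketch substitutes a simpler percentile threshold followed by a morphological opening, all in pure NumPy; every name and parameter below is illustrative.

```python
import numpy as np

def _shift_stack(mask, size=3):
    # Stack all size x size neighborhood shifts of a binary mask.
    pad = size // 2
    padded = np.pad(mask, pad, mode="constant")
    h, w = mask.shape
    return np.stack([padded[i:i + h, j:j + w]
                     for i in range(size) for j in range(size)])

def binary_dilate(mask):
    return _shift_stack(mask).any(axis=0)

def binary_erode(mask):
    return _shift_stack(mask).all(axis=0)

def focus_mask(amplitude, percentile=90):
    """Binary mask of strong-response regions: threshold, then a
    morphological opening to kill isolated speckle, then one dilation
    to leave a small margin around the segmented foreground target."""
    strong = amplitude > np.percentile(amplitude, percentile)
    opened = binary_dilate(binary_erode(strong))
    return binary_dilate(opened)

# Toy SAR-like amplitude image: Rayleigh background plus one bright 6x6 target.
rng = np.random.default_rng(0)
img = rng.rayleigh(0.3, size=(64, 64))
img[20:26, 30:36] += 5.0
mask = focus_mask(img)
```

Perturbation energy would then be applied only where `mask` is true, avoiding redundant energy dispersion into the clutter background.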
Spin Estimation and 3-D Geometry Reconstruction Using Multiperspective Space-Borne Sub-THz Interferometric Inverse Synthetic Aperture Radar
Pub Date : 2026-01-26 DOI: 10.1109/TRS.2026.3657920
Bangjie Zhang;Marina S. Gashinova;Marco Martorella
Sub-terahertz (sub-THz) radar enables high-resolution and highly detailed imaging of space objects, offering substantial advantages for space domain awareness (SDA). Accurate estimation of the spin motion and 3-D geometry of noncooperative space targets is essential for SDA tasks such as target characterization, autonomous rendezvous and proximity operations (RPO), and debris removal. Interferometric ISAR (InISAR), which generates 3-D images in the form of point clouds, can also resolve the effective rotation vector, defined as the component of the total rotation vector perpendicular to the line of sight (LOS). This component is critical for 2-D image scaling and for enhancing understanding of the imaging projection plane (IPP). However, the component along the LOS remains unknown, which limits the ability to fully exploit multiperspective ISAR imaging to characterize targets. In this article, a novel spin estimation framework is proposed based on multiperspective InISAR, which leverages viewing-angle diversity during on-orbit observation to resolve the total rotation vector of space targets. The proposed method performs InISAR imaging and estimates the effective rotation vector for each perspective, and then combines the effective rotation vectors to determine the total rotation vector as a unified least-squares solution. The framework requires no prior 3-D model and achieves robust spin estimation.
IEEE Transactions on Radar Systems, vol. 4, pp. 473–485. Citations: 0
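Assuming, as the abstract states, that each perspective yields the effective rotation vector (the projection of the total rotation vector onto the plane perpendicular to that perspective's LOS), the unified least-squares combination can be sketched as follows. The function name and synthetic values are illustrative, not taken from the paper.

```python
import numpy as np

def total_rotation_lstsq(los_dirs, omega_effs):
    """Recover the total rotation vector omega from per-perspective
    effective rotation vectors omega_eff_i = (I - u_i u_i^T) @ omega,
    where u_i is the unit LOS of perspective i."""
    A_rows, b_rows = [], []
    for u, w_eff in zip(los_dirs, omega_effs):
        u = np.asarray(u, float) / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)  # projector onto plane perpendicular to LOS
        A_rows.append(P)
        b_rows.append(np.asarray(w_eff, float))
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega

# Synthetic check: two sufficiently different viewing angles already make the
# stacked system full rank, so all three components are resolved.
omega_true = np.array([0.02, -0.05, 0.11])  # rad/s, arbitrary
los = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
effs = [(np.eye(3) - np.outer(u, u)) @ omega_true for u in los]
omega_est = total_rotation_lstsq(los, effs)
```

A single perspective leaves the LOS component unobserved (the stacked matrix is rank deficient), which is exactly the limitation the multiperspective framework removes.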
Approximation of the Range Ambiguity Function in Near-Field Sensing Systems
Pub Date : 2026-01-22 DOI: 10.1109/TRS.2026.3657154
Marcin Wachowiak;André Bourdoux;Sofie Pollin
This article investigates the range ambiguity function of near-field (NF) systems where bandwidth and NF beamfocusing jointly determine the resolution. First, the general matched filter ambiguity function is derived and the NF array factors of different antenna array geometries are introduced. Next, the NF ambiguity function is approximated as a product of the range-dependent NF array factor and the ambiguity function due to the utilized waveform and bandwidth. An approximation criterion based on the aperture–bandwidth product is formulated, and its accuracy is examined. Finally, the improvements to the ambiguity function offered by the NF beamfocusing, as compared to the far-field case, are presented. The performance gains are evaluated in terms of resolution improvement offered by beamfocusing, peak-to-sidelobe, and integrated-sidelobe-level improvement for a few popular array geometries. The gains offered by the NF regime are shown to be range-dependent and substantial only in close proximity to the array.
IEEE Transactions on Radar Systems, vol. 4, pp. 430–442. Citations: 0
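The approximation described above (range ambiguity as the product of a range-dependent NF array factor and the waveform ambiguity) can be sketched numerically for a broadside uniform linear array. This is a hedged illustration: the carrier, bandwidth, and array size are arbitrary, and a generic sinc model stands in for the bandwidth-limited matched-filter response rather than any specific waveform from the paper.

```python
import numpy as np

c = 3e8
fc, B = 28e9, 500e6            # illustrative carrier and bandwidth
lam = c / fc
N, d = 256, lam / 2            # ULA: N elements, half-wavelength spacing
x_el = (np.arange(N) - (N - 1) / 2) * d  # element positions along the array

def nf_array_factor(r_focus, r_eval):
    """Normalized NF array factor of a broadside ULA focused at range r_focus,
    evaluated at ranges r_eval on the boresight axis."""
    d_f = np.sqrt(r_focus**2 + x_el**2)           # element-to-focal-point paths
    d_e = np.sqrt(np.add.outer(r_eval**2, x_el**2))
    phase = 2 * np.pi / lam * (d_e - d_f)         # residual phase after focusing
    return np.abs(np.exp(1j * phase).sum(axis=1)) / N

def waveform_af(delta_r):
    """Generic sinc model of the matched-filter range response for bandwidth B;
    first null at c / (2 B)."""
    return np.abs(np.sinc(2 * B * delta_r / c))

r_f = 2.0                                          # focal range [m], deep in the NF
r = np.linspace(1.0, 3.0, 2001)                    # 1 mm grid, includes r_f exactly
approx_af = nf_array_factor(r_f, r) * waveform_af(r - r_f)
```

With this 1.37 m aperture the Fraunhofer distance is hundreds of meters, so at 2 m the beamfocusing factor narrows the product response beyond what the bandwidth term alone provides, consistent with the abstract's claim that the NF gains are substantial only close to the array.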
Adaptive Focused Observations on the Advanced Technology Demonstrator Phased-Array Radar
Pub Date : 2026-01-21 DOI: 10.1109/TRS.2026.3656643
Sebastián M. Torres;Christopher D. Curtis;Robert J. Estes;Stephen B. Gregg
The advanced technology demonstrator (ATD) is a full-scale, S-band, dual-polarization phased-array radar (PAR) developed as a proof of concept to evaluate the capabilities of electronically scanned radars for weather observation. Recent upgrades to the ATD have introduced a closed-loop adaptive scanning framework and an implementation of the adaptive digital signal-processing algorithm for PAR timely scans (ADAPTS) to enable adaptive focused observations. ADAPTS leverages the radar’s beam agility to dynamically concentrate measurements in regions with significant weather returns, reducing the total scan time while preserving data quality and meaningful volumetric coverage. The algorithm selectively enables (disables) beams with (without) significant weather echoes, enables beams in a neighborhood of beams with significant weather echoes to account for storm advection and growth, and periodically reactivates disabled beams to detect new storm development. Performance evaluations using archived and real-time data demonstrate substantial reductions in scan time, rapid detection of storm initiation, and high-fidelity coverage of active weather regions.
IEEE Transactions on Radar Systems, vol. 4, pp. 463–472. Citations: 0
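The three-part beam-selection rule in the abstract (enable beams with significant echoes, grow a neighborhood around them for advection, and periodically reactivate everything) maps naturally onto boolean-grid operations. The sketch below is a simplification under stated assumptions, not the ADAPTS implementation: beam grid shape, neighborhood size, and revisit period are all hypothetical.

```python
import numpy as np

def select_beams(significant, scan_idx, revisit_period=5):
    """Simplified ADAPTS-style beam schedule for one scan.

    significant : bool grid (azimuth x elevation) of beams whose previous
                  scan contained significant weather echoes.
    Returns a bool grid of beams to transmit on this scan.
    """
    # Enable a one-beam neighborhood to account for storm advection/growth.
    # np.roll wraps around, which is physically sensible in azimuth for a
    # 360-degree scan; a real implementation would clamp in elevation.
    grown = significant.copy()
    for da in (-1, 0, 1):
        for de in (-1, 0, 1):
            grown |= np.roll(np.roll(significant, da, 0), de, 1)
    active = significant | grown
    # Periodically reactivate every beam to detect new storm development.
    if scan_idx % revisit_period == 0:
        active[:] = True
    return active

sig = np.zeros((36, 10), dtype=bool)
sig[10:13, 2:5] = True                  # one storm cell, 3x3 beams
plan = select_beams(sig, scan_idx=3)    # focused scan: cell plus its margin
```

Scan time falls roughly in proportion to the fraction of beams disabled, which is the trade the abstract reports: much faster volumes while active weather regions stay covered.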
TD-AMRKNet-Based Radar Image Processing Framework for Sea Clutter Suppression
Pub Date : 2026-01-20 DOI: 10.1109/TRS.2026.3656433
Xiaolin Du;Di Ma;Xiaolong Chen;Guolong Cui;Jibin Zheng
Sea clutter significantly impacts the radar detection of maritime targets. Existing sea clutter suppression methods often struggle in complex, dynamic marine environments, and their generalization capabilities may be limited. This article proposes a network architecture named triplet diffusion attention multiscale Res-KAN Net (TD-AMRKNet), based on a diffusion model and triplet attention. By introducing lightweight multiscale generalized spatial convolutions (multiscale-GSConvs) and several small model networks, TD-AMRKNet effectively reduces the number of model parameters, making it a compact and efficient network. The AMRK module, designed with a gating mechanism, captures long-range dependencies in images. It also integrates multisource knowledge through cross-resolution image fusion, thereby enhancing semantic understanding and improving the representation of details and local features. TD-AMRKNet effectively suppresses sea clutter across different radar data types, including time–frequency spectrograms from staring radar and PPI images from scanning radar. Experimental results show that the model contains only 3.46 M parameters and requires approximately 0.0305 s of overall average (OA) processing time.
IEEE Transactions on Radar Systems, vol. 4, pp. 387–406. Citations: 0
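Two of the evaluation metrics named in the abstract, PSNR and the clutter suppression ratio (CSR), have compact standard definitions that can be written directly. These are the textbook formulas, not the paper's evaluation code; the toy images and the `clutter_suppression_ratio` signature are illustrative assumptions.

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a clean reference image and a
    clutter-suppressed output: 10 log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return np.inf
    return 10 * np.log10(data_range**2 / mse)

def clutter_suppression_ratio(before, after, clutter_mask):
    """Ratio (dB) of mean clutter power before vs. after suppression,
    measured over clutter-only pixels."""
    p_before = np.mean(np.abs(before[clutter_mask]) ** 2)
    p_after = np.mean(np.abs(after[clutter_mask]) ** 2)
    return 10 * np.log10(p_before / p_after)

# Toy scene: one point target in Gaussian clutter, then a pretend 40 dB cleanup.
rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[16, 16] = 1.0
clutter = 0.1 * rng.standard_normal((32, 32))
noisy = clean + clutter
suppressed = clean + 0.01 * clutter
mask = np.ones_like(clean, dtype=bool); mask[16, 16] = False  # exclude target
```

Higher PSNR/SSIM mean the suppressed image stays faithful to the clean scene; higher CSR means more clutter power removed, which is how the abstract's six-dataset comparison is scored.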
Time and Phase Synchronization of a Broadband Multistatic Imaging Radar Network Using Non-Cooperative Signals
Pub Date : 2026-01-15 DOI: 10.1109/TRS.2026.3654640
Fabian Hochberg;Matthias Jirousek;Simon Anger;Markus Peichl;Thomas Zwick
High-resolution multistatic imaging radar systems pose significant challenges to the employed synchronization schemes, as such radar networks need to operate coherently. High-resolution systems operating at a high center frequency in particular push the synchronization requirements into the single-digit-picosecond regime. Within the project Imaging of Satellites in Space—Next Generation (IoSiS-NG), the task at hand is further challenged by the use of far baselines, where commonly used approaches fail, as no line-of-sight (LOS) free-space propagation or wired method can be employed. In this article, a robust synchronization method is presented that improves on well-established global navigation satellite system (GNSS)-based methods by about three orders of magnitude through the coordinated reception of non-cooperative (NC) signals at all participating nodes. Exploiting the identical signal payload at all stations, the timing and phase differences of the nodes can be tracked and corrected in the post-processing stage. Here, we demonstrate our newly developed algorithm, simulation studies, and real-world experiments using satellite broadcast television (TV) signals as NC signals to synchronize a high-resolution imaging radar, achieving a timing standard deviation of less than 1.8 ps and an X-band phase coherence of better than 2°, which allows interferometric or tomographic imaging principles to be used.
IEEE Transactions on Radar Systems, vol. 4, pp. 373–386. Citations: 0
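The core idea, that all nodes record the same non-cooperative broadcast payload so their clock offsets can be measured against it, reduces at its coarsest level to cross-correlating the two recordings. The sketch below shows only that sample-level step on synthetic noise standing in for a TV signal; the paper's picosecond accuracy additionally relies on sub-sample and carrier-phase tracking that is not reproduced here.

```python
import numpy as np

def estimate_offset(x, y):
    """Estimate the delay of y relative to x, in integer samples, by locating
    the peak of their zero-padded (linear) cross-correlation via FFT."""
    n = len(x) + len(y) - 1
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    xc = np.fft.irfft(X.conj() * Y, n)       # c[m] = sum_k x[k] * y[k + m]
    lags = np.arange(n)
    lags[lags > n // 2] -= n                  # map circular indices to signed lags
    return lags[np.argmax(xc)]

# Two nodes receive the same broadcast payload; node B's clock lags by 37 samples.
rng = np.random.default_rng(2)
sig = rng.standard_normal(4096)               # stand-in for the NC broadcast signal
delay = 37
node_a = sig
node_b = np.concatenate([np.zeros(delay), sig])[: len(sig)]
offset = estimate_offset(node_a, node_b)
```

In the actual system this offset (after removing the known geometric path difference to the broadcast satellite) gives the inter-node timing error to be corrected in post-processing.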
Collaborative Learning of Scattering and Deep Features for SAR Target Recognition With Noisy Labels
Pub Date : 2026-01-15 DOI: 10.1109/TRS.2026.3654779
Yimin Fu;Zhunga Liu;Dongxiu Guo;Longfei Wang
The acquisition of high-quality labeled synthetic aperture radar (SAR) data is challenging due to the demanding requirement for expert knowledge. Consequently, the presence of unreliable noisy labels is unavoidable, which results in performance degradation of SAR automatic target recognition (ATR). Existing research on learning with noisy labels mainly focuses on image data. However, the nonintuitive visual characteristics of SAR data are insufficient to achieve noise-robust learning. To address this problem, we propose collaborative learning of scattering and deep features (CLSDF) for SAR ATR with noisy labels. Specifically, a multimodel feature fusion framework is designed to integrate scattering and deep features. The attributed scattering centers (ASCs) are treated as dynamic graph structure data, and the extracted physical characteristics effectively enrich the representation of deep image features. Then, the samples with clean and noisy labels are divided by modeling the loss distribution with multiple class-wise Gaussian mixture models (GMMs). Afterward, the semi-supervised learning of two divergent branches is conducted based on the data divided by each other. Moreover, a joint distribution alignment (JDA) strategy is introduced to enhance the reliability of coguessed labels. Extensive experiments have been done on the moving and stationary target acquisition and recognition (MSTAR) and SAR-ACD datasets, and the results show that the proposed method can achieve state-of-the-art performance under different operating conditions with various label noises. The code is released at https://github.com/fuyimin96/CLSDF
IEEE Transactions on Radar Systems, vol. 4, pp. 359–372. Citations: 0
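The clean/noisy sample division via a GMM on per-sample losses, common in noisy-label learning and applied class-wise in this paper, can be sketched for a single class with a minimal two-component 1-D EM fit in NumPy. This is an assumption-laden illustration (the paper does not publish this exact routine; initialization and iteration count are arbitrary), but it shows the mechanism: the low-mean component is interpreted as the clean-label population.

```python
import numpy as np

def fit_gmm_1d(losses, iters=100):
    """Fit a two-component 1-D Gaussian mixture to per-sample losses via EM.
    Returns the posterior probability that each sample belongs to the
    low-mean ("clean") component."""
    x = np.asarray(losses, float)
    mu = np.array([x.min(), x.max()])          # spread the components apart
    var = np.full(2, x.var()) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under each Gaussian.
        dens = pi / np.sqrt(2 * np.pi * var) * \
            np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return resp[:, np.argmin(mu)]

# Synthetic losses: 800 clean samples (low loss), 200 mislabeled (high loss).
rng = np.random.default_rng(3)
losses = np.concatenate([rng.normal(0.2, 0.05, 800),
                         rng.normal(1.5, 0.3, 200)])
p_clean = fit_gmm_1d(losses)
is_clean = p_clean > 0.5
```

Samples flagged clean keep their labels for supervised training; the rest are treated as unlabeled in the semi-supervised co-training stage the abstract describes.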
2025 Index IEEE Transactions on Radar Systems
Pub Date : 2026-01-14 DOI: 10.1109/TRS.2026.3654820
{"title":"2025 Index IEEE Transactions on Radar Systems","authors":"","doi":"10.1109/TRS.2026.3654820","DOIUrl":"https://doi.org/10.1109/TRS.2026.3654820","url":null,"abstract":"","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"1489-1515"},"PeriodicalIF":0.0,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11353370","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unsupervised Federated Learning With Harmonized Foundation Models for FMCW Radar-Based Hand Gesture Recognition
Pub Date : 2026-01-14 DOI: 10.1109/TRS.2026.3654128
Tobias Sukianto;Matthias Wagner;Maximilian Strobel;Sarah Seifi;Cecilia Carbonelli;Mario Huemer
Frequency-modulated continuous wave (FMCW) radar-based hand gesture recognition (HGR) systems face deployment challenges due to variations in radar hardware, antenna layouts, and gesture classes, which cause distributional shifts across devices. These shifts limit the effectiveness of transfer learning (TL), which, while helpful for reusing knowledge, struggles with significant hardware and configuration changes and often requires substantial labeled data from the target domain. We present a radar-specific adaptation framework that enables cross-device gesture recognition with minimal labeled data. A key component is the harmonization module (HM), which performs signal-level transformations to align the range and Doppler dimensions of radar data across differing configurations. In parallel, radar-specific data augmentation techniques simulate missing antenna channels and gesture variability to improve pretraining robustness. A transformer-based foundation model is pretrained on harmonized and augmented data from a source radar and then fine-tuned using a small number of labeled samples from the target configuration. The adapted model is distilled into a lightweight architecture and deployed to clients sharing the same radar setup using an unsupervised federated learning (FL) pipeline. This enables on-device model refinement using only unlabeled data. Experiments on public datasets from Infineon and Texas Instruments radars show that our method achieves over 94% accuracy with just 20 labeled samples per class, outperforming baselines by more than 10%, and converging with four times fewer communication rounds during FL.
{"title":"Unsupervised Federated Learning With Harmonized Foundation Models for FMCW Radar-Based Hand Gesture Recognition","authors":"Tobias Sukianto;Matthias Wagner;Maximilian Strobel;Sarah Seifi;Cecilia Carbonelli;Mario Huemer","doi":"10.1109/TRS.2026.3654128","DOIUrl":"https://doi.org/10.1109/TRS.2026.3654128","url":null,"abstract":"Frequency-modulated continuous wave (FMCW) radar-based hand gesture recognition (HGR) systems face deployment challenges due to variations in radar hardware, antenna layouts, and gesture classes, which cause distributional shifts across devices. These shifts limit the effectiveness of transfer learning (TL), which, while helpful for reusing knowledge, struggles with significant hardware and configuration changes and often requires substantial labeled data from the target domain. We present a radar-specific adaptation framework that enables cross-device gesture recognition with minimal labeled data. A key component is the harmonization module (HM), which performs signal-level transformations to align the range and Doppler dimensions of radar data across differing configurations. In parallel, radar-specific data augmentation techniques simulate missing antenna channels and gesture variability to improve pretraining robustness. A transformer-based foundation model is pretrained on harmonized and augmented data from a source radar and then fine-tuned using a small number of labeled samples from the target configuration. The adapted model is distilled into a lightweight architecture and deployed to clients sharing the same radar setup using an unsupervised federated learning (FL) pipeline. This enables on-device model refinement using only unlabeled data. Experiments on public datasets from Infineon and Texas Instruments radars show that our method achieves over 94% accuracy with just 20 labeled samples per class, outperforming baselines by more than 10%, and converging with four times fewer communication rounds during FL.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"4 ","pages":"443-462"},"PeriodicalIF":0.0,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
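The harmonization module described above aligns the range and Doppler dimensions of radar data recorded under differing configurations. A minimal sketch of such signal-level alignment is to resample every range-Doppler map onto a common grid; the function name and the plain separable linear interpolation are illustrative assumptions, not the paper's HM implementation:

```python
import numpy as np

def harmonize_rd_map(rd_map, target_shape):
    """Resample a (range x Doppler) map to a common grid via separable
    1-D linear interpolation along each axis."""
    rd_map = np.asarray(rd_map, dtype=float)
    out_r, out_d = target_shape
    in_r, in_d = rd_map.shape
    # Interpolate along the range axis onto out_r sample positions.
    r_src = np.linspace(0, in_r - 1, out_r)
    tmp = np.empty((out_r, in_d))
    for j in range(in_d):
        tmp[:, j] = np.interp(r_src, np.arange(in_r), rd_map[:, j])
    # Interpolate along the Doppler axis onto out_d sample positions.
    d_src = np.linspace(0, in_d - 1, out_d)
    out = np.empty((out_r, out_d))
    for i in range(out_r):
        out[i] = np.interp(d_src, np.arange(in_d), tmp[i])
    return out
```

After this step, maps from radars with different chirp counts or range bins share one input shape, so a single foundation model can be pretrained across devices.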
Enhancement of Forward-Looking Imaging Based on Sparse MIMO Radar via Motion-Based Multiframe Inversion
Pub Date : 2026-01-12 DOI: 10.1109/TRS.2026.3652673
Qiqiang Zou;Hongda Guan;Chang Chen;Yingbin Chen;Yixiong Zhang;Caipin Li;Sailong Yang
Radar forward-looking (FL) imaging plays a crucial role in achieving high-resolution environmental perception. When a sparse array configuration is used, the imaging resolution can be improved. However, it is inevitably accompanied by severe grating lobes. Although multiframe observations can be used to mitigate the influence of grating lobes, they often introduce additional artifacts. Existing approaches remain inadequate in artifact suppression and unavoidably attenuate weak targets. To address these issues, we propose an adaptive weight strategy that leverages the statistical characteristics of multiframe data to suppress artifacts and improve the fidelity of target amplitude reconstruction. First, the energy of multichannel signals is computed for each frame, forming a multiframe energy sequence. Second, an adaptive weight is calculated based on the mean and variance of this sequence. Finally, an exponential factor of the weight is applied to enhance the spectral reconstruction by enlarging the discrimination between true targets and artifacts. The effectiveness of the proposed method is validated through simulation and measurement experiments, demonstrating its advantages over existing methods.
{"title":"Enhancement of Forward-Looking Imaging Based on Sparse MIMO Radar via Motion-Based Multiframe Inversion","authors":"Qiqiang Zou;Hongda Guan;Chang Chen;Yingbin Chen;Yixiong Zhang;Caipin Li;Sailong Yang","doi":"10.1109/TRS.2026.3652673","DOIUrl":"https://doi.org/10.1109/TRS.2026.3652673","url":null,"abstract":"Radar forward-looking (FL) imaging plays a crucial role in achieving high-resolution environmental perception. When a sparse array configuration is used, the imaging resolution can be improved. However, it is inevitably accompanied by severe grating lobes. Although multiframe observations can be used to mitigate the influence of grating lobes, they often introduce additional artifacts. Existing approaches remain inadequate in artifact suppression and unavoidably attenuate weak targets. To address these issues, we propose an adaptive weight strategy that leverages the statistical characteristics of multiframe data to suppress artifacts and improve the fidelity of target amplitude reconstruction. First, the energy of multichannel signals is computed for each frame, forming a multiframe energy sequence. Second, an adaptive weight is calculated based on the mean and variance of this sequence. Finally, an exponential factor of the weight is applied to enhance the spectral reconstruction by enlarging the discrimination between true targets and artifacts. The effectiveness of the proposed method is validated through simulation and measurement experiments, demonstrating its advantages over existing methods.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"4 ","pages":"353-358"},"PeriodicalIF":0.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
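The three-step adaptive weight strategy in the abstract above (per-frame multichannel energy, a weight from the mean and variance of the energy sequence, then an exponential factor) can be sketched as follows. The specific Gaussian-shaped weighting formula and the name `adaptive_frame_weights` are illustrative assumptions, not the paper's exact expression:

```python
import numpy as np

def adaptive_frame_weights(frames, gamma=2.0):
    """Compute adaptive per-frame weights from the multiframe energy sequence.

    frames: complex array of shape (n_frames, n_channels, n_samples).
    gamma:  exponential factor that enlarges the discrimination between
            consistently observed targets and frame-dependent artifacts.
    """
    frames = np.asarray(frames)
    # Step 1: energy of the multichannel signal in each frame.
    energy = np.sum(np.abs(frames) ** 2, axis=(1, 2))
    # Step 2: weight from the mean and variance of the energy sequence;
    # frames whose energy deviates strongly from the mean are down-weighted.
    mu, var = energy.mean(), energy.var()
    w = np.exp(-((energy - mu) ** 2) / (2.0 * var + 1e-12))
    # Step 3: apply the exponential factor and normalize.
    w = w ** gamma
    return w / w.sum()
```

A frame dominated by grating-lobe artifacts tends to be an energy outlier in the sequence, so it receives a small weight in the combined spectral reconstruction.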