Three-dimensional interferometric inverse synthetic aperture radar (3D-InISAR) imaging provides a more complete and reliable representation of targets than traditional 2D-ISAR, overcoming limitations related to the geometry of the radar-target system and their relative motion. This article presents the application of a point cloud transformer (PCT) for automatic target recognition (ATR) using 3D-InISAR data. The PCT model, originally developed to classify lidar point clouds, is trained on sparse synthetic point cloud datasets representing various military vehicles, including cars, tanks, and trucks. The synthetic data are carefully generated from computer-aided design (CAD) models, incorporating techniques such as voxel downsampling and data augmentation to ensure high fidelity and diversity. Initial testing on synthetic data demonstrates the PCT's robustness and high accuracy when used for ATR. To bridge the gap between synthetic and real data, a transfer learning approach is employed, which fine-tunes the pretrained model on real 3D-InISAR point clouds obtained from the publicly available sensor data management system (SDMS)-Air Force Research Laboratory (AFRL) dataset. Results show significant improvements in classification accuracy after fine-tuning, validating the effectiveness of the PCT model for real-world ATR applications. The findings highlight the potential of transformer-based models for future ATR systems based on 3-D radar images.
{"title":"Transformer-Based Automatic Target Recognition for 3D-InISAR","authors":"Giulio Meucci;Elisa Giusti;Ajeet Kumar;Francesco Mancuso;Selenia Ghio;Marco Martorella","doi":"10.1109/TRS.2025.3527281","DOIUrl":"https://doi.org/10.1109/TRS.2025.3527281","url":null,"abstract":"The 3-D interferometric inverse synthetic aperture radar (3D-InISAR) imaging provides a more complete and reliable representation of targets compared to traditional 2D-ISAR, overcoming limitations related to the geometry of the radar-target system and relative motion. This article presents the application of a point cloud transformer (PCT) for automatic target recognition (ATR) using 3D-InISAR data. The PCT model, originally developed to classify LIDAR’s point clouds, is trained on sparse synthetic point cloud datasets representing various military vehicles, including cars, tanks, and trucks. The synthetic data are carefully generated from computer-aided design (CAD) models, incorporating techniques such as voxel downsampling and data augmentation to ensure high fidelity and diversity. Initial testing on synthetic data demonstrates the PCT’s robustness and high accuracy when used for ATR. To bridge the gap between synthetic and real data, a transfer learning approach is employed, which operates a fine-tuning on the pretrained model by using real 3D-InISAR point clouds obtained from the publicly available sensor data management system (SDMS)-Air Force Research Laboratory (AFRL) dataset. Results show significant improvements in classification accuracy post-fine-tuning, validating the effectiveness of the PCT model for real-world ATR applications. 
The findings highlight the potential of transformer-based models in enhancing target recognition systems for future ATR systems based on 3-D radar images.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"180-192"},"PeriodicalIF":0.0,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10833573","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-08, DOI: 10.1109/TRS.2025.3527209
Xueru Bai;Xuchen Mao;Xudong Tian;Feng Zhou
For a micromotion space target, the narrowband radar cross section (RCS) series reflects the characteristics of target shape and motion. In practical scenarios, however, the RCS series of distant targets with weak scattering coefficients suffers from low signal-to-noise ratio (SNR), and performing noise suppression and recognition separately, purely on the amplitude, degrades recognition performance. To tackle this issue, an end-to-end complex-valued (CV) time convolutional attention denoising recognition network, dubbed CV-TCANet, is proposed. Specifically, the denoising module captures temporal correlation through a CV attention mechanism and computes a noise mask for denoising, while the recognition module uses a CV temporal convolutional network (CV-TCN) for feature extraction and recognition. In addition, a hybrid loss is designed to integrate denoising and recognition, preserving target information during denoising and improving recognition accuracy. Experimental results show that the proposed method achieves satisfactory recognition performance at low SNR.
{"title":"Recognition of Micromotion Space Targets at Low SNR Based on Complex-Valued Time Convolutional Attention Denoising Recognition Network","authors":"Xueru Bai;Xuchen Mao;Xudong Tian;Feng Zhou","doi":"10.1109/TRS.2025.3527209","DOIUrl":"https://doi.org/10.1109/TRS.2025.3527209","url":null,"abstract":"For a micromotion space target, its narrowband radar cross section (RCS) series reflects the characteristics of target shape and motion. In practical scenarios, however, the RCS series of distant targets with weak scattering coefficients suffers from low signal-to-noise ratio (SNR), and performing separate noise suppression and recognition purely on the amplitude results in degraded recognition performance. To tackle this issue, an end-to-end complex-valued (CV) time convolutional attention denoising recognition network, dubbed as CV-TCANet, is proposed. Specifically, the denoising module captures temporal correlation by the CV attention mechanism and calculates the noise mask for denoising; and the recognition module utilizes the CV temporal convolutional network (CV-TCN) for feature extraction and recognition. In addition, a hybrid loss is designed to realize the integration of denoising and recognition, thus preserving target information while denoising and improving the recognition accuracy. 
Experimental results have proved that the proposed method could achieve satisfying recognition performance at low SNR.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"193-202"},"PeriodicalIF":0.0,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-31, DOI: 10.1109/TRS.2024.3524574
Ferhat Can Ataman;Chethan Y. B. Kumar;Sandeep Rao;Sule Ozev
Millimeter-wave (mm-Wave) radars are used to determine an object’s position relative to the radar, based on parameters such as range (R), azimuth angle (θ), and elevation angle (φ). Radars typically operate by transmitting a chirp signal, receiving the reflected signal from objects in the environment, and combining these signals at the receiver (RX). In systems with multiple antennas, the range is calculated for each transmitter (TX)–RX pair, producing multiple measurements that are averaged to improve accuracy. Angle estimation, however, relies on analyzing phase differences between antenna paths, and since it involves a single calculation across all antenna components, it does not benefit from averaging. In addition to random errors, systematic errors also affect the angle estimation. Specifically, the object’s distance varies slightly across the virtual antennas (formed by TX-RX combinations), causing shifts in the peak position of range estimation. This phenomenon, known as range migration, introduces errors. This article examines the root causes of range migration and its impact on angle of arrival (AoA) estimation, proposing effective solutions to mitigate these effects and enhance the overall accuracy of angle estimation.
{"title":"Eliminating Range Migration Error in mm-Wave Radars for Angle of Arrival Estimation","authors":"Ferhat Can Ataman;Chethan Y. B. Kumar;Sandeep Rao;Sule Ozev","doi":"10.1109/TRS.2024.3524574","DOIUrl":"https://doi.org/10.1109/TRS.2024.3524574","url":null,"abstract":"Millimeter-wave (mm-Wave) radars are used to determine an object’s position relative to the radar, based on parameters such as range (R), azimuth angle (<inline-formula> <tex-math>$theta $ </tex-math></inline-formula>), and elevation angle (<inline-formula> <tex-math>$phi $ </tex-math></inline-formula>). Radars typically operate by transmitting a chirp signal, receiving the reflected signal from objects in the environment, and combining these signals at the receiver (RX). In systems with multiple antennas, the range is calculated for each transmitter (TX)–RX pair, producing multiple measurements that are averaged to improve accuracy. Angle estimation, however, relies on analyzing phase differences between antenna paths, and since it involves a single calculation across all antenna components, it does not benefit from averaging. In addition to random errors, systematic errors also affect the angle estimation. Specifically, the object’s distance varies slightly across the virtual antennas (formed by TX-RX combinations), causing shifts in the peak position of range estimation. This phenomenon, known as range migration, introduces errors. 
This article examines the root causes of range migration and its impact on angle of arrival (AoA) estimation, proposing effective solutions to mitigate these effects and enhance the overall accuracy of angle estimation.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"169-179"},"PeriodicalIF":0.0,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-27, DOI: 10.1109/TRS.2024.3523589
Huimin Liu;Jiawang Li;Zhang-Cheng Hao;Yun Hu;Gang Xu;Wei Hong
This article proposes a scatter suppression L-shaped phased-array imaging radar. The system operates at 24–26.4 GHz and is capable of 4-D imaging to determine the distance, elevation, azimuth, and speed of targets. It utilizes a frequency-modulated continuous-wave (FMCW) signal with a bandwidth of 2.4 GHz to extract range information, resulting in a range resolution of 62.5 mm. Orthogonal L-shaped linearly phased arrays are used for both transmission and reception. The azimuth and elevation angle information are obtained by switching the radiation beams of the phased arrays. The radar exhibits good scanning capabilities in 2-D space, with a scanning field of view (FOV) over 100° and an angular resolution of 3°. Importantly, the imaging artifacts due to multiple diffuse reflections can be suppressed by switching the transmit and receive phased-array antennas. A prototype is manufactured using printed circuit board technology, with a compact size of 23.5 × 23.5 cm². Experimental validation of the design has been conducted. The proposed radar architecture and array layout reduce the complexity of the baseband, offering advantages such as easy implementation, high integration, and low cost, showing promising prospects for potential sensing applications.
{"title":"A Planar Millimeter-Wave Diffuse-Reflection Suppression 4-D Imaging Radar Using L-Shaped Switchable Linearly Phased Array","authors":"Huimin Liu;Jiawang Li;Zhang-Cheng Hao;Yun Hu;Gang Xu;Wei Hong","doi":"10.1109/TRS.2024.3523589","DOIUrl":"https://doi.org/10.1109/TRS.2024.3523589","url":null,"abstract":"This article proposes a scatter suppression L-shaped phased-array imaging radar. The system operates at 24–26.4 GHz and is capable of 4-D imaging to determine the distance, elevation, azimuth, and speed of targets. It utilizes a frequency-modulated continuous-wave (FMCW) signal with a bandwidth of 2.4 GHz to extract range information, resulting in a range resolution of 62.5 mm. Orthogonal L-shaped linearly phased arrays are used for both transmission and reception. The azimuth and elevation angle information are obtained by switching the radiation beams of the phased arrays. The radar exhibits good scanning capabilities in 2-D space, with a scanning field of view (FOV) over 100° and an angular resolution of 3°. Importantly, the imaging artifacts due to multiple diffuse reflections can be suppressed by switching the transmit and receive phased-array antennas. A prototype is manufactured using the printed circuit board technology, which has a compact size of <inline-formula> <tex-math>$23.5times 23.5$ </tex-math></inline-formula> cm2. Experimental validation of the design has been conducted. 
The proposed radar architecture and array layout reduce the complexity of the baseband, offering advantages such as easy implementation, high integration, and low cost, showing promising prospects for potential sensing applications.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"155-168"},"PeriodicalIF":0.0,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-23, DOI: 10.1109/TRS.2024.3521814
Zhiding Yang;Weimin Huang
This study introduces a novel approach to mitigate the impact of rain on significant wave height (SWH) measurements using X-band marine radar. First, the proposed method uses a transformer-based segmentation model, SegFormer, to divide radar images into four distinct regions: clear wave signatures, rain-contaminated areas, low backscatter areas, and wind-dominated rain areas. Given that radar wave signatures in rain-contaminated regions are significantly blurred, this segmentation step identifies regions with clear wave signatures, ensuring that subsequent analysis is more accurate. Next, an iterative dehazing method, which adaptively enhances image clarity based on gradient standard deviation (GSD), is applied to achieve optimal dehazing effects. Finally, the segmented and dehazed polar radar images are transformed into Cartesian coordinates, where subimages from valid regions are selected for SWH estimation using the SWHFormer model. The radar dataset used for testing was collected from a shipborne Decca radar in a sea area 300 km from Halifax, Canada, in 2008. The SegFormer model demonstrates superior segmentation performance, with a 1.3% improvement in accuracy compared with the SegNet-based method. In addition, the iterative dehazing method significantly reduces haze effects in heavily contaminated images, outperforming traditional one-time dehazing methods in both precision and robustness for SWH estimation. Results show that the combination of segmentation and iterative dehazing reduces the root mean square deviation (RMSD) of SWH estimation from 0.42 and 0.33 to 0.28 m, compared with the existing support vector regression (SVR)-based and convolutional gated recurrent unit (CGRU)-based methods, and improves the correlation coefficient (CC) to 0.96. These advancements underscore the potential of integrating segmentation and adaptive dehazing for enhanced radar-based ocean monitoring under challenging meteorological conditions.
{"title":"Wave Height Estimation From Radar Images Under Rainy Conditions Based on Context-Aware Segmentation and Iterative Dehazing","authors":"Zhiding Yang;Weimin Huang","doi":"10.1109/TRS.2024.3521814","DOIUrl":"https://doi.org/10.1109/TRS.2024.3521814","url":null,"abstract":"This study introduces a novel approach to mitigate the impact of rain on significant wave height (SWH) measurements using X-band marine radar. First, the proposed method uses a transformer-based segmentation model, SegFormer, to divide radar images into four distinct regions: clear wave signatures, rain-contaminated areas, low backscatter areas, and wind-dominated rain areas. Given that radar wave signatures in rain-contaminated regions are significantly blurred, this segmentation step identifies regions with clear wave signatures, ensuring subsequent analysis to be more accurate. Next, an iterative dehazing method, which adaptively enhances image clarity based on gradient standard deviation (GSD), is applied to achieve optimal dehazing effects. Finally, the segmented and dehazed polar radar images are transformed into the Cartesian coordinates, where subimages from valid regions are selected for SWH estimation using the SWHFormer model. The radar dataset used for test was collected from a shipborne Decca radar in a sea area 300 km from Halifax, Canada, in 2008. The SegFormer model demonstrates superior segmentation performance, with 1.3% improvement in accuracy compared with the SegNet-based method. Besides, the iterative dehazing method significantly reduces haze effects in heavily contaminated images, outperforming traditional one-time dehazing methods in both precision and robustness for SWH estimation. 
Results show that the combination of segmentation and iterative dehazing reduces the root mean square deviation (RMSD) of SWH estimation from 0.42 and 0.33 to 0.28 m, compared with the existing support vector regression (SVR)-based and convolutional gated recurrent unit (CGRU)-based methods, and improves the correlation coefficient (CC) to 0.96. These advancements underscore the potential of integrating segmentation and adaptive dehazing for enhanced radar-based ocean monitoring under challenging meteorological conditions.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"101-114"},"PeriodicalIF":0.0,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-20, DOI: 10.1109/TRS.2024.3520733
Title: "2024 Index IEEE Transactions on Radar Systems Vol. 2." IEEE Transactions on Radar Systems, vol. 2, pp. 1229–1250. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10811761
Pub Date: 2024-12-17, DOI: 10.1109/TRS.2024.3518842
Daniel White;Mohammed Jahangir;Amit Kumar Mishra;Chris J. Baker;Michail Antoniou
Deep learning with convolutional neural networks (CNNs) has been widely utilized in radar research concerning automatic target recognition. Maximizing numerical metrics to gauge the performance of such algorithms does not necessarily correspond to model robustness against untested targets, nor does it lead to improved model interpretability. Approaches designed to explain the mechanisms behind the operation of a classifier on radar data are proliferating, but bring with them a significant computational and analysis overhead. This work uses an elementary unsupervised convolutional autoencoder (CAE) to learn a compressed representation of a challenging dataset of urban bird and drone targets, and subsequently examines whether a latent representation that preserves class labels leads to better classification performance after a separate supervised training stage. It is shown that a CAE that reduces the features output after each layer of the encoder gives rise to the best drone versus bird classifier. A clear connection between unsupervised evaluation via label preservation in the latent space and subsequent classification accuracy after supervised fine-tuning is shown, supporting further efforts to optimize radar data latent representations to enable optimal performance and model interpretability.
{"title":"Latent Variable and Classification Performance Analysis of Bird–Drone Spectrograms With Elementary Autoencoder","authors":"Daniel White;Mohammed Jahangir;Amit Kumar Mishra;Chris J. Baker;Michail Antoniou","doi":"10.1109/TRS.2024.3518842","DOIUrl":"https://doi.org/10.1109/TRS.2024.3518842","url":null,"abstract":"Deep learning with convolutional neural networks (CNNs) has been widely utilized in radar research concerning automatic target recognition. Maximizing numerical metrics to gauge the performance of such algorithms does not necessarily correspond to model robustness against untested targets, nor does it lead to improved model interpretability. Approaches designed to explain the mechanisms behind the operation of a classifier on radar data are proliferating, but bring with them a significant computational and analysis overhead. This work uses an elementary unsupervised convolutional autoencoder (CAE) to learn a compressed representation of a challenging dataset of urban bird and drone targets, and subsequently if apparent, the quality of the representation via preservation of class labels leads to better classification performance after a separate supervised training stage. It is shown that a CAE that reduces the features output after each layer of the encoder gives rise to the best drone versus bird classifier. 
A clear connection between unsupervised evaluation via label preservation in the latent space and subsequent classification accuracy after supervised fine-tuning is shown, supporting further efforts to optimize radar data latent representations to enable optimal performance and model interpretability.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"115-123"},"PeriodicalIF":0.0,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142976112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-17, DOI: 10.1109/TRS.2024.3518954
Brian W. Rybicki;Jill K. Nelson
A cognitive tracking radar continuously acquires, stores, and exploits knowledge from its target environment in order to improve kinematic tracking performance. In this work, we apply a reinforcement learning (RL) technique, API-DNN, based on approximate policy iteration (API) with a deep neural network (DNN) policy to cognitive radar tracking. API-DNN iteratively improves upon an initial base policy using repeated application of rollout and supervised learning. This approach can appropriately balance online versus offline computation in order to improve efficiency and can adapt to changes in problem specification through online replanning. Prior state-of-the-art cognitive radar tracking approaches either rely on sophisticated search procedures with heuristics and carefully selected hyperparameters or deep RL (DRL) agents based on exotic DNN architectures with poorly understood performance guarantees. API-DNN, instead, is based on well-known principles of rollout, Monte Carlo simulation, and basic DNN function approximation. We demonstrate the effectiveness of API-DNN in cognitive radar simulations based on a standard maneuvering target tracking benchmark scenario. We also show how API-DNN can implement online replanning with updated target information.
{"title":"Train Offline, Refine Online: Improving Cognitive Tracking Radar Performance With Approximate Policy Iteration and Deep Neural Networks","authors":"Brian W. Rybicki;Jill K. Nelson","doi":"10.1109/TRS.2024.3518954","DOIUrl":"https://doi.org/10.1109/TRS.2024.3518954","url":null,"abstract":"A cognitive tracking radar continuously acquires, stores, and exploits knowledge from its target environment in order to improve kinematic tracking performance. In this work, we apply a reinforcement learning (RL) technique, API-DNN, based on approximate policy iteration (API) with a deep neural network (DNN) policy to cognitive radar tracking. API-DNN iteratively improves upon an initial base policy using repeated application of rollout and supervised learning. This approach can appropriately balance online versus offline computation in order to improve efficiency and can adapt to changes in problem specification through online replanning. Prior state-of-the-art cognitive radar tracking approaches either rely on sophisticated search procedures with heuristics and carefully selected hyperparameters or deep RL (DRL) agents based on exotic DNN architectures with poorly understood performance guarantees. API-DNN, instead, is based on well-known principles of rollout, Monte Carlo simulation, and basic DNN function approximation. We demonstrate the effectiveness of API-DNN in cognitive radar simulations based on a standard maneuvering target tracking benchmark scenario. 
We also show how API-DNN can implement online replanning with updated target information.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"57-70"},"PeriodicalIF":0.0,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142905785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-17, DOI: 10.1109/TRS.2024.3519138
Evert I. Pocoma Copa;Hasan Can Yildirim;Jean-François Determe;François Horlin
Synthetic generation of radar signals is an attractive solution to alleviate the lack of standardized datasets containing paired radar and human-motion data. Unfortunately, current approaches in the literature, such as SimHumalator, fail to closely resemble real measurements and thus cannot be used alone in data-driven applications that rely on large training sets. Consequently, we propose an empirical signal model that considers the human body as an ensemble of extended targets. Unlike SimHumalator, which uses a single point scatterer per body part, our approach places multiple point scatterers on each body part. Our method does not rely on 3-D meshes but leverages primitive shapes fit to each body part, thereby making it possible to take advantage of publicly available motion-capture (MoCap) datasets. By carefully selecting the parameters of the proposed empirical model, we can generate Doppler-time spectrograms (DTSs) that better resemble real measurements, thus reducing the gap between synthetic and real data. Finally, we show the applicability of our approach in two different application use cases that leverage artificial neural networks (ANNs) to address activity classification and skeleton-joint velocity estimation.
{"title":"Synthetic Radar Signal Generator for Human Motion Analysis","authors":"Evert I. Pocoma Copa;Hasan Can Yildirim;Jean-François Determe;François Horlin","doi":"10.1109/TRS.2024.3519138","DOIUrl":"https://doi.org/10.1109/TRS.2024.3519138","url":null,"abstract":"Synthetic generation of radar signals is an attractive solution to alleviate the lack of standardized datasets containing paired radar and human-motion data. Unfortunately, current approaches in the literature, such as SimHumalator, fail to closely resemble real measurements and thus cannot be used alone in data-driven applications that rely on large training sets. Consequently, we propose an empirical signal model that considers the human body as an ensemble of extended targets. Unlike SimHumalator, which uses a single-point scatterer, our approach locates a multiple-point scatterer on each body part. Our method does not rely on 3-D-meshes but leverages primitive shapes fit to each body part, thereby making it possible to take advantage of publicly available motion-capture (MoCap) datasets. By carefully selecting the parameters of the proposed empirical model, we can generate Doppler-time spectrograms (DTSs) that better resemble real measurements, thus reducing the gap between synthetic and real data. 
Finally, we show the applicability of our approach in two different application use cases that leverage artificial neural networks (ANNs) to address activity classification and skeleton-joint velocity estimation.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"88-100"},"PeriodicalIF":0.0,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142905825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-13, DOI: 10.1109/TRS.2024.3516745
Hai Li;Yu Xiong;Boxin Zhang;Zihua Wu
Modeling nonspherical precipitation targets and calculating their scattering properties are key to simulating dual-polarization weather radar echoes and remote sensing. The invariant imbedding T-matrix (IITM) method, due to its accuracy and practicality in computing nonspherical precipitation targets, is the most promising approach. However, accurate echo simulation requires repeated calculation of the scattering amplitude matrices for precipitation targets at various diameters, involving iterative computations, which leads to significant memory usage and long computation times when using the IITM. Hence, there is an urgent need to improve the computational efficiency of the IITM when simulating nonspherical precipitation targets for dual-polarization weather radars. This article improves upon the traditional method of using ellipsoids for modeling precipitation targets by precisely considering particle shapes, employing various nonspherical particles, and dividing these targets into an inscribed homogeneous domain and an extended heterogeneous domain. For the homogeneous domain, the logarithmic-derivative Mie scattering method is used to improve computational efficiency, while the heterogeneous domain utilizes conventional iterative methods, rotational symmetry fast algorithms, and N-fold symmetry fast algorithms. The computed scattering amplitude matrices are integrated with the weather radar equation and pulse covariance matrix to complete echo simulations.
{"title":"Simulation of Precipitation Echoes From Airborne Dual-Polarization Weather Radar Based on a Fast Algorithm for Invariant Imbedding T-Matrix","authors":"Hai Li;Yu Xiong;Boxin Zhang;Zihua Wu","doi":"10.1109/TRS.2024.3516745","DOIUrl":"https://doi.org/10.1109/TRS.2024.3516745","url":null,"abstract":"Modeling nonspherical precipitation targets and calculating their scattering properties are key for simulating dual-polarization weather radar echoes and remote sensing. The invariant imbedding T-matrix (IITM) method, due to its accuracy and practicality in computing nonspherical precipitation targets, is the most promising approach. However, accurate echo simulation requires repeated calculations of the scattering amplitude matrices for precipitation targets at various diameters, involving iterative computations, which leads to significant memory usage and long computation times when using the IITM. Hence, enhancing the computational efficiency of the IITM in simulations of nonspherical precipitation targets in dual-polarization weather radars is urgent. This article improves upon the traditional method of using ellipsoids for modeling precipitation targets by precisely considering particle shapes, employing various nonspherical particles, and dividing these targets into an inscribed homogeneous domain and an extended heterogeneous domain. For the homogeneous domain, the logarithmic-derivative Mie scattering method is used to improve computational efficiency, while the heterogeneous domain utilizes conventional iterative methods, rotational symmetry fast algorithms, and N-fold symmetry fast algorithms. The computed scattering amplitude matrices are integrated with the weather radar equation and pulse covariance matrix to complete echo simulations. 
Analyzing the computational results from individual particles and overall calculations, experiments show that fast algorithms can increase the computational efficiency of simulating various nonspherical precipitation targets in airborne dual-polarization weather radars by more than tenfold.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"135-154"},"PeriodicalIF":0.0,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}