Synthetic aperture radar (SAR) imaging provides a method for increasing the resolution of small and low-cost frequency-modulated continuous wave (FMCW) multiple-input multiple-output (MIMO) radar sensors. Using SAR images as an alternative to traditional point cloud-based representations of the environment may improve the performance of simultaneous localization and mapping (SLAM) algorithms for mobile robots. This article presents the details of an indoor mobile robot system that fuses inertial measurement unit (IMU) measurements and radar velocity estimates from an incoherent network of automotive radar sensors using an error-state Kalman filter (ESKF). This trajectory estimate is used to create surround-view SAR images of the robot’s operating environment. The obtained trajectory accuracy is compared against a laboratory reference system, and high-resolution SAR imaging results are presented. The measurement results provide insights into the challenges of robotic millimeter-wave imaging in indoor scenarios.
Y. E. Ritterbusch, J. Fink, and C. Waldschmidt, "RIO-SAR: Synthetic Aperture Radar Imaging of Indoor Scenes Based on Radar-Inertial Odometry Using a Mobile Robot," IEEE Transactions on Radar Systems, vol. 2, pp. 1200–1213, published Oct. 30, 2024, doi: 10.1109/TRS.2024.3488474.
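The fusion step described in the abstract above can be illustrated with a minimal linear Kalman filter that propagates position and velocity from IMU acceleration samples and corrects the velocity with a radar ego-velocity measurement. This is a sketch only: the paper uses a full 3-D error-state (ESKF) formulation, while the 1-D state, matrices, and noise values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal 1-D Kalman filter: predict with IMU acceleration, correct with
# a radar ego-velocity measurement. All matrices and noise values are
# illustrative; the paper's filter is a 3-D error-state formulation.

dt = 0.1                        # IMU sample period (s), assumed
F = np.array([[1.0, dt],        # state transition; state x = [position, velocity]
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],    # control input: IMU acceleration sample
              [dt]])
H = np.array([[0.0, 1.0]])      # radar observes velocity only
Q = 1e-3 * np.eye(2)            # process noise (IMU noise / bias drift), assumed
R = np.array([[1e-2]])          # radar velocity measurement noise, assumed

x = np.zeros((2, 1))            # initial state
P = np.eye(2)                   # initial covariance

def predict(x, P, accel):
    """Propagate the state with one IMU acceleration sample."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, v_radar):
    """Correct the state with a radar ego-velocity estimate."""
    y = np.array([[v_radar]]) - H @ x       # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy run: constant 1 m/s^2 acceleration, radar reports the true velocity.
true_v = 0.0
for _ in range(50):
    true_v += 1.0 * dt
    x, P = predict(x, P, accel=1.0)
    x, P = update(x, P, v_radar=true_v)
```

The trajectory estimate obtained from such a filter is what the backprojection step consumes: each radar chirp is placed along the estimated path to synthesize the large aperture.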
Autonomous vehicles are becoming increasingly common with the rise of artificial-intelligence systems. In airborne navigation, however, autonomy remains a challenge, especially during the landing maneuver. To operate in all conditions (weather, day, and night) and at all airports, we propose a runway localization method based on images acquired by an onboard radar. The proposed algorithm is a radar data segmentation method designed for use on board an aircraft, providing the pilot, whether human or automatic, with a runway location prediction to facilitate and secure the landing maneuver. This article describes the acquisition and labeling of a large-scale real dataset covering 18 airports in France and Switzerland, and proposes an attention-based deep recurrent neural network (RNN) for semantic segmentation of 4-D radar data acquired during a landing maneuver. This end-to-end trainable network combines attention mechanisms adapted to the geometry of an approach scene with the exploitation of spatio-temporal information via recurrent cells, all associated with a convolutional segmentation model (patent pending). A sensitivity analysis on the Lyon airport data is used to tune the hyperparameters, demonstrating the benefit of adapting the attention sequence, in particular through the shape of the patches. The experimental results show the contribution of each block in the model, and extensive experiments on the other available airports validate the potential of the proposed network. Experiments show a considerable gain of about 0.17 in DICE score from the attention mechanisms and recurrent cells, and a gain of 0.1 compared with the SegFormer-B0 model.
S. Vilfroy, T. Urruty, P. Carré, J.-P. Lebrat, and L. Bombrun, "Attention-Based Deep Recurrent Neural Network for Semantic Segmentation of 4-D Radar Data Acquired During Landing Maneuver," IEEE Transactions on Radar Systems, vol. 2, pp. 1135–1147, published Oct. 30, 2024, doi: 10.1109/TRS.2024.3488475.
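The DICE gains reported above are measured with the standard overlap coefficient between a predicted segmentation mask and the ground truth, DICE = 2|A ∩ B| / (|A| + |B|). A minimal sketch with toy binary masks (the masks and mask semantics here are illustrative, not the paper's data):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """DICE coefficient for binary masks (1 = runway, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: the prediction overlaps the ground truth on 2 of 4 pixels.
gt = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
pr = np.array([[1, 1, 1, 1],
               [0, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
score = dice_score(pr, gt)   # 2*2 / (4 + 4) = 0.5
```

On this scale, an improvement of 0.17 is substantial: it corresponds to recovering a markedly larger fraction of the runway pixels without a matching growth in false positives.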
Radio frequency (RF) sensing applications such as RF waveform classification and human activity recognition (HAR) demand real-time processing capabilities. Current state-of-the-art techniques often require a two-stage process for classification: first computing a time-frequency (TF) transform, and then applying machine learning (ML) with the TF representation as the classifier input. This process hinders real-time classification. Consequently, there is growing interest in classifying directly from raw IQ RF data streams. Applying existing deep learning (DL) techniques directly to raw IQ radar data has shown limited accuracy across various applications. To address this, this article proposes learning the parameters of structured functions as filterbanks within complex-valued (CV) neural network architectures. The initial layer of the proposed architecture features CV parameterized learnable filters (PLFs) that operate directly on the raw data and generate frequency-related features based on the structured function of the filter. This work presents four different PLFs, based on Sinc, Gaussian, Gammatone, and Ricker functions, which realize different types of frequency-domain bandpass filtering and demonstrate their effectiveness in classifying RF data directly from raw IQ radar samples. Learning structured filters also enhances the interpretability and understanding of the network. The proposed approach was tested on both experimental and synthetic datasets for sign and modulation recognition. The PLF-based models achieved an average 47% improvement in classification accuracy compared with a 1-D convolutional neural network (CNN) on raw RF data, and an average 7% improvement over CNNs with real-valued learnable filters on the experimental dataset. It also matched the accuracy of a 2-D CNN applied to micro-Doppler ($\mu$
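The idea of a Sinc parameterized learnable filter can be sketched as follows: instead of learning every FIR tap freely, the layer generates a bandpass kernel from just two learnable cutoff frequencies, as the difference of two windowed sinc low-pass responses (the SincNet construction). This is a real-valued illustration under assumed cutoffs and kernel length, not the paper's complex-valued layer.

```python
import numpy as np

def sinc_bandpass(f_low, f_high, kernel_size=65, fs=1.0):
    """Bandpass FIR kernel = difference of two windowed sinc low-pass filters.

    f_low and f_high are normalized cutoffs in (0, fs/2); in a network
    they would be the only learnable parameters of this filter, updated
    by backpropagation, so the kernel always stays a valid bandpass shape.
    """
    n = np.arange(kernel_size) - (kernel_size - 1) / 2.0
    # Ideal low-pass impulse response 2f*sinc(2fn); np.sinc includes the pi.
    lp_high = 2 * f_high / fs * np.sinc(2 * f_high * n / fs)
    lp_low = 2 * f_low / fs * np.sinc(2 * f_low * n / fs)
    return (lp_high - lp_low) * np.hamming(kernel_size)  # windowed bandpass

# Assumed cutoffs: passband from 0.05 to 0.15 (normalized frequency).
h = sinc_bandpass(f_low=0.05, f_high=0.15)

# Frequency-response check: passband gain should dominate the stopband.
H = np.abs(np.fft.rfft(h, 1024))
freqs = np.fft.rfftfreq(1024, d=1.0)
passband = H[(freqs > 0.05) & (freqs < 0.15)].mean()
stopband = H[freqs > 0.3].mean()
```

Because the kernel is fully determined by two physically meaningful parameters, inspecting the learned cutoffs directly shows which frequency bands the network relies on, which is the interpretability benefit the abstract refers to.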