Pub Date : 2024-12-27  DOI: 10.1109/TRS.2024.3523589
Huimin Liu;Jiawang Li;Zhang-Cheng Hao;Yun Hu;Gang Xu;Wei Hong
This article proposes a scatter-suppression L-shaped phased-array imaging radar. The system operates at 24–26.4 GHz and is capable of 4-D imaging to determine the distance, elevation, azimuth, and speed of targets. It utilizes a frequency-modulated continuous-wave (FMCW) signal with a bandwidth of 2.4 GHz to extract range information, resulting in a range resolution of 62.5 mm. Orthogonal L-shaped linearly phased arrays are used for both transmission and reception. The azimuth and elevation angle information is obtained by switching the radiation beams of the phased arrays. The radar exhibits good scanning capabilities in 2-D space, with a scanning field of view (FOV) over 100° and an angular resolution of 3°. Importantly, the imaging artifacts due to multiple diffuse reflections can be suppressed by switching the transmit and receive phased-array antennas. A prototype is manufactured using printed-circuit-board technology, with a compact size of 23.5 × 23.5 cm². Experimental validation of the design has been conducted. The proposed radar architecture and array layout reduce the complexity of the baseband, offering advantages such as easy implementation, high integration, and low cost, and showing promise for practical sensing applications.
{"title":"A Planar Millimeter-Wave Diffuse-Reflection Suppression 4-D Imaging Radar Using L-Shaped Switchable Linearly Phased Array","authors":"Huimin Liu;Jiawang Li;Zhang-Cheng Hao;Yun Hu;Gang Xu;Wei Hong","doi":"10.1109/TRS.2024.3523589","DOIUrl":"https://doi.org/10.1109/TRS.2024.3523589","url":null,"abstract":"This article proposes a scatter suppression L-shaped phased-array imaging radar. The system operates at 24–26.4 GHz and is capable of 4-D imaging to determine the distance, elevation, azimuth, and speed of targets. It utilizes a frequency-modulated continuous-wave (FMCW) signal with a bandwidth of 2.4 GHz to extract range information, resulting in a range resolution of 62.5 mm. Orthogonal L-shaped linearly phased arrays are used for both transmission and reception. The azimuth and elevation angle information are obtained by switching the radiation beams of the phased arrays. The radar exhibits good scanning capabilities in 2-D space, with a scanning field of view (FOV) over 100° and an angular resolution of 3°. Importantly, the imaging artifacts due to multiple diffuse reflections can be suppressed by switching the transmit and receive phased-array antennas. A prototype is manufactured using the printed circuit board technology, which has a compact size of <inline-formula> <tex-math>$23.5times 23.5$ </tex-math></inline-formula> cm2. Experimental validation of the design has been conducted. 
The proposed radar architecture and array layout reduce the complexity of the baseband, offering advantages such as easy implementation, high integration, and low cost, showing promising prospects for potential sensing applications.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"155-168"},"PeriodicalIF":0.0,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
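As a quick sanity check of the range resolution quoted in the abstract above (62.5 mm from a 2.4 GHz sweep), the standard FMCW relation ΔR = c/(2B) can be evaluated directly. The helper name below is ours, not the paper's:

```python
# Back-of-the-envelope check of the quoted FMCW range resolution:
# delta_R = c / (2 * B), with B the sweep bandwidth.
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    """Theoretical FMCW range resolution for a given sweep bandwidth."""
    return C / (2.0 * bandwidth_hz)

# 2.4 GHz sweep, as used by the 24-26.4 GHz radar described above.
dr = range_resolution_m(2.4e9)
print(f"{dr * 1e3:.1f} mm")  # -> 62.5 mm, matching the abstract
```

Halving the bandwidth doubles the resolution cell, which is why the full 2.4 GHz sweep is needed to reach 62.5 mm.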
Pub Date : 2024-12-23  DOI: 10.1109/TRS.2024.3521814
Zhiding Yang;Weimin Huang
This study introduces a novel approach to mitigate the impact of rain on significant wave height (SWH) measurements using X-band marine radar. First, the proposed method uses a transformer-based segmentation model, SegFormer, to divide radar images into four distinct regions: clear wave signatures, rain-contaminated areas, low-backscatter areas, and wind-dominated rain areas. Given that radar wave signatures in rain-contaminated regions are significantly blurred, this segmentation step identifies regions with clear wave signatures, making the subsequent analysis more accurate. Next, an iterative dehazing method, which adaptively enhances image clarity based on the gradient standard deviation (GSD), is applied to achieve optimal dehazing. Finally, the segmented and dehazed polar radar images are transformed into Cartesian coordinates, where subimages from valid regions are selected for SWH estimation using the SWHFormer model. The radar dataset used for testing was collected from a shipborne Decca radar in a sea area 300 km from Halifax, Canada, in 2008. The SegFormer model demonstrates superior segmentation performance, with a 1.3% improvement in accuracy compared with the SegNet-based method. In addition, the iterative dehazing method significantly reduces haze effects in heavily contaminated images, outperforming traditional one-time dehazing methods in both precision and robustness for SWH estimation. Results show that the combination of segmentation and iterative dehazing reduces the root mean square deviation (RMSD) of SWH estimation from 0.42 and 0.33 m to 0.28 m, relative to the existing support vector regression (SVR)-based and convolutional gated recurrent unit (CGRU)-based methods, respectively, and improves the correlation coefficient (CC) to 0.96. These advancements underscore the potential of integrating segmentation and adaptive dehazing for enhanced radar-based ocean monitoring under challenging meteorological conditions.
{"title":"Wave Height Estimation From Radar Images Under Rainy Conditions Based on Context-Aware Segmentation and Iterative Dehazing","authors":"Zhiding Yang;Weimin Huang","doi":"10.1109/TRS.2024.3521814","DOIUrl":"https://doi.org/10.1109/TRS.2024.3521814","url":null,"abstract":"This study introduces a novel approach to mitigate the impact of rain on significant wave height (SWH) measurements using X-band marine radar. First, the proposed method uses a transformer-based segmentation model, SegFormer, to divide radar images into four distinct regions: clear wave signatures, rain-contaminated areas, low backscatter areas, and wind-dominated rain areas. Given that radar wave signatures in rain-contaminated regions are significantly blurred, this segmentation step identifies regions with clear wave signatures, ensuring subsequent analysis to be more accurate. Next, an iterative dehazing method, which adaptively enhances image clarity based on gradient standard deviation (GSD), is applied to achieve optimal dehazing effects. Finally, the segmented and dehazed polar radar images are transformed into the Cartesian coordinates, where subimages from valid regions are selected for SWH estimation using the SWHFormer model. The radar dataset used for test was collected from a shipborne Decca radar in a sea area 300 km from Halifax, Canada, in 2008. The SegFormer model demonstrates superior segmentation performance, with 1.3% improvement in accuracy compared with the SegNet-based method. Besides, the iterative dehazing method significantly reduces haze effects in heavily contaminated images, outperforming traditional one-time dehazing methods in both precision and robustness for SWH estimation. 
Results show that the combination of segmentation and iterative dehazing reduces the root mean square deviation (RMSD) of SWH estimation from 0.42 and 0.33 to 0.28 m, compared with the existing support vector regression (SVR)-based and convolutional gated recurrent unit (CGRU)-based methods, and improves the correlation coefficient (CC) to 0.96. These advancements underscore the potential of integrating segmentation and adaptive dehazing for enhanced radar-based ocean monitoring under challenging meteorological conditions.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"101-114"},"PeriodicalIF":0.0,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
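The two figures of merit quoted above, RMSD and CC, are standard and easy to reproduce. The sketch below shows how they are computed; the SWH arrays are made-up stand-ins, not the paper's data:

```python
import numpy as np

def rmsd(est: np.ndarray, ref: np.ndarray) -> float:
    """Root mean square deviation between estimates and reference."""
    return float(np.sqrt(np.mean((est - ref) ** 2)))

def cc(est: np.ndarray, ref: np.ndarray) -> float:
    """Pearson correlation coefficient between estimates and reference."""
    return float(np.corrcoef(est, ref)[0, 1])

ref = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # reference SWH, m (illustrative)
est = np.array([1.1, 1.8, 3.2, 3.9, 5.3])   # radar-estimated SWH, m (illustrative)
print(round(rmsd(est, ref), 3), round(cc(est, ref), 3))
```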
Pub Date : 2024-12-20  DOI: 10.1109/TRS.2024.3520733
{"title":"2024 Index IEEE Transactions on Radar Systems Vol. 2","authors":"","doi":"10.1109/TRS.2024.3520733","DOIUrl":"https://doi.org/10.1109/TRS.2024.3520733","url":null,"abstract":"","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"2 ","pages":"1229-1250"},"PeriodicalIF":0.0,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10811761","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142859244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-17  DOI: 10.1109/TRS.2024.3518842
Daniel White;Mohammed Jahangir;Amit Kumar Mishra;Chris J. Baker;Michail Antoniou
Deep learning with convolutional neural networks (CNNs) has been widely utilized in radar research concerning automatic target recognition. Maximizing numerical metrics to gauge the performance of such algorithms does not necessarily correspond to model robustness against untested targets, nor does it lead to improved model interpretability. Approaches designed to explain the mechanisms behind the operation of a classifier on radar data are proliferating, but bring with them a significant computational and analysis overhead. This work uses an elementary unsupervised convolutional autoencoder (CAE) to learn a compressed representation of a challenging dataset of urban bird and drone targets, and then examines whether the quality of that representation, measured by how well class labels are preserved in the latent space, predicts classification performance after a separate supervised training stage. It is shown that a CAE that reduces the number of features output after each layer of the encoder gives rise to the best drone-versus-bird classifier. A clear connection between unsupervised evaluation via label preservation in the latent space and subsequent classification accuracy after supervised fine-tuning is shown, supporting further efforts to optimize radar data latent representations to enable optimal performance and model interpretability.
{"title":"Latent Variable and Classification Performance Analysis of Bird–Drone Spectrograms With Elementary Autoencoder","authors":"Daniel White;Mohammed Jahangir;Amit Kumar Mishra;Chris J. Baker;Michail Antoniou","doi":"10.1109/TRS.2024.3518842","DOIUrl":"https://doi.org/10.1109/TRS.2024.3518842","url":null,"abstract":"Deep learning with convolutional neural networks (CNNs) has been widely utilized in radar research concerning automatic target recognition. Maximizing numerical metrics to gauge the performance of such algorithms does not necessarily correspond to model robustness against untested targets, nor does it lead to improved model interpretability. Approaches designed to explain the mechanisms behind the operation of a classifier on radar data are proliferating, but bring with them a significant computational and analysis overhead. This work uses an elementary unsupervised convolutional autoencoder (CAE) to learn a compressed representation of a challenging dataset of urban bird and drone targets, and subsequently if apparent, the quality of the representation via preservation of class labels leads to better classification performance after a separate supervised training stage. It is shown that a CAE that reduces the features output after each layer of the encoder gives rise to the best drone versus bird classifier. 
A clear connection between unsupervised evaluation via label preservation in the latent space and subsequent classification accuracy after supervised fine-tuning is shown, supporting further efforts to optimize radar data latent representations to enable optimal performance and model interpretability.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"115-123"},"PeriodicalIF":0.0,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142976112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
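One simple way to score "label preservation in the latent space," in the spirit of the abstract above, is nearest-neighbor label agreement: how often a sample's closest latent neighbor shares its class. The data and dimensions below are illustrative, not the paper's bird/drone spectrograms:

```python
import numpy as np

def nn_label_agreement(z: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose nearest latent neighbor has the same label."""
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-matches
    nn = d.argmin(axis=1)                # index of each sample's 1-NN
    return float(np.mean(labels[nn] == labels))

rng = np.random.default_rng(0)
z0 = rng.normal(0.0, 0.3, size=(50, 8))   # one well-separated latent cluster
z1 = rng.normal(3.0, 0.3, size=(50, 8))   # the other class's cluster
z = np.vstack([z0, z1])
y = np.array([0] * 50 + [1] * 50)
print(nn_label_agreement(z, y))   # well-separated clusters -> 1.0
```

A latent space scoring near 1.0 on such a metric is one the abstract's argument predicts should fine-tune into a strong classifier.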
Pub Date : 2024-12-17  DOI: 10.1109/TRS.2024.3518954
Brian W. Rybicki;Jill K. Nelson
A cognitive tracking radar continuously acquires, stores, and exploits knowledge from its target environment in order to improve kinematic tracking performance. In this work, we apply a reinforcement learning (RL) technique, API-DNN, based on approximate policy iteration (API) with a deep neural network (DNN) policy to cognitive radar tracking. API-DNN iteratively improves upon an initial base policy using repeated application of rollout and supervised learning. This approach can appropriately balance online versus offline computation in order to improve efficiency and can adapt to changes in problem specification through online replanning. Prior state-of-the-art cognitive radar tracking approaches either rely on sophisticated search procedures with heuristics and carefully selected hyperparameters or on deep RL (DRL) agents based on exotic DNN architectures with poorly understood performance guarantees. API-DNN, instead, is based on well-known principles of rollout, Monte Carlo simulation, and basic DNN function approximation. We demonstrate the effectiveness of API-DNN in cognitive radar simulations based on a standard maneuvering target tracking benchmark scenario.
{"title":"Train Offline, Refine Online: Improving Cognitive Tracking Radar Performance With Approximate Policy Iteration and Deep Neural Networks","authors":"Brian W. Rybicki;Jill K. Nelson","doi":"10.1109/TRS.2024.3518954","DOIUrl":"https://doi.org/10.1109/TRS.2024.3518954","url":null,"abstract":"A cognitive tracking radar continuously acquires, stores, and exploits knowledge from its target environment in order to improve kinematic tracking performance. In this work, we apply a reinforcement learning (RL) technique, API-DNN, based on approximate policy iteration (API) with a deep neural network (DNN) policy to cognitive radar tracking. API-DNN iteratively improves upon an initial base policy using repeated application of rollout and supervised learning. This approach can appropriately balance online versus offline computation in order to improve efficiency and can adapt to changes in problem specification through online replanning. Prior state-of-the-art cognitive radar tracking approaches either rely on sophisticated search procedures with heuristics and carefully selected hyperparameters or deep RL (DRL) agents based on exotic DNN architectures with poorly understood performance guarantees. API-DNN, instead, is based on well-known principles of rollout, Monte Carlo simulation, and basic DNN function approximation. We demonstrate the effectiveness of API-DNN in cognitive radar simulations based on a standard maneuvering target tracking benchmark scenario. 
We also show how API-DNN can implement online replanning with updated target information.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"57-70"},"PeriodicalIF":0.0,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142905785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
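The rollout step at the heart of approximate policy iteration can be shown on a toy problem: improve a weak base policy by one-step lookahead over actions, then follow the base policy to termination. The line-world states, actions, and costs below are invented for the sketch and are unrelated to the paper's radar tracking MDP:

```python
# Deterministic rollout sketch: one-step lookahead + base-policy completion.
GOAL = 5
ACTIONS = {"slow": (1, 1.0), "jump": (2, 1.5)}  # action -> (advance, cost)

def step(s, a):
    adv, cost = ACTIONS[a]
    return min(GOAL, s + adv), cost

def run(s, policy):
    """Total cost of following `policy` from state s until the goal."""
    total = 0.0
    while s != GOAL:
        s, c = step(s, policy(s))
        total += c
    return total

def base_policy(s):
    return "slow"            # weak heuristic: always take the cheap unit step

def rollout_policy(s):
    # Try each first action, then commit to the base policy (the rollout).
    def q(a):
        s2, c = step(s, a)
        return c + (0.0 if s2 == GOAL else run(s2, base_policy))
    return min(ACTIONS, key=q)

print(run(0, base_policy), run(0, rollout_policy))  # rollout is cheaper
```

The improvement property of rollout guarantees the lookahead policy costs no more than the base policy it simulates; here it finds that two "jump" moves beat five "slow" ones.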
Pub Date : 2024-12-17  DOI: 10.1109/TRS.2024.3519138
Evert I. Pocoma Copa;Hasan Can Yildirim;Jean-François Determe;François Horlin
Synthetic generation of radar signals is an attractive solution to alleviate the lack of standardized datasets containing paired radar and human-motion data. Unfortunately, current approaches in the literature, such as SimHumalator, fail to closely resemble real measurements and thus cannot be used alone in data-driven applications that rely on large training sets. Consequently, we propose an empirical signal model that considers the human body as an ensemble of extended targets. Unlike SimHumalator, which uses a single-point scatterer, our approach locates a multiple-point scatterer on each body part. Our method does not rely on 3-D meshes but leverages primitive shapes fitted to each body part, thereby making it possible to take advantage of publicly available motion-capture (MoCap) datasets. By carefully selecting the parameters of the proposed empirical model, we can generate Doppler-time spectrograms (DTSs) that better resemble real measurements, thus reducing the gap between synthetic and real data. Finally, we show the applicability of our approach in two different application use cases that leverage artificial neural networks (ANNs) to address activity classification and skeleton-joint velocity estimation.
{"title":"Synthetic Radar Signal Generator for Human Motion Analysis","authors":"Evert I. Pocoma Copa;Hasan Can Yildirim;Jean-François Determe;François Horlin","doi":"10.1109/TRS.2024.3519138","DOIUrl":"https://doi.org/10.1109/TRS.2024.3519138","url":null,"abstract":"Synthetic generation of radar signals is an attractive solution to alleviate the lack of standardized datasets containing paired radar and human-motion data. Unfortunately, current approaches in the literature, such as SimHumalator, fail to closely resemble real measurements and thus cannot be used alone in data-driven applications that rely on large training sets. Consequently, we propose an empirical signal model that considers the human body as an ensemble of extended targets. Unlike SimHumalator, which uses a single-point scatterer, our approach locates a multiple-point scatterer on each body part. Our method does not rely on 3-D-meshes but leverages primitive shapes fit to each body part, thereby making it possible to take advantage of publicly available motion-capture (MoCap) datasets. By carefully selecting the parameters of the proposed empirical model, we can generate Doppler-time spectrograms (DTSs) that better resemble real measurements, thus reducing the gap between synthetic and real data. 
Finally, we show the applicability of our approach in two different application use cases that leverage artificial neural networks (ANNs) to address activity classification and skeleton-joint velocity estimation.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"88-100"},"PeriodicalIF":0.0,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142905825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
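The multiple-point-scatterer idea above can be sketched minimally: the return from one body part is the coherent sum of several point scatterers, each contributing a phase history exp(−j·4πr(t)/λ). The 24 GHz carrier, scatterer offsets, and limb motion below are our own illustrative choices, not the paper's model parameters:

```python
import numpy as np

WAVELEN = 3e8 / 24e9                     # ~12.5 mm wavelength (assumed carrier)

def echo(ranges_m: np.ndarray) -> np.ndarray:
    """Coherent sum over scatterers; ranges_m has shape (scatterers, samples)."""
    return np.exp(-1j * 4 * np.pi * ranges_m / WAVELEN).sum(axis=0)

fs, T = 1000.0, 0.5                      # slow-time sampling rate and dwell
t = np.arange(0, T, 1 / fs)
v = 0.5                                  # radial velocity of the "limb", m/s
# Three scatterers on one body part: same motion, small fixed range offsets.
r = 5.0 + v * t + np.array([[0.0], [0.01], [0.02]])
sig = echo(r)

# The Doppler peak should sit at f_d = 2*v/lambda = 80 Hz.
spec = np.abs(np.fft.fft(sig))
freqs = np.fft.fftfreq(t.size, 1 / fs)
f_peak = abs(freqs[np.argmax(spec)])
print(f_peak)
```

Stacking such phase histories over many body parts, with shapes and trajectories taken from MoCap data, is what builds up a full Doppler-time spectrogram.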
Pub Date : 2024-12-13  DOI: 10.1109/TRS.2024.3516745
Hai Li;Yu Xiong;Boxin Zhang;Zihua Wu
Modeling nonspherical precipitation targets and calculating their scattering properties are key for simulating dual-polarization weather radar echoes and remote sensing. The invariant imbedding T-matrix (IITM) method is the most promising approach, owing to its accuracy and practicality in computing the scattering of nonspherical precipitation targets. However, accurate echo simulation requires repeated calculations of the scattering amplitude matrices for precipitation targets at various diameters, involving iterative computations, which leads to significant memory usage and long computation times when using the IITM. Hence, enhancing the computational efficiency of the IITM in simulations of nonspherical precipitation targets in dual-polarization weather radars is urgent. This article improves upon the traditional method of using ellipsoids for modeling precipitation targets by precisely considering particle shapes, employing various nonspherical particles, and dividing these targets into an inscribed homogeneous domain and an extended heterogeneous domain. For the homogeneous domain, the logarithmic-derivative Mie scattering method is used to improve computational efficiency, while the heterogeneous domain utilizes conventional iterative methods, rotational symmetry fast algorithms, and N-fold symmetry fast algorithms. The computed scattering amplitude matrices are integrated with the weather radar equation and pulse covariance matrix to complete echo simulations.
{"title":"Simulation of Precipitation Echoes From Airborne Dual-Polarization Weather Radar Based on a Fast Algorithm for Invariant Imbedding T-Matrix","authors":"Hai Li;Yu Xiong;Boxin Zhang;Zihua Wu","doi":"10.1109/TRS.2024.3516745","DOIUrl":"https://doi.org/10.1109/TRS.2024.3516745","url":null,"abstract":"Modeling nonspherical precipitation targets and calculating their scattering properties are key for simulating dual-polarization weather radar echoes and remote sensing. The invariant imbedding T-matrix (IITM) method, due to its accuracy and practicality in computing nonspherical precipitation targets, is the most promising approach. However, accurate echo simulation requires repeated calculations of the scattering amplitude matrices for precipitation targets at various diameters, involving iterative computations, which leads to significant memory usage and long computation times when using the IITM. Hence, enhancing the computational efficiency of the IITM in simulations of nonspherical precipitation targets in dual-polarization weather radars is urgent. This article improves upon the traditional method of using ellipsoids for modeling precipitation targets by precisely considering particle shapes, employing various nonspherical particles, and dividing these targets into an inscribed homogeneous domain and an extended heterogeneous domain. For the homogeneous domain, the logarithmic-derivative Mie scattering method is used to improve computational efficiency, while the heterogeneous domain utilizes conventional iterative methods, rotational symmetry fast algorithms, and N-fold symmetry fast algorithms. The computed scattering amplitude matrices are integrated with the weather radar equation and pulse covariance matrix to complete echo simulations. 
Analyzing the computational results from individual particles and overall calculations, experiments show that fast algorithms can increase the computational efficiency of simulating various nonspherical precipitation targets in airborne dual-polarization weather radars by more than tenfold.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"135-154"},"PeriodicalIF":0.0,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
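A generic intuition for why symmetry-based fast algorithms pay off: for a particle with discrete rotational symmetry, the scattering equations decouple into independent azimuthal blocks, so one large dense solve becomes several small ones. The sketch below is plain linear algebra with arbitrary sizes, not the IITM itself:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 50                      # 4 decoupled blocks of size 50 (illustrative)
blocks = [rng.normal(size=(n, n)) + n * np.eye(n) for _ in range(m)]
rhs = [rng.normal(size=n) for _ in range(m)]

# Structure-ignoring solve on the full (m*n) x (m*n) system: O((m*n)^3).
A = np.zeros((m * n, m * n))
for k, B in enumerate(blocks):
    A[k * n:(k + 1) * n, k * n:(k + 1) * n] = B
x_full = np.linalg.solve(A, np.concatenate(rhs))

# Symmetry-aware solve: m independent O(n^3) solves, a ~m^2 operation saving.
x_blocks = np.concatenate([np.linalg.solve(B, b) for B, b in zip(blocks, rhs)])

print(np.allclose(x_full, x_blocks))  # same answer from far less work
```

The tenfold-plus speedups reported above come from exploiting exactly this kind of decoupling (rotational and N-fold symmetry) inside the iterative IITM computations.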
Pub Date : 2024-12-12  DOI: 10.1109/TRS.2024.3516413
Wending Li;Zhihuo Xu;Liu Chu;Quan Shi;Robin Braun;Jiajia Shi
The state of drowsiness significantly affects work efficiency and productivity, increasing the risk of accidents and mishaps. Radar-based detection technology offers significant advantages in drowsiness detection, providing a noninvasive and reliable method based on vital sign tracking and physiological feature extraction. However, existing classifications of sleepiness levels are often coarse, and the detection accuracy is limited. This study proposes a frequency-modulated continuous-wave (FMCW) radar-based system with a convolutional adaptive pooling attention gated-recurrent-unit (CAPA-GRU) network to enhance detection accuracy and precisely determine drowsiness levels. First, an FMCW radar is used to obtain breathing and heartbeat signals, and the radar signals are processed with the wavelet transform to obtain highly accurate physiological characteristics. Then, the vital sign signals are analyzed in both the time and frequency domains, and the optimal input data are obtained by combining the characteristic data. In addition, the CAPA-GRU, comprising a convolutional neural network (CNN), a gated recurrent unit (GRU), and a convolutional adaptive average pooling (CAA) module, is proposed for drowsiness classification and monitoring. The experimental results show that the proposed method achieves multistage sleepiness detection based on FMCW radar, with the best results when the number of sleepiness levels is small. The proposed network performs well and shows a degree of robustness. Experiments conducted with cross-validation on a self-collected dataset show that the proposed method achieves 90.11% accuracy in binary classification, 80.50% in ternary classification, and 58.17% in quinary classification. On a public sleepiness-detection dataset, the detection accuracy reaches 97.34%.
{"title":"FMCW Radar-Based Drowsiness Detection With a Convolutional Adaptive Pooling Attention Gated-Recurrent-Unit Network","authors":"Wending Li;Zhihuo Xu;Liu Chu;Quan Shi;Robin Braun;Jiajia Shi","doi":"10.1109/TRS.2024.3516413","DOIUrl":"https://doi.org/10.1109/TRS.2024.3516413","url":null,"abstract":"The state of drowsiness significantly affects work efficiency and productivity, increasing the risk of accidents and mishaps. Radar-based detection technology offers significant advantages in drowsiness detection, providing a noninvasive and reliable method based on vital sign tracking and physiological feature extraction. However, the classification of sleepiness levels is often simple and the detection accuracy is limited. This study proposes a frequency-modulated continuous-wave (FMCW) radar-based system with a convolutional adaptive pooling attention gated-recurrent-unit (CAPA-GRU) network to enhance detection accuracy and precisely determine levels of radar-based drowsiness detection. First, an FMCW radar is used to obtain breathing and heartbeat signals, and the radar signals are processed through the wavelet transform method to obtain highly accurate physiological characteristics. Then, the vital sign signals are analyzed both in the time and frequency domains, and the optimal input data is obtained by combining the characteristic data. Also, the CAPA-GRU, comprising a convolutional neural network (CNN), a gated-recurrent-unit (GRU), and a convolutional adaptive average pooling (CAA) module, is proposed for drowsiness classification and monitoring. The experimental results show that the proposed method achieves multistage sleepiness detection based on FMCW radar and achieves excellent results in low classification. The proposed network has excellent performance and certain robustness. 
Experiments conducted with cross-validation on a self-collected dataset show that the proposed method achieved 90.11% accuracy in binary classification, 80.50% accuracy in ternary classification, and 58.17% accuracy in quinary classification and the study also used a public data set for sleepiness detection, and the detection accuracy reached 97.34%.","PeriodicalId":100645,"journal":{"name":"IEEE Transactions on Radar Systems","volume":"3 ","pages":"71-87"},"PeriodicalIF":0.0,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142905824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
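The vital-sign step described above separates a slow breathing component from a faster, weaker heartbeat component in the chest-displacement signal. The paper uses a wavelet transform; as a hedged stand-in, a plain FFT band split on a synthetic two-tone displacement shows the same separability (band limits and tone parameters below are typical assumed values, not the paper's):

```python
import numpy as np

fs, T = 50.0, 60.0
t = np.arange(0, T, 1 / fs)
# Synthetic chest displacement: strong 0.25 Hz breathing + weak 1.2 Hz heartbeat.
disp = 4.0 * np.sin(2 * np.pi * 0.25 * t) + 0.5 * np.sin(2 * np.pi * 1.2 * t)

spec = np.abs(np.fft.rfft(disp))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def peak_in(lo, hi):
    """Frequency of the strongest spectral line inside [lo, hi) Hz."""
    band = (freqs >= lo) & (freqs < hi)
    return freqs[band][np.argmax(spec[band])]

breath_hz = peak_in(0.1, 0.6)     # typical breathing band (assumed limits)
heart_hz = peak_in(0.8, 2.0)      # typical heartbeat band (assumed limits)
print(breath_hz * 60, heart_hz * 60)   # breaths and beats per minute
```

Features extracted from these two bands (rates, amplitudes, variability) are the kind of time- and frequency-domain inputs a drowsiness classifier like CAPA-GRU consumes.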
Pub Date : 2024-12-11  DOI: 10.1109/TRS.2024.3514840
Sabri Mustafa Kahya;Muhammet Sami Yavuz;Eckehard Steinbach
Detecting human presence indoors with millimeter-wave frequency-modulated continuous-wave (FMCW) radar faces challenges from both moving and stationary clutter. This work proposes a robust, real-time-capable human presence and out-of-distribution (OOD) detection method, HOOD, using a 60-GHz short-range FMCW radar. HOOD solves the human presence and OOD detection problems simultaneously in a single pipeline. Our solution relies on a reconstruction-based architecture and works with radar macro- and micro-range-Doppler images (RDIs). HOOD aims to accurately detect the presence of humans in the presence or absence of moving and stationary disturbers. Since HOOD is also an OOD detector, it aims to detect moving or stationary clutter as OOD in humans' absence and predicts the current scene's output as "no presence." HOOD performs well in diverse scenarios, demonstrating its effectiveness across different human activities and situations. On our dataset, collected with a 60-GHz short-range FMCW radar with only one transmit (Tx) and three receive antennas, we achieved an average area under the receiver operating characteristic curve (AUROC) of 94.36%. Additionally, our extensive evaluations and experiments demonstrate that HOOD outperforms state-of-the-art (SOTA) OOD detection methods in terms of common OOD detection metrics. Importantly, HOOD also runs comfortably on a Raspberry Pi 3B+ with an Advanced RISC Machines (ARM) Cortex-A53 CPU, which showcases its versatility across different hardware environments. Videos of our human presence detection experiments are available at: https://muskahya.github.io/HOOD
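The headline metric quoted above, AUROC, has a simple rank-based definition: the probability that a randomly chosen positive sample scores higher than a randomly chosen negative one. The scores and labels below are synthetic; this only shows how the number is computed:

```python
import numpy as np

def auroc(scores: np.ndarray, labels: np.ndarray) -> float:
    """Rank-based AUROC: P(random positive outranks random negative). Ties not handled."""
    order = scores.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, scores.size + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.5, 0.2, 0.1])
print(auroc(scores, labels))  # one positive-negative pair is mis-ranked -> 8/9
```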