Reinforcement Learning based Integrated Sensing and Communication for Automotive MIMO Radar
Pub Date: 2023-05-01 | DOI: 10.1109/RadarConf2351548.2023.10149653
Weitong Zhai, Xiangrong Wang, M. Greco, F. Gini
Integrated sensing and communication (ISAC) is a promising technique in vehicular transportation thanks to its substantial gains in size, cost, power consumption, electromagnetic compatibility and spectrum congestion. In this paper, we propose a reinforcement learning (RL) based ISAC system with a multi-input-multi-output (MIMO) automotive radar. Target sensing and downlink communication are performed separately by dividing the transmit antennas into two non-overlapping but interweaving subarrays. We first design an RL framework to adaptively allocate the proper number of transmit antennas to the two subarrays in an unknown environment. The training is driven by the Cramér-Rao bound (CRB) of direction-of-arrival (DOA) estimation for sensing and the receive signal-to-noise ratio (SNR) for communication, respectively. We then propose a co-design method that jointly optimizes the configurations of the two subarrays to further enhance the sensing accuracy under a constrained communication quality. The resulting problem is converted into convex form via convex relaxation. Simulations demonstrate the adaptability and effectiveness of the proposed RL-based ISAC system in an unknown environment.
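As a rough illustration of the allocation idea described above, the sketch below runs an ε-greedy bandit over the number of sensing antennas, with a stand-in reward in which a CRB-like term shrinks with the sensing subarray size and the remaining antennas must meet a communication SNR constraint. The reward model, antenna count and threshold are assumptions for illustration, not the paper's metrics or training scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 16                      # total transmit antennas (assumed)
snr_comm_min = 10.0         # required downlink SNR, linear scale (assumed)
eps = 0.1                   # exploration rate
Q = np.zeros(M - 1)         # action values: action a -> (a + 1) sensing antennas
counts = np.zeros(M - 1)

def reward(m_sense):
    """Toy stand-in reward: sensing antennas shrink a CRB-like term, the remaining
    antennas provide communication array gain in an unknown, noisy channel."""
    m_comm = M - m_sense
    snr_comm = m_comm * (2.0 + rng.normal(scale=0.3))   # unknown per-antenna channel gain
    crb_like = 1.0 / m_sense ** 3                       # DOA CRB roughly ~ 1 / m^3 for a ULA
    return -crb_like if snr_comm >= snr_comm_min else -10.0  # penalize constraint violation

for t in range(2000):
    a = rng.integers(M - 1) if rng.random() < eps else int(np.argmax(Q))
    r = reward(a + 1)
    counts[a] += 1
    Q[a] += (r - Q[a]) / counts[a]      # incremental sample-mean update

print("learned sensing-antenna count:", int(np.argmax(Q)) + 1)
```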
{"title":"Reinforcement Learning based Integrated Sensing and Communication for Automotive MIMO Radar","authors":"Weitong Zhai, Xiangrong Wang, M. Greco, F. Gini","doi":"10.1109/RadarConf2351548.2023.10149653","DOIUrl":"https://doi.org/10.1109/RadarConf2351548.2023.10149653","url":null,"abstract":"Integrated sensing and communication (ISAC) is a promising technique in vehicular transportation thanks to its substantial gains in size, cost, power consumption, electromag-netic compatibility and spectrum congestion. In this paper, we propose a reinforcement learning (RL) based ISAC system with a multi-input-multi-output (MIMO) automotive radar. The target sensing and downlink communication are separately performed by dividing the transmit antennas into two non-overlapping but interweaving subarrays. We first design a RL framework to adaptively allocate the proper number of transmit antennas for the two subarrays under any unknown environment. The training is performed in the metrics of Cramer-Rao Bound (CRB) of direction of arrival (DOA) estimation for sensing and receive signal-to-noise (SNR) for communications, respectively. We proceed to propose a co-design method to jointly optimize the configurations of the two subarrays to further enhance the sensing accuracy with a constrained communication quality. The resultant problem is converted into the convex form via convex relaxation. Simulations are provided to demonstrate the adaptability and effectiveness of the proposed RL based ISAC system under the unkown environment.","PeriodicalId":168311,"journal":{"name":"2023 IEEE Radar Conference (RadarConf23)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121202935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiple Change Point Detection-based Target Detection in Clutter
Pub Date: 2023-05-01 | DOI: 10.1109/RadarConf2351548.2023.10149616
B. K. Chalise, Jahi Douglas, K. Wagner
The effectiveness of target detection methods in radar systems depends on how accurately clutter can be characterized. However, clutter statistics vary with the application, so it is difficult to accurately predict these statistics and their parameters. Model-based detection algorithms developed for one clutter scenario will fail to yield satisfactory results in another. In this paper, we propose a completely data-driven multiple change point detection (CPD) method for target detection that does not require knowledge of the underlying clutter distribution. The key concept is to iteratively search for the slow-time instance that maximizes the cumulative sum (CUMSUM) Kolmogorov-Smirnov (KS) statistic. If this statistic exceeds a pre-specified threshold, the slow-time instance is added to the collection of estimated change points. This process continues until all CUMSUM-KS statistics fall below the threshold. Computer simulations demonstrate the effectiveness of the method for different clutter distributions.
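A minimal sketch of this kind of data-driven multiple CPD is given below. It uses SciPy's two-sample KS statistic at each candidate split as a stand-in for the paper's CUMSUM-KS statistic, and the threshold, minimum segment length and Rayleigh test data are assumed values for illustration only.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_change_points(x, threshold=0.3, min_seg=20):
    """Distribution-free multiple change-point detection (sketch): the split maximizing a
    two-sample KS statistic is kept if it exceeds the threshold, then both halves are
    searched recursively until no statistic exceeds the threshold."""
    change_points = []

    def search(lo, hi):
        if hi - lo <= 2 * min_seg:
            return
        stats = [(ks_2samp(x[lo:k], x[k:hi]).statistic, k)
                 for k in range(lo + min_seg, hi - min_seg)]
        best_stat, best_k = max(stats)
        if best_stat > threshold:
            change_points.append(best_k)
            search(lo, best_k)
            search(best_k, hi)

    search(0, len(x))
    return sorted(change_points)

# Example: clutter-only amplitude samples with a change in scale partway through.
x = np.concatenate([np.random.rayleigh(1.0, 300), np.random.rayleigh(3.0, 100)])
print(detect_change_points(x))
```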
{"title":"Multiple Change Point Detection-based Target Detection in Clutter","authors":"B. K. Chalise, Jahi Douglas, K. Wagner","doi":"10.1109/RadarConf2351548.2023.10149616","DOIUrl":"https://doi.org/10.1109/RadarConf2351548.2023.10149616","url":null,"abstract":"The effectiveness of target detection methods in radar systems depend on how accurately clutter can be characterized. However, depending on application, clutter statistics vary, and therefore it is difficult to accurately predict such statistics and their parameters. Model-based detection algorithms that are developed for one clutter scenario will fail to yield satisfactory results in another scenario. In this paper, we propose a complete data driven multiple change point detection (CPD) for target detection which does not requires the knowledge of the underlying clutter distribution. The key concept is to iteratively search for slow time instance that maximizes the cumulative sum (CUMSUM) Kolmogorov-Smirnov (KS) statistics. If such statistics exceeds a pre-specified threshold value, then this slow time instance is added to the collection of the estimated change points. This process continues until all CUMSUM-KS statistics are below the threshold. Computer simulations are used to demonstrate the effectiveness of this method for different clutter distributions.","PeriodicalId":168311,"journal":{"name":"2023 IEEE Radar Conference (RadarConf23)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125843689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Angle Accuracy in Radar Target Simulation
Pub Date: 2023-05-01 | DOI: 10.1109/RadarConf2351548.2023.10149775
A. Diewald, Benjamin Nuss, T. Zwick
Radar target simulators (RTSs) have recently drawn much attention in research and commercial development, as they are capable of performing over-the-air validation tests under laboratory conditions by generating virtual radar echoes that are perceived as targets by a radar under test (RuT). The estimated angle of arrival (AoA) of such a virtual target is controlled, among other factors, by the physical position of the RTS channel that generates it. In this contribution the authors investigate the achievable angle accuracy of RTS systems as a function of their channel spacing and calibration. This makes it possible to derive the number of RTS channels required for a given field of view of the RuT and desired angle accuracy. For this purpose, a signal model is developed that incorporates the angular positions of the RTS channels and thereby allows the achievable angle accuracy to be estimated while taking coherence conditions into account. The signal model is verified by a measurement campaign.
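Under the coarse assumption that a virtual target simply appears at the angular position of the nearest RTS channel (ignoring the coherence effects and calibration errors treated in the paper), the worst-case angle error is half the channel spacing, which gives a back-of-the-envelope channel count; the field of view and accuracy values below are placeholders.

```python
import math

def required_rts_channels(fov_deg, accuracy_deg):
    """Channels needed so that nearest-channel angle quantization error <= accuracy
    (coarse geometric estimate, not the paper's signal-model-based result)."""
    spacing = 2.0 * accuracy_deg          # worst-case error is half the channel spacing
    return math.ceil(fov_deg / spacing) + 1

print(required_rts_channels(fov_deg=120.0, accuracy_deg=1.0))   # -> 61
```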
{"title":"Angle Accuracy in Radar Target Simulation","authors":"A. Diewald, Benjamin Nuss, T. Zwick","doi":"10.1109/RadarConf2351548.2023.10149775","DOIUrl":"https://doi.org/10.1109/RadarConf2351548.2023.10149775","url":null,"abstract":"Radar target simulators (RTSs) have recently drawn much attention in research and commercial development, as they are capable of performing over-the-air validation tests under laboratory conditions by generating virtual radar echoes that are perceived as targets by a radar under test (RuT). The estimated angle of arrival (AoA) of such a virtual target is controlled, among others, by the physical position of the respective RTS channel that generates it. In this contribution the authors investigate the achievable angle accuracy of RTS systems in dependence of their channel spacing and calibration. This allows to derive the number of RTS channels required given the field of view of the RuT and the desired angle accuracy. For this purpose, a signal model is developed that incorporates the angular positions of the RTS channels and thereby allows an estimation of the achievable angle accuracy under consideration of coherence conditions. The signal model is verified by a measurement campaign.","PeriodicalId":168311,"journal":{"name":"2023 IEEE Radar Conference (RadarConf23)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124380129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classification of Traffic Signaling Motion in Automotive Applications Using FMCW Radar
Pub Date: 2023-05-01 | DOI: 10.1109/RadarConf2351548.2023.10149728
S. Biswas, Benjamin Bartlett, J. Ball, A. Gurbuz
Advanced driver-assistance systems (ADAS) typically include sensors such as radar, lidar, or cameras to make vehicles aware of their surroundings. These ADAS systems are exposed to a wide variety of traffic situations, such as impending collisions, lane changes, intersections, sudden changes in speed, and other common instances of driving errors. One of the key barriers to automotive autonomy is the inability of self-driving cars to navigate unstructured environments, which typically do not have traffic lights present or operational for directing traffic. In these circumstances, it is much more common for a person to direct vehicles, either by signaling with an appropriate sign or via gestures. Interpreting human body language and gestures in traffic-directing scenarios is a great challenge for autonomous vehicles. In this study, we present a new dataset of traffic signaling motions, based on those used in the US traffic system, collected with millimeter-wave (mmWave) radar, camera, lidar and a motion-capture system. Initial classification results from radar micro-Doppler (µ-D) signature analysis using basic convolutional neural networks (CNNs) demonstrate that deep learning can classify traffic signaling motions in automotive applications very accurately (around 92%).
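A minimal sketch of the kind of basic CNN classifier referred to above is shown below; the layer sizes, input spectrogram shape and class count are assumptions, and it is not the architecture or accuracy reported in the paper.

```python
import torch
import torch.nn as nn

class MicroDopplerCNN(nn.Module):
    """Basic CNN for classifying micro-Doppler spectrograms of traffic-directing gestures
    (illustrative architecture; the number of gesture classes is an assumption)."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):              # x: (batch, 1, doppler_bins, time_frames)
        return self.classifier(self.features(x).flatten(1))

logits = MicroDopplerCNN()(torch.randn(4, 1, 128, 128))   # smoke test on dummy spectrograms
```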
{"title":"Classification of Traffic Signaling Motion in Automotive Applications Using FMCW Radar","authors":"S. Biswas, Benjamin Bartlett, J. Ball, A. Gurbuz","doi":"10.1109/RadarConf2351548.2023.10149728","DOIUrl":"https://doi.org/10.1109/RadarConf2351548.2023.10149728","url":null,"abstract":"Advanced driver-assisted system (ADAS) typically includes sensors such as Radar, Lidar, or Camera to make vehicles aware of their surroundings. These ADAS systems are presented to a wide variety of situations in traffic, such as upcoming collisions, lane changes, intersections, sudden changes in speed, and other common instances of driving errors. One of the key barriers to automotive autonomy is the inability of self-driving cars to navigate unstructured environments, which typically do not have any traffic lights present or operational for directing traffic. In these circumstances, it is much more common for a person to be tasked with directing vehicles, either by signaling with an appropriate sign or via gesturing. The task of interpreting human body language and gestures by autonomous vehicles in traffic directing scenarios is a great challenge. In this study, we present a new dataset collected of traffic signaling motions using millimeter-wave (mmWave) radar, camera, Lidar and motion-capture system. The dataset is based on those utilized in the US traffic system. Initial classification results from Radar microDoppler (µ-D) signature analysis using basic Convolutional Neural Networks (CNN) demonstrates that deep learning can very accurately (around 92%) classify traffic signaling motions in automotive applications.","PeriodicalId":168311,"journal":{"name":"2023 IEEE Radar Conference (RadarConf23)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114936835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Practical Considerations for Optimal Mismatched Filtering of Nonrepeating Waveforms
Pub Date: 2023-05-01 | DOI: 10.1109/RadarConf2351548.2023.10149706
Matthew B. Heintzelman, Jonathan Owen, S. Blunt, Brianna Maio, Erick Steinbach
We consider the intersection between nonrepeating random FM (RFM) waveforms and practical forms of optimal mismatched filtering (MMF). Specifically, the spectrally-shaped inverse filter (SIF) is a well-known approximation to the least-squares MMF (LS-MMF) that provides significant computational savings. Given that nonrepeating waveforms likewise require unique nonrepeating MMFs, this efficient form is an attractive option. Moreover, both RFM waveforms and the SIF rely on spectrum shaping, which establishes a relationship between the goodness of a particular waveform and the mismatch loss (MML) the corresponding filter can achieve. Both simulated and open-air experimental results are presented to demonstrate performance.
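The sketch below illustrates the SIF idea under stated assumptions: the filter is built in the frequency domain as a desired spectral template divided by the waveform spectrum (with a small loading term for numerical stability), so its cost is essentially a pair of FFTs rather than a least-squares solve. The Gaussian template, loading value and toy RFM-like waveform are assumptions, not the paper's specific choices.

```python
import numpy as np

def shaped_inverse_filter(s, eps=1e-3):
    """Spectrally-shaped inverse filter (sketch): H(f) = G(f) conj(S(f)) / (|S(f)|^2 + eps),
    where G(f) is a desired output spectrum shape (Gaussian here) and eps is a loading term."""
    K = 2 * len(s)                                   # filter longer than the waveform
    S = np.fft.fft(s, K)
    f = np.fft.fftfreq(K)                            # normalized frequency, FFT ordering
    G = np.exp(-0.5 * (f / 0.15) ** 2)               # assumed Gaussian spectral template
    H = G * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.ifft(H)

def mismatch_loss_db(h, s):
    """SNR loss of filter h relative to the matched filter for waveform s (0 dB = matched)."""
    s_pad = np.concatenate([s, np.zeros(len(h) - len(s))])
    gain = np.abs(np.vdot(h, s_pad)) ** 2 / (np.vdot(h, h).real * np.vdot(s, s).real)
    return 10 * np.log10(gain)

s = np.exp(1j * np.pi * 0.1 * np.cumsum(np.random.randn(256)))   # toy RFM-like waveform
h = shaped_inverse_filter(s)
print("mismatch loss: %.2f dB" % mismatch_loss_db(h, s))
```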
{"title":"Practical Considerations for Optimal Mismatched Filtering of Nonrepeating Waveforms","authors":"Matthew B. Heintzelman, Jonathan Owen, S. Blunt, Brianna Maio, Erick Steinbach","doi":"10.1109/RadarConf2351548.2023.10149706","DOIUrl":"https://doi.org/10.1109/RadarConf2351548.2023.10149706","url":null,"abstract":"We consider the intersection between nonrepeating random FM (RFM) waveforms and practical forms of optimal mismatched filtering (MMF). Specifically, the spectrally-shaped inverse filter (SIF) is a well-known approximation to the least-squares (LS-MMF) that provides significant computational savings. Given that nonrepeating waveforms likewise require unique nonrepeating MMFs, this efficient form is an attractive option. Moreover, both RFM waveforms and the SIF rely on spectrum shaping, which establishes a relationship between the goodness of a particular waveform and the mismatch loss (MML) the corresponding filter can achieve. Both simulated and open-air experimental results are shown to demonstrate performance.","PeriodicalId":168311,"journal":{"name":"2023 IEEE Radar Conference (RadarConf23)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130612979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compact Parameterization of Nonrepeating FMCW Radar Waveforms
Pub Date: 2023-05-01 | DOI: 10.1109/RadarConf2351548.2023.10149578
Thomas J. Kramer, Erik R. Biehl, Matthew B. Heintzelman, S. Blunt, Erick Steinbach
Spectrally shaped forms of random frequency modulation (RFM) radar waveforms have been experimentally demonstrated for a variety of implementation approaches and applications. Of these, the continuous-wave (CW) perspective is particularly interesting because it enables the prospect of very high signal dimensionality and arbitrary receive processing from a range/Doppler perspective, while also mitigating range ambiguities by avoiding repetition. Here we leverage a modification to the constant-envelope orthogonal frequency division multiplexing (CE-OFDM) framework, which was originally proposed for power-efficient communications, to realize a nonrepeating FMCW radar signal that can be represented with a compact parameterization, thereby circumventing memory constraints that could arise for some applications. Experimental loopback and open-air measurements are used to demonstrate this waveform type.
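A minimal sketch of the constant-envelope phase-modulation idea is given below, assuming a generic CE-OFDM construction rather than the authors' specific modification: one block of the nonrepeating CW waveform is fully described by its random subcarrier coefficients and the modulation index, which is the compact parameterization being exploited. Sample rate, block length, subcarrier count and modulation index are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, T, K, h = 1e6, 1e-3, 64, 0.3          # sample rate, block length, subcarriers, mod. index
t = np.arange(int(fs * T)) / fs

# Compact parameterization: this block of the waveform is fully described by the 2K random
# coefficients (a, b) and h, rather than by its fs*T complex time samples.
a = rng.choice([-1.0, 1.0], K)
b = rng.choice([-1.0, 1.0], K)
m = sum(a[k] * np.cos(2 * np.pi * (k + 1) * t / T) +
        b[k] * np.sin(2 * np.pi * (k + 1) * t / T) for k in range(K))
s = np.exp(1j * 2 * np.pi * h * m)        # constant-envelope CW segment, |s| = 1 everywhere
```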
{"title":"Compact Parameterization of Nonrepeating FMCW Radar Waveforms","authors":"Thomas J. Kramer, Erik R. Biehl, Matthew B. Heintzelman, S. Blunt, Erick Steinbach","doi":"10.1109/RadarConf2351548.2023.10149578","DOIUrl":"https://doi.org/10.1109/RadarConf2351548.2023.10149578","url":null,"abstract":"Spectrally shaped forms of random frequency modulation (RFM) radar waveforms have been experimentally demonstrated for a variety of implementation approaches and applications. Of these, the continuous-wave (CW) perspective is particularly interesting because it enables the prospect of very high signal dimensionality and arbitrary receive processing from a range/Doppler perspective, while also mitigating range ambiguities by avoiding repetition. Here we leverage a modification to the constant-envelope orthogonal frequency division multiplexing (CE-OFDM) framework, which was originally proposed for power-efficient communications, to realize a nonrepeating FMCW radar signal that can be represented with a compact parameterization, thereby circumventing memory constraints that could arise for some applications. Experimental loopback and open-air measurements are used to demonstrate this waveform type.","PeriodicalId":168311,"journal":{"name":"2023 IEEE Radar Conference (RadarConf23)","volume":"19 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130714170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Group-Wise Feature Fusion R-CNN for Dual-Polarization SAR Ship Detection
Pub Date: 2023-05-01 | DOI: 10.1109/RadarConf2351548.2023.10149675
Xiaowo Xu, Xiaoling Zhang, Tianjiao Zeng, Jun Shi, Zikang Shao, Tianwen Zhang
Ship detection in synthetic aperture radar (SAR) images is a hot topic in the remote sensing (RS) field. However, most existing deep learning (DL)-based methods focus only on single-polarization SAR ship detection without leveraging the rich dual-polarization SAR features, which poses a major obstacle to further improving model performance. One open problem is how to fully exploit polarization characteristics using a convolutional neural network (CNN). To address this problem, we propose a novel group-wise feature fusion R-CNN (GWFF R-CNN) for dual-polarization SAR ship detection. Unlike the original Faster R-CNN, GWFF R-CNN embeds a group-wise feature fusion module (GWFF module) into the subnetwork of Faster R-CNN, which enables group-wise fusion between polarization features and multi-scale ship features. Experiments on the dual-polarization SAR ship detection dataset (DSSDD) demonstrate that GWFF R-CNN yields a ~4.1 F1 improvement and a ~2.9 average precision (AP) improvement compared with Faster R-CNN.
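A hypothetical sketch of what a group-wise fusion step between dual-polarization feature maps could look like is given below (PyTorch). The grouped 1x1 convolution, group count and channel interleaving are illustrative assumptions, not the authors' GWFF module.

```python
import torch
import torch.nn as nn

class GroupWiseFusion(nn.Module):
    """Illustrative group-wise fusion of dual-polarization feature maps: channels are split
    into groups and each group sees both polarizations through a grouped 1x1 convolution.
    Requires the channel count to be divisible by the number of groups."""
    def __init__(self, channels, groups=8):
        super().__init__()
        self.groups = groups
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1, groups=groups)
        self.act = nn.ReLU(inplace=True)

    def forward(self, feat_vv, feat_vh):
        b, c, h, w = feat_vv.shape
        g = self.groups
        # Interleave channels group by group so each conv group fuses VV and VH features.
        vv = feat_vv.view(b, g, c // g, h, w)
        vh = feat_vh.view(b, g, c // g, h, w)
        x = torch.cat([vv, vh], dim=2).view(b, 2 * c, h, w)
        return self.act(self.fuse(x))

fused = GroupWiseFusion(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```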
{"title":"Group-Wise Feature Fusion R-CNN for Dual-Polarization SAR Ship Detection","authors":"Xiaowo Xu, Xiaoling Zhang, Tianjiao Zeng, Jun Shi, Zikang Shao, Tianwen Zhang","doi":"10.1109/RadarConf2351548.2023.10149675","DOIUrl":"https://doi.org/10.1109/RadarConf2351548.2023.10149675","url":null,"abstract":"Ship detection in synthetic aperture radar (SAR) images is a hot pot in the remote sensing (RS) field. However, most existing deep learning (DL)-based methods only focus on the single-polarization SAR ship detection without leveraging the rich dual-polarization SAR features, which poses a huge obstacle to the further model performance improvement. One problem for solution is how to fully excavate polarization characteristics using a convolution neural network (CNN). To address the above problem, we propose a novel group-wise feature fusion R-CNN (GWFF R-CNN) for dual-polarization SAR ship detection. Different from raw Faster R-CNN, GWFF R-CNN embeds a group-wise feature fusion module (GWFF module) into the subnetwork of Faster R-CNN, which enables group-wise feature fusion between polarization features and multi-scale ship features. Finally, the experiments on the dual-polarization SAR ship detection dataset (DSSDD) demonstrate that GWFF R-CNN can yield a ~4.1 F1 improvement and a ~2.9 average precision (AP) improvement, compared with Faster R-CNN.","PeriodicalId":168311,"journal":{"name":"2023 IEEE Radar Conference (RadarConf23)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117005727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scanning Radar Scene Reconstruction With Deep Unfolded ISTA Neural Network
Pub Date: 2023-05-01 | DOI: 10.1109/RadarConf2351548.2023.10149792
Juezhu Lai, D. Yuan, Jifang Pei, Deqing Mao, Yin Zhang, Xingyu Tuo, Yulin Huang
Complex scene reconstruction is one of the most critical issues in scanning radar processing. The azimuth echo of a scanning radar can be modeled as the convolution of the scene scattering coefficients with the antenna pattern. The iterative shrinkage-thresholding algorithm (ISTA) has proven effective for target reconstruction in scanning radar, but it often delivers unsatisfactory reconstruction quality in complex scenes. This paper proposes a new learning-based approach, an improved ISTA-based deep unfolding network, to reconstruct scene information from scanning radar echoes. Unlike traditional analysis-based methods, we establish a deep unfolded scene reconstruction network based on the structure of ISTA. This network learns the optimal network parameters from the input radar data, which avoids the manual parameter selection of the traditional method. In addition, we apply a loss function that ensures the effectiveness of the sparse transformation, so that the method can recover target information from scanning radar echoes in various complex scenes. Extensive experiments demonstrate that this method substantially improves scene reconstruction performance.
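A simplified sketch of deep unfolding of ISTA is shown below (a LISTA-style network with per-layer learnable step sizes and thresholds). The real-valued measurement operator, layer count and initialization are assumptions, and the paper's specific loss on the sparse transform is not reproduced; in practice such a network is trained end-to-end, e.g. with an MSE loss against reference scenes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_threshold(x, theta):
    return torch.sign(x) * F.relu(torch.abs(x) - theta)

class UnfoldedISTA(nn.Module):
    """Deep unfolding of ISTA: each layer is one ISTA iteration
    x <- soft(x + step * A^T (y - A x), theta) with learnable step and threshold."""
    def __init__(self, A, num_layers=10):
        super().__init__()
        self.register_buffer("A", A)                          # (m, n) convolution/measurement matrix
        L = float(torch.linalg.matrix_norm(A, ord=2)) ** 2    # Lipschitz constant of the gradient
        self.steps = nn.Parameter(torch.full((num_layers,), 1.0 / L))
        self.thetas = nn.Parameter(torch.full((num_layers,), 1e-2))

    def forward(self, y):                                     # y: (batch, m) azimuth echoes
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for step, theta in zip(self.steps, self.thetas):
            residual = y - x @ self.A.T
            x = soft_threshold(x + step * (residual @ self.A), theta)
        return x                                              # (batch, n) reconstructed scene

A = torch.randn(64, 128) / 8.0                                # placeholder antenna-pattern matrix
scene = UnfoldedISTA(A)(torch.randn(2, 64))                   # smoke test
```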
{"title":"Scanning Radar Scene Reconstruction With Deep Unfolded ISTA Neural Network","authors":"Juezhu Lai, D. Yuan, Jifang Pei, Deqing Mao, Yin Zhang, Xingyu Tuo, Yulin Huang","doi":"10.1109/RadarConf2351548.2023.10149792","DOIUrl":"https://doi.org/10.1109/RadarConf2351548.2023.10149792","url":null,"abstract":"Complex scene reconstruction is one of the most critical issues in scanning radar processing. The azimuth echo of the scanning radar can be equivalent to the convolution result of the scene scattering coefficient and the antenna pattern. Iter-ative shrinkage-thresholding algorithm (ISTA) has been proven effective in the target reconstruction of the scanning radar, but it often performs unsatisfactory reconstruction quality on complex scenes. This paper proposes a new learning-based approach, an improved ISTA-based deep unfolding network, to reconstruct the scene information from the scanning radar echoes. Unlike the traditional analysis-based method, we established a deep unfolded scene reconstruction network based on the structure of ISTA. This network can learn the optimal network parameters through the input radar data, which avoids the manual selection of parameters in the traditional method. Besides, we apply a loss function to ensure the effectiveness of the sparse transformation so that the method can recover target information from scanning radar echoes in various complex scenes. Extensive experiments demonstrate that this method can highly improve scene reconstruction performance.","PeriodicalId":168311,"journal":{"name":"2023 IEEE Radar Conference (RadarConf23)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131257265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cylindrical Distributed Coprime Conformal Array for 2-D DOA and Polarization Estimation
Pub Date: 2023-05-01 | DOI: 10.1109/RadarConf2351548.2023.10149693
Mingcheng Fu, Zhi Zheng, Yizhen Jia, Bang Huang, Wen-qin Wang
In this paper, we devise a novel cylindrical conformal array, termed the cylindrical distributed coprime conformal array (CDCCA), for two-dimensional (2-D) direction-of-arrival (DOA) and polarization estimation. The proposed CDCCA avoids lag redundancies between two adjacent linear subarrays of a cylindrical conformal array and increases the number of unique lags in its difference coarray. Moreover, it provides a larger array aperture than existing cylindrical conformal arrays with the same number of sensors. The CDCCA configuration can therefore resolve a larger number of sources and provide higher estimation accuracy. Numerical results demonstrate its superiority over several existing conformal arrays.
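The coarray idea underlying the design can be illustrated with a generic 1-D coprime array (not the CDCCA geometry itself): the difference coarray collects all pairwise sensor-position differences, and its number of unique lags governs how many sources can be resolved. The coprime pair below is an arbitrary example.

```python
import numpy as np

def difference_coarray(positions):
    """Unique lags in the difference coarray of a sensor array (1-D illustration)."""
    positions = np.asarray(positions)
    diffs = positions[:, None] - positions[None, :]   # all pairwise position differences
    return np.unique(diffs)

# Coprime pair (M, N): one subarray at multiples of N, the other at multiples of M,
# positions in units of half-wavelength.
M, N = 3, 5
sub1 = N * np.arange(M)          # 0, 5, 10
sub2 = M * np.arange(2 * N)      # 0, 3, 6, ..., 27
coprime = np.unique(np.concatenate([sub1, sub2]))
lags = difference_coarray(coprime)
print(len(coprime), "physical sensors ->", len(lags), "unique coarray lags")
```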
{"title":"Cylindrical Distributed Coprime Conformal Array for 2-D DOA and Polarization Estimation","authors":"Mingcheng Fu, Zhi Zheng, Yizhen Jia, Bang Huang, Wen-qin Wang","doi":"10.1109/RadarConf2351548.2023.10149693","DOIUrl":"https://doi.org/10.1109/RadarConf2351548.2023.10149693","url":null,"abstract":"In this paper, we devise a novel cylindrical conformal array, termed cylindrical distributed coprime conformal array (CDCCA), for two-dimensional (2-D) direction-of-arrival (DOA) and polarization estimation. The proposed CDCCA avoids the lag redundancies between two adjacent linear subarrays of cylindrical conformal array, and increases the unique lags number in its difference coarray. Moreover, it provides a larger array aperture than the exiting cylindrical conformal arrays under the same number of sensors. Therefore, the CDCCA configuration can resolve a larger number of sources and provide a higher estimation accuracy. Numerical results demonstrate its superiority in comparison to several existing conformal arrays.","PeriodicalId":168311,"journal":{"name":"2023 IEEE Radar Conference (RadarConf23)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134513392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Waveform Selection for FMCW and PMCW 4D-Imaging Automotive Radar Sensors
Pub Date: 2023-05-01 | DOI: 10.1109/RadarConf2351548.2023.10149733
Nazila Karimian Sichani, Moein Ahmadi, E. Raei, M. Alaee-Kerahroodi, B. M. R., E. Mehrshahi, Seyyed Ali Ghorashi
The emerging 4D-imaging automotive MIMO radar sensors necessitate the selection of appropriate transmit waveforms, which should be separable on the receive side in addition to having low auto-correlation sidelobes. TDM, FDM, DDM, and inter-chirp CDM approaches have traditionally been proposed for FMCW radar sensors to ensure the orthogonality of the transmit signals. However, as the number of transmit antennas increases, each of these approaches suffers from drawbacks, which are described in this paper. PMCW radars, on the other hand, although generally more costly to implement, have been proposed to provide better performance and allow the use of waveform optimization techniques. In this context, we use a block gradient descent approach to design a waveform set for MIMO-PMCW optimized with respect to the weighted integrated sidelobe level, and we show through comparative simulations that the proposed waveform outperforms conventional MIMO-FMCW approaches.
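A small sketch of gradient-based weighted-ISL waveform-set design is given below; it uses plain automatic differentiation over the code phases rather than the paper's block gradient descent, and the code length, set size, per-lag weights and optimizer settings are assumptions.

```python
import torch

N, K = 127, 4                         # code length and number of transmit sequences (assumed)
phases = torch.randn(K, N, requires_grad=True)
lag_weights = torch.ones(2 * N)       # per-lag weights of the weighted ISL (all-ones here)
zero_lag_mask = torch.ones(2 * N)
zero_lag_mask[0] = 0.0                # exclude the autocorrelation mainlobe at zero lag
opt = torch.optim.Adam([phases], lr=0.05)

def weighted_isl(phases):
    """Weighted ISL over all aperiodic auto- and cross-correlation sidelobes of the set."""
    x = torch.exp(1j * phases)                                # unimodular (constant-modulus) codes
    X = torch.fft.fft(x, n=2 * N, dim=-1)                     # zero-padded spectra
    loss = 0.0
    for p in range(K):
        for q in range(K):
            r = torch.fft.ifft(X[p] * torch.conj(X[q]))       # aperiodic correlation, all lags
            w = lag_weights * (zero_lag_mask if p == q else 1.0)
            loss = loss + (w * r.abs() ** 2).sum()
    return loss

for _ in range(500):
    opt.zero_grad()
    loss = weighted_isl(phases)
    loss.backward()
    opt.step()

codes = torch.exp(1j * phases.detach())                       # optimized PMCW code set
```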
{"title":"Waveform Selection for FMCW and PMCW 4D-Imaging Automotive Radar Sensors","authors":"Nazila Karimian Sichani, Moein Ahmadi, E. Raei, M. Alaee-Kerahroodi, B. M. R., E. Mehrshahi, Seyyed Ali Ghorashi","doi":"10.1109/RadarConf2351548.2023.10149733","DOIUrl":"https://doi.org/10.1109/RadarConf2351548.2023.10149733","url":null,"abstract":"The emerging 4D-imaging automotive MIMO radar sensors necessitate the selection of appropriate transmit wave-forms, which should be separable on the receive side in addition to having low auto-correlation sidelobes. TDM, FDM, DDM, and inter-chirp CDM approaches have traditionally been proposed for FMCW radar sensors to ensure the orthogonality of the transmit signals. However, as the number of transmit antennas increases, each of the aforementioned approaches suffers from some drawbacks, which are described in this paper. PMCW radars, on the other hand, can be considered to be more costly to implement, have been proposed to provide better performance and allow for the use of waveform optimization techniques. In this context, we use a block gradient descent approach to design a waveform set for MIMO-PMCW that is optimized based on weighted integrated sidelobe level in this paper, and we show that the proposed waveform outperforms conventional MIMO-FMCW approaches by performing comparative simulations.","PeriodicalId":168311,"journal":{"name":"2023 IEEE Radar Conference (RadarConf23)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131787634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}