A Target Detection Method Based on the Fusion Algorithm of Radar and Camera
Sheng Zhuang, Lin Cao, Zongmin Zhao, Dongfeng Wang
2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC), published 2021-11-17
DOI: 10.1109/IC-NIDC54101.2021.9660407
Citations: 0
Abstract
The radar-video fusion method presented in this paper targets the detection of surrounding objects while driving. Combining multiple sensors is a common way to improve robustness and accuracy, which makes sensor fusion a key part of the perception system. We propose a new fusion method, CT-EPNP, which uses radar and camera data for fast detection. It adds a center-based fusion algorithm on top of EPNP and uses a truncated-cone (frustum) method to compensate for radar information when mapping it onto the associated image. CT-EPNP regresses object attributes such as depth, rotation, and velocity. On this basis, the method is validated by simulation and supported by derivations of the relevant mathematical formulas. We combined the improved algorithm with the RetinaNet model, ensuring that the model meets the requirements of normal vehicle driving while achieving a measurable increase in detection rate. We also improved the suppression of repeated detections without using any additional temporal information.
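The truncated-cone (frustum) association mentioned in the abstract can be sketched roughly as follows: radar returns are projected into the image with the camera intrinsics, and a 2D detection box is associated with the radar points whose projections fall inside it (i.e., inside the frustum the box back-projects to). This is a minimal illustrative sketch under assumed conventions, not the paper's actual CT-EPNP implementation; the function names, the depth gate, and the choice of taking the nearest in-frustum return are assumptions.

```python
import numpy as np

def project_points(points_3d, K):
    """Project radar points (N, 3) given in camera coordinates onto the
    image plane using the 3x3 intrinsic matrix K. Returns (N, 2) pixels."""
    uvw = points_3d @ K.T                  # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide

def frustum_associate(box, points_3d, pixels, max_depth=100.0):
    """Associate a 2D detection box (x1, y1, x2, y2) with radar points whose
    projections fall inside it, i.e. inside the box's back-projected frustum.
    Returns the depth (z) of the nearest such point, or None if no radar
    return lies in the frustum. The depth gate max_depth is an assumption."""
    x1, y1, x2, y2 = box
    inside = ((pixels[:, 0] >= x1) & (pixels[:, 0] <= x2) &
              (pixels[:, 1] >= y1) & (pixels[:, 1] <= y2) &
              (points_3d[:, 2] > 0) & (points_3d[:, 2] <= max_depth))
    if not inside.any():
        return None
    return float(points_3d[inside, 2].min())
```

In a full pipeline, the recovered depth (and the radar's radial velocity) would then be attached to the image detection as extra regression targets, which is how the abstract's "compensating the radar information on the associated image" can be read.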