Enhanced Target Tracking Algorithm for Autonomous Driving Based on Visible and Infrared Image Fusion

Quan Yuan;Haixu Shi;Ashton Tan Yu Xuan;Ming Gao;Qing Xu;Jianqiang Wang
{"title":"基于可见光和红外图像融合的增强型自动驾驶目标跟踪算法","authors":"Quan Yuan;Haixu Shi;Ashton Tan Yu Xuan;Ming Gao;Qing Xu;Jianqiang Wang","doi":"10.26599/JICV.2023.9210018","DOIUrl":null,"url":null,"abstract":"In autonomous driving, target tracking is essential to environmental perception. The study of target tracking algorithms can improve the accuracy of an autonomous driving vehicle's perception, which is of great significance in ensuring the safety of autonomous driving and promoting the landing of technical applications. This study focuses on the fusion tracking algorithm based on visible and infrared images. The proposed approach utilizes a feature-level image fusion method, dividing the tracking process into two components: image fusion and target tracking. An unsupervised network, Visible and Infrared image Fusion Network (VIF-net), is employed for visible and infrared image fusion in the image fusion part. In the target tracking part, Siamese Region Proposal Network (SiamRPN), based on deep learning, tracks the target with fused images. The fusion tracking algorithm is trained and evaluated on the visible infrared image dataset RGBT234. Experimental results demonstrate that the algorithm outperforms training networks solely based on visible images, proving that the fusion of visible and infrared images in the target tracking algorithm can improve the accuracy of the target tracking even if it is like tracking-based visual images. This improvement is also attributed to the algorithm's ability to extract infrared image features, augmenting the target tracking accuracy.","PeriodicalId":100793,"journal":{"name":"Journal of Intelligent and Connected Vehicles","volume":"6 4","pages":"237-249"},"PeriodicalIF":0.0000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10409225","citationCount":"0","resultStr":"{\"title\":\"Enhanced Target Tracking Algorithm for Autonomous Driving Based on Visible and Infrared Image Fusion\",\"authors\":\"Quan Yuan;Haixu Shi;Ashton Tan Yu Xuan;Ming Gao;Qing Xu;Jianqiang Wang\",\"doi\":\"10.26599/JICV.2023.9210018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In autonomous driving, target tracking is essential to environmental perception. The study of target tracking algorithms can improve the accuracy of an autonomous driving vehicle's perception, which is of great significance in ensuring the safety of autonomous driving and promoting the landing of technical applications. This study focuses on the fusion tracking algorithm based on visible and infrared images. The proposed approach utilizes a feature-level image fusion method, dividing the tracking process into two components: image fusion and target tracking. An unsupervised network, Visible and Infrared image Fusion Network (VIF-net), is employed for visible and infrared image fusion in the image fusion part. In the target tracking part, Siamese Region Proposal Network (SiamRPN), based on deep learning, tracks the target with fused images. The fusion tracking algorithm is trained and evaluated on the visible infrared image dataset RGBT234. Experimental results demonstrate that the algorithm outperforms training networks solely based on visible images, proving that the fusion of visible and infrared images in the target tracking algorithm can improve the accuracy of the target tracking even if it is like tracking-based visual images. 
This improvement is also attributed to the algorithm's ability to extract infrared image features, augmenting the target tracking accuracy.\",\"PeriodicalId\":100793,\"journal\":{\"name\":\"Journal of Intelligent and Connected Vehicles\",\"volume\":\"6 4\",\"pages\":\"237-249\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10409225\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Intelligent and Connected Vehicles\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10409225/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Intelligent and Connected Vehicles","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10409225/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

In autonomous driving, target tracking is essential to environmental perception. Research on target tracking algorithms can improve the accuracy of an autonomous vehicle's perception, which is of great significance for ensuring the safety of autonomous driving and promoting the practical deployment of the technology. This study focuses on a fusion tracking algorithm based on visible and infrared images. The proposed approach adopts feature-level image fusion and divides the tracking process into two components: image fusion and target tracking. In the image fusion component, an unsupervised network, the Visible and Infrared image Fusion Network (VIF-net), fuses the visible and infrared images. In the target tracking component, a deep-learning-based Siamese Region Proposal Network (SiamRPN) tracks the target on the fused images. The fusion tracking algorithm is trained and evaluated on the visible-infrared image dataset RGBT234. Experimental results demonstrate that the algorithm outperforms a network trained solely on visible images, showing that fusing visible and infrared images in the target tracking algorithm improves tracking accuracy compared with tracking based on visible images alone. This improvement is attributed to the algorithm's ability to extract infrared image features, which further augments the target tracking accuracy.
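
The pipeline described above separates fusion from tracking: each aligned visible-infrared frame pair is first fused into a single image, which is then fed to a Siamese tracker. The snippet below is a minimal, hypothetical sketch of that data flow in PyTorch. FusionNetStub and SiameseTrackerStub are simplified placeholders, not the actual VIF-net or SiamRPN architectures (which the abstract does not detail); they only illustrate how the two stages connect.

```python
# Minimal sketch of the fusion-then-track pipeline. FusionNetStub and
# SiameseTrackerStub are hypothetical placeholders for VIF-net and SiamRPN;
# they only illustrate the data flow, not the published architectures.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionNetStub(nn.Module):
    """Placeholder for VIF-net: feature-level fusion of a visible/infrared pair."""

    def __init__(self):
        super().__init__()
        self.vis_enc = nn.Conv2d(3, 16, 3, padding=1)  # visible-image features
        self.ir_enc = nn.Conv2d(1, 16, 3, padding=1)   # infrared-image features
        self.decoder = nn.Conv2d(32, 3, 3, padding=1)  # reconstruct a fused image

    def forward(self, vis, ir):
        feats = torch.cat([self.vis_enc(vis), self.ir_enc(ir)], dim=1)
        return self.decoder(feats)


class SiameseTrackerStub(nn.Module):
    """Placeholder for SiamRPN: correlates a target template with a search region."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 8, 3, padding=1)  # shared feature extractor

    def forward(self, template, search):
        t = self.backbone(template)  # exemplar (target) features, batch size 1 assumed
        s = self.backbone(search)    # search-region features
        # Cross-correlation: the template features act as a convolution kernel;
        # the peak of the response map indicates the target position.
        return F.conv2d(s, t)


if __name__ == "__main__":
    fuse, track = FusionNetStub(), SiameseTrackerStub()
    vis_t, ir_t = torch.rand(1, 3, 127, 127), torch.rand(1, 1, 127, 127)  # first-frame target crop
    vis_s, ir_s = torch.rand(1, 3, 255, 255), torch.rand(1, 1, 255, 255)  # current search region
    template = fuse(vis_t, ir_t)        # stage 1: fuse visible + infrared
    search = fuse(vis_s, ir_s)
    response = track(template, search)  # stage 2: Siamese tracking on fused images
    print(response.shape)               # torch.Size([1, 1, 129, 129])
```

In the actual method, VIF-net is trained without supervision and SiamRPN regresses bounding boxes through its region proposal head; both are trained and evaluated on RGBT234, details that this sketch omits.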