CrossEI: Boosting Motion-Oriented Object Tracking With an Event Camera

Zhiwen Chen;Jinjian Wu;Weisheng Dong;Leida Li;Guangming Shi
{"title":"交叉:用事件相机增强面向运动的对象跟踪","authors":"Zhiwen Chen;Jinjian Wu;Weisheng Dong;Leida Li;Guangming Shi","doi":"10.1109/TIP.2024.3505672","DOIUrl":null,"url":null,"abstract":"With the differential sensitivity and high time resolution, event cameras can record detailed motion clues, which form a complementary advantage with frame-based cameras to enhance the object tracking, especially in challenging dynamic scenes. However, how to better match heterogeneous event-image data and exploit rich complementary cues from them still remains an open issue. In this paper, we align event-image modalities by proposing a motion adaptive event sampling method, and we revisit the cross-complementarities of event-image data to design a bidirectional-enhanced fusion framework. Specifically, this sampling strategy can adapt to different dynamic scenes and integrate aligned event-image pairs. Besides, we design an image-guided motion estimation unit for extracting explicit instance-level motions, aiming at refining the uncertain event clues to distinguish primary objects and background. Then, a semantic modulation module is devised to utilize the enhanced object motion to modify the image features. Coupled with these two modules, this framework learns both the high motion sensitivity of events and the full texture of images to achieve more accurate and robust tracking. The proposed method is easily embedded in existing tracking pipelines, and trained end-to-end. We evaluate it on four large benchmarks, i.e. FE108, VisEvent, FE240hz and CoeSot. Extensive experiments demonstrate our method achieves state-of-the-art performance, and large improvements are pointed as contributions by our sampling strategy and fusion concept.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"73-84"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CrossEI: Boosting Motion-Oriented Object Tracking With an Event Camera\",\"authors\":\"Zhiwen Chen;Jinjian Wu;Weisheng Dong;Leida Li;Guangming Shi\",\"doi\":\"10.1109/TIP.2024.3505672\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the differential sensitivity and high time resolution, event cameras can record detailed motion clues, which form a complementary advantage with frame-based cameras to enhance the object tracking, especially in challenging dynamic scenes. However, how to better match heterogeneous event-image data and exploit rich complementary cues from them still remains an open issue. In this paper, we align event-image modalities by proposing a motion adaptive event sampling method, and we revisit the cross-complementarities of event-image data to design a bidirectional-enhanced fusion framework. Specifically, this sampling strategy can adapt to different dynamic scenes and integrate aligned event-image pairs. Besides, we design an image-guided motion estimation unit for extracting explicit instance-level motions, aiming at refining the uncertain event clues to distinguish primary objects and background. Then, a semantic modulation module is devised to utilize the enhanced object motion to modify the image features. Coupled with these two modules, this framework learns both the high motion sensitivity of events and the full texture of images to achieve more accurate and robust tracking. 
The proposed method is easily embedded in existing tracking pipelines, and trained end-to-end. We evaluate it on four large benchmarks, i.e. FE108, VisEvent, FE240hz and CoeSot. Extensive experiments demonstrate our method achieves state-of-the-art performance, and large improvements are pointed as contributions by our sampling strategy and fusion concept.\",\"PeriodicalId\":94032,\"journal\":{\"name\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"volume\":\"34 \",\"pages\":\"73-84\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-12-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10776574/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10776574/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With their differential sensitivity and high temporal resolution, event cameras can record detailed motion cues, forming a complementary advantage with frame-based cameras to enhance object tracking, especially in challenging dynamic scenes. However, how to better match heterogeneous event-image data and exploit the rich complementary cues they provide remains an open issue. In this paper, we align event-image modalities by proposing a motion-adaptive event sampling method, and we revisit the cross-complementarities of event-image data to design a bidirectional-enhanced fusion framework. Specifically, the sampling strategy adapts to different dynamic scenes and integrates aligned event-image pairs. In addition, we design an image-guided motion estimation unit that extracts explicit instance-level motions, refining the uncertain event cues to distinguish primary objects from background. A semantic modulation module is then devised to exploit the enhanced object motion to modify the image features. Coupled with these two modules, the framework learns both the high motion sensitivity of events and the full texture of images to achieve more accurate and robust tracking. The proposed method is easily embedded in existing tracking pipelines and trained end-to-end. We evaluate it on four large benchmarks, i.e., FE108, VisEvent, FE240hz and CoeSot. Extensive experiments demonstrate that our method achieves state-of-the-art performance, with large improvements attributable to our sampling strategy and fusion concept.
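
The abstract describes the pipeline only at a high level, so the following is a minimal PyTorch sketch of the two ideas it names: a motion-adaptive way of binning raw events, and a bidirectional block in which image features refine noisy event motion and the refined motion in turn modulates the image features. Every name, shape, and heuristic here (motion_adaptive_sampling, BidirectionalFusion, the density-based bin count, the assumed 346x260 sensor resolution) is an assumption for illustration, not the paper's actual implementation.

```python
# Hedged sketch only: the real CrossEI modules are not specified in the abstract.
import torch
import torch.nn as nn


def motion_adaptive_sampling(events, num_bins_range=(4, 16)):
    """Bin an event stream (N, 4) of (x, y, t, polarity) floats into a voxel grid
    whose temporal bin count grows with event density, so faster scenes get finer
    temporal slices (hypothetical heuristic standing in for the paper's sampler)."""
    density = events.shape[0]
    lo, hi = num_bins_range
    num_bins = int(min(hi, max(lo, density // 10_000)))
    t = events[:, 2]
    t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9)
    bin_idx = (t_norm * (num_bins - 1)).long()
    H, W = 260, 346  # assumed sensor resolution (e.g. a DAVIS346-style camera)
    voxel = torch.zeros(num_bins, H, W)
    x = events[:, 0].long().clamp(0, W - 1)
    y = events[:, 1].long().clamp(0, H - 1)
    # Accumulate polarity per (bin, y, x) cell.
    voxel.index_put_((bin_idx, y, x), events[:, 3], accumulate=True)
    return voxel


class BidirectionalFusion(nn.Module):
    """Image features refine event motion; the refined motion then gates the image
    features (a rough analogue of the image-guided motion estimation unit coupled
    with the semantic modulation module)."""

    def __init__(self, channels=256):
        super().__init__()
        self.motion_refine = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.modulation = nn.Sequential(
            nn.Conv2d(channels, channels, 1), nn.Sigmoid(),
        )

    def forward(self, img_feat, evt_feat):
        # Image-guided motion estimation: refine event features with image context.
        motion = self.motion_refine(torch.cat([img_feat, evt_feat], dim=1))
        # Semantic modulation: gate image features by the refined motion map.
        return img_feat * self.modulation(motion) + motion


if __name__ == "__main__":
    ev = torch.rand(50_000, 4) * torch.tensor([346.0, 260.0, 1.0, 1.0])
    voxel = motion_adaptive_sampling(ev)   # (num_bins, 260, 346)
    fusion = BidirectionalFusion(channels=8)
    out = fusion(torch.randn(1, 8, 32, 32), torch.randn(1, 8, 32, 32))  # (1, 8, 32, 32)
```

In the full method such a block would sit inside an existing tracker backbone and be trained end-to-end with the rest of the pipeline, consistent with the abstract's claim that the approach embeds easily into existing trackers.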