OmniTracker: Unifying Visual Object Tracking by Tracking-With-Detection

Junke Wang, Zuxuan Wu, Dongdong Chen, Chong Luo, Xiyang Dai, Lu Yuan, Yu-Gang Jiang
{"title":"OmniTracker: Unifying Visual Object Tracking by Tracking-With-Detection","authors":"Junke Wang;Zuxuan Wu;Dongdong Chen;Chong Luo;Xiyang Dai;Lu Yuan;Yu-Gang Jiang","doi":"10.1109/TPAMI.2025.3529926","DOIUrl":null,"url":null,"abstract":"Visual Object Tracking (VOT) aims to estimate the positions of target objects in a video sequence, which is an important vision task with various real-world applications. Depending on whether the initial states of target objects are specified by provided annotations in the first frame or the categories, VOT could be classified as instance tracking (e.g., SOT and VOS) and category tracking (e.g., MOT, MOTS, and VIS) tasks. Different definitions have led to divergent solutions for these two types of tasks, resulting in redundant training expenses and parameter overhead. In this paper, combing the advantages of the best practices developed in both communities, we propose a novel tracking-with-detection paradigm, where tracking supplements appearance priors for detection and detection provides tracking with candidate bounding boxes for the association. Equipped with such a design, a unified tracking model, OmniTracker, is further presented to resolve all the tracking tasks with a fully shared network architecture, model weights, and inference pipeline, eliminating the need for task-specific architectures and reducing redundancy in model parameters. We conduct extensive experimentation on seven prominent tracking datasets of different tracking tasks, including LaSOT, TrackingNet, DAVIS16-17, MOT17, MOTS20, and YTVIS19, and demonstrate that OmniTracker achieves on-par or even better results than both task-specific and unified tracking models.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 4","pages":"3159-3174"},"PeriodicalIF":18.6000,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10842236/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Visual Object Tracking (VOT) aims to estimate the positions of target objects in a video sequence; it is an important vision task with a wide range of real-world applications. Depending on whether the initial states of target objects are specified by annotations in the first frame or by object categories, VOT can be divided into instance tracking (e.g., SOT and VOS) and category tracking (e.g., MOT, MOTS, and VIS) tasks. These different definitions have led to divergent solutions for the two types of tasks, resulting in redundant training costs and parameter overhead. In this paper, combining the advantages of the best practices developed in both communities, we propose a novel tracking-with-detection paradigm, in which tracking supplements appearance priors for detection and detection provides tracking with candidate bounding boxes for association. Equipped with such a design, a unified tracking model, OmniTracker, is further presented to resolve all the tracking tasks with a fully shared network architecture, model weights, and inference pipeline, eliminating the need for task-specific architectures and reducing redundancy in model parameters. We conduct extensive experiments on seven prominent tracking datasets covering these tasks (LaSOT, TrackingNet, DAVIS16, DAVIS17, MOT17, MOTS20, and YTVIS19) and demonstrate that OmniTracker achieves on-par or even better results than both task-specific and unified tracking models.
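To make the paradigm concrete, the following Python sketch illustrates the loop the abstract describes. It is an assumption-laden illustration, not the authors' implementation: every name in it (Track, detector, associate) is invented for this example, and the actual model realizes both directions of the loop inside a single shared network rather than as separate callables.

```python
# Minimal, illustrative sketch of the tracking-with-detection loop described
# in the abstract. All names here (Track, detector, associate) are
# hypothetical and do not come from the paper or its code release; the sketch
# only mirrors the stated data flow: tracking supplies appearance priors to
# detection, and detection returns candidate boxes for tracking to associate.

from dataclasses import dataclass, field
from typing import Any, Callable, Iterable, List, Tuple

Box = Tuple[float, float, float, float]      # (x1, y1, x2, y2)
Candidate = Tuple[Box, Any]                  # (box, appearance embedding)


@dataclass
class Track:
    track_id: int
    boxes: List[Box] = field(default_factory=list)
    feature: Any = None  # running appearance embedding, used as a prior


def track_with_detection(
    frames: Iterable[Any],
    detector: Callable[[Any, List[Any]], List[Candidate]],
    associate: Callable[[List[Track], List[Candidate]], List[Tuple[Track, Candidate]]],
) -> List[Track]:
    tracks: List[Track] = []
    next_id = 0
    for frame in frames:
        # 1. Tracking -> detection: track features act as appearance priors.
        priors = [t.feature for t in tracks]
        candidates = detector(frame, priors)

        # 2. Detection -> tracking: match candidate boxes to existing tracks.
        matches = associate(tracks, candidates)
        matched_ids = set()
        for track, cand in matches:
            box, emb = cand
            track.boxes.append(box)
            track.feature = emb  # refresh the prior for the next frame
            matched_ids.add(id(cand))

        # 3. Unmatched detections start new tracks (category tracking, e.g.
        #    MOT/MOTS/VIS); for instance tracking (SOT/VOS) the single track
        #    would instead be seeded from the first-frame annotation.
        for cand in candidates:
            if id(cand) not in matched_ids:
                box, emb = cand
                tracks.append(Track(track_id=next_id, boxes=[box], feature=emb))
                next_id += 1
    return tracks
```

The point of the unified design is visible in the sketch: the same loop serves both task families, differing only in how tracks are seeded (first-frame annotation for instance tracking, unmatched detections for category tracking).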