Hybrid-Mode tracker with online SA-LSTM updater

Hongsheng Zheng, Yun Gao, Yaqing Hu, Xuejie Zhang
{"title":"Hybrid-Mode tracker with online SA-LSTM updater","authors":"Hongsheng Zheng, Yun Gao, Yaqing Hu, Xuejie Zhang","doi":"10.1007/s00521-024-10354-4","DOIUrl":null,"url":null,"abstract":"<p>The backbone network and target template are pivotal factors influencing the performance of Siamese trackers. However, traditional approaches encounter challenges in eliminating local redundancy and establishing global dependencies when learning visual data representations. While convolutional neural networks (CNNs) and vision transformers (ViTs) are commonly employed as backbones in Siamese-based trackers, each primarily addresses only one of these challenges. Furthermore, tracking is a dynamic process. Nonetheless, in many Siamese trackers, solely a fixed initial template is employed to facilitate target state matching. This approach often proves inadequate for effectively handling scenes characterized by target deformation, occlusion, and fast motion. In this paper, we propose a Hybrid-Mode Siamese tracker featuring an online SA-LSTM updater. Distinct learning operators are tailored to exploit characteristics at different depth levels of the backbone, integrating convolution and transformers to form a Hybrid-Mode backbone. This backbone efficiently learns global dependencies among input tokens while minimizing redundant computations in local domains, enhancing feature richness for target tracking. The online SA-LSTM updater comprehensively integrates spatial–temporal context during tracking, producing dynamic template features with enhanced representations of target appearance. Extensive experiments across multiple benchmark datasets, including GOT-10K, LaSOT, TrackingNet, OTB-100, UAV123, and NFS, demonstrate that the proposed method achieves outstanding performance, running at 35 FPS on a single GPU.</p>","PeriodicalId":18925,"journal":{"name":"Neural Computing and Applications","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Computing and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00521-024-10354-4","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The backbone network and the target template are pivotal factors in the performance of Siamese trackers. Traditional approaches struggle both to eliminate local redundancy and to establish global dependencies when learning visual representations; convolutional neural networks (CNNs) and vision transformers (ViTs), the backbones most commonly used in Siamese trackers, each address only one of these challenges. Furthermore, tracking is a dynamic process, yet many Siamese trackers rely solely on a fixed initial template for target-state matching, which often proves inadequate for scenes characterized by target deformation, occlusion, and fast motion. In this paper, we propose a Hybrid-Mode Siamese tracker featuring an online SA-LSTM updater. Distinct learning operators are tailored to the characteristics of different depth levels of the backbone, integrating convolution and transformers into a Hybrid-Mode backbone that efficiently learns global dependencies among input tokens while minimizing redundant computation in local domains, enriching the features available for target tracking. The online SA-LSTM updater integrates spatial-temporal context during tracking, producing dynamic template features with enhanced representations of target appearance. Extensive experiments on multiple benchmark datasets, including GOT-10k, LaSOT, TrackingNet, OTB-100, UAV123, and NFS, demonstrate that the proposed method achieves outstanding performance while running at 35 FPS on a single GPU.

