Fengwei Gu;Jun Lu;Chengtao Cai;Qidan Zhu;Zhaojie Ju
{"title":"RTSformer: A Robust Toroidal Transformer With Spatiotemporal Features for Visual Tracking","authors":"Fengwei Gu;Jun Lu;Chengtao Cai;Qidan Zhu;Zhaojie Ju","doi":"10.1109/THMS.2024.3370582","DOIUrl":null,"url":null,"abstract":"In complex environments, trackers are extremely susceptible to some interference factors, such as fast motions, occlusion, and scale changes, which result in poor tracking performance. The reason is that trackers cannot sufficiently utilize the target feature information in these cases. Therefore, it has become a particularly critical issue in the field of visual tracking to utilize the target feature information efficiently. In this article, a composite transformer involving spatiotemporal features is proposed to achieve robust visual tracking. Our method develops a novel toroidal transformer to fully integrate features while designing a template refresh mechanism to provide temporal features efficiently. Combined with the hybrid attention mechanism, the composite of temporal and spatial feature information is more conducive to mining feature associations between the template and search region than a single feature. To further correlate the global information, the proposed method adopts a closed-loop structure of the toroidal transformer formed by the cross-feature fusion head to integrate features. Moreover, the designed score head is used as a basis for judging whether the template is refreshed. Ultimately, the proposed tracker can achieve the tracking task only through a simple network framework, which especially simplifies the existing tracking architectures. Experiments show that the proposed tracker outperforms extensive state-of-the-art methods on seven benchmarks at a real-time speed of 56.5 fps.","PeriodicalId":48916,"journal":{"name":"IEEE Transactions on Human-Machine Systems","volume":null,"pages":null},"PeriodicalIF":3.5000,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Human-Machine Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10474210/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
In complex environments, trackers are highly susceptible to interference factors such as fast motion, occlusion, and scale changes, which degrade tracking performance because the target's feature information cannot be exploited sufficiently in these cases. Efficiently utilizing target feature information has therefore become a particularly critical issue in visual tracking. In this article, a composite transformer involving spatiotemporal features is proposed to achieve robust visual tracking. The method develops a novel toroidal transformer to fully integrate features and designs a template refresh mechanism to supply temporal features efficiently. Combined with a hybrid attention mechanism, the composite of temporal and spatial feature information is more conducive to mining feature associations between the template and the search region than a single feature alone. To further correlate global information, the proposed method adopts a closed-loop toroidal-transformer structure, formed by a cross-feature fusion head, to integrate features. In addition, a designed score head serves as the basis for deciding whether the template is refreshed. Ultimately, the proposed tracker accomplishes the tracking task with only a simple network framework, notably simplifying existing tracking architectures. Experiments show that the proposed tracker outperforms a wide range of state-of-the-art methods on seven benchmarks at a real-time speed of 56.5 fps.
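The abstract describes two mechanisms at a high level: cross-attention fusion between template and search-region features, and a score head that gates template refresh. The following is a minimal, hypothetical sketch of how such components could be wired together; it is not the authors' implementation, and names such as CrossFusionBlock, ScoreHead, maybe_refresh_template, the feature dimensions, and the 0.7 refresh threshold are all assumptions made for illustration.

```python
# Hypothetical sketch of cross-feature fusion plus a score-gated template refresh,
# loosely following the mechanisms described in the abstract (not the paper's code).
import torch
import torch.nn as nn


class CrossFusionBlock(nn.Module):
    """Cross-attend search-region tokens to template tokens."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, search: torch.Tensor, template: torch.Tensor) -> torch.Tensor:
        # search: (B, N_s, C) tokens; template: (B, N_t, C) tokens.
        fused, _ = self.attn(query=search, key=template, value=template)
        x = self.norm(search + fused)
        return x + self.mlp(x)


class ScoreHead(nn.Module):
    """Map fused features to a scalar confidence used to gate template refresh."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        # Pool fused search tokens and squash to a confidence in [0, 1].
        return torch.sigmoid(self.proj(fused.mean(dim=1)))


def maybe_refresh_template(template: torch.Tensor,
                           candidate: torch.Tensor,
                           score: torch.Tensor,
                           threshold: float = 0.7) -> torch.Tensor:
    # Refresh the temporal template only when confidence is high, so that
    # unreliable frames (occlusion, fast motion) do not corrupt it.
    return candidate if score.item() > threshold else template


if __name__ == "__main__":
    B, N_t, N_s, C = 1, 64, 256, 256
    template = torch.randn(B, N_t, C)   # current template tokens
    search = torch.randn(B, N_s, C)     # current search-region tokens
    block, score_head = CrossFusionBlock(C), ScoreHead(C)

    fused = block(search, template)
    score = score_head(fused)
    template = maybe_refresh_template(template, search[:, :N_t, :], score)
    print(fused.shape, score.item())
```

In this sketch the refresh decision is a simple thresholded confidence; the paper's score head and closed-loop toroidal structure are more involved, but the gating idea, keeping the template fixed when the current frame looks unreliable, is the same.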
Journal Introduction:
The scope of the IEEE Transactions on Human-Machine Systems includes the field of human-machine systems. It covers human systems and human-organizational interactions, including cognitive ergonomics, system test and evaluation, and human information processing concerns in systems and organizations.