Pub Date: 2024-03-18 · DOI: 10.1109/THMS.2024.3370582
Fengwei Gu;Jun Lu;Chengtao Cai;Qidan Zhu;Zhaojie Ju
In complex environments, trackers are highly susceptible to interference factors such as fast motion, occlusion, and scale changes, which degrade tracking performance. The underlying cause is that trackers cannot sufficiently exploit the target feature information in these cases. Efficiently utilizing target feature information has therefore become a particularly critical issue in visual tracking. In this article, a composite transformer involving spatiotemporal features is proposed to achieve robust visual tracking. Our method develops a novel toroidal transformer to fully integrate features, together with a template refresh mechanism that supplies temporal features efficiently. Combined with a hybrid attention mechanism, the composite of temporal and spatial feature information is more conducive to mining feature associations between the template and the search region than a single feature type. To further correlate global information, the proposed method adopts a closed-loop structure of the toroidal transformer, formed by the cross-feature fusion head, to integrate features. Moreover, the designed score head serves as the basis for deciding whether the template is refreshed. Ultimately, the proposed tracker accomplishes tracking with a simple network framework, substantially simplifying existing tracking architectures. Experiments show that the proposed tracker outperforms numerous state-of-the-art methods on seven benchmarks at a real-time speed of 56.5 fps.
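The score-gated template refresh described above can be sketched as a simple conditional update: the score head produces a confidence for the current prediction, and the template is replaced only when that confidence is high enough. This is a minimal illustrative sketch; the threshold value, feature shapes, and function names are assumptions, not taken from the paper.

```python
import numpy as np

# Assumed confidence threshold for refreshing the template (illustrative only;
# the paper's actual criterion and value are not specified here).
REFRESH_THRESHOLD = 0.7

def maybe_refresh_template(template, candidate, score, threshold=REFRESH_THRESHOLD):
    """Return the template to use next frame: adopt the candidate crop only
    when the score head judges the current prediction reliable."""
    return candidate if score >= threshold else template

# Toy usage with placeholder image crops.
template = np.zeros((128, 128, 3))
candidate = np.ones((128, 128, 3))

kept = maybe_refresh_template(template, candidate, score=0.3)      # low score: keep old
refreshed = maybe_refresh_template(template, candidate, score=0.9) # high score: refresh
```

The design choice here mirrors the abstract's description: temporal features stay current only when the tracker is confident, which avoids contaminating the template with occluded or drifted targets.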
Title: RTSformer: A Robust Toroidal Transformer With Spatiotemporal Features for Visual Tracking
Journal: IEEE Transactions on Human-Machine Systems
Pub Date: 2024-03-18 · DOI: 10.1109/THMS.2024.3371099
Balint K. Hodossy;Annika S. Guez;Shibo Jing;Weiguang Huo;Ravi Vaidyanathan;Dario Farina
To control wearable robotic systems, it is critical to predict the user's motion intent with high accuracy. Surface electromyography (sEMG) recordings have often been used as inputs for these devices; however, bipolar sEMG electrodes are highly sensitive to their location. Positional shifts of electrodes after training gait prediction models can therefore result in severe performance degradation. This study uses high-density sEMG (HD-sEMG) electrodes to simulate various bipolar electrode signals from four leg muscles during steady-state walking. The bipolar signals were ranked based on the consistency of the corresponding sEMG envelope's activity and timing across gait cycles. The locations were then compared by evaluating the performance of an offline temporal convolutional network (TCN) that mapped sEMG signals to knee angles. The results showed that electrode locations with consistent sEMG envelopes yielded greater prediction accuracy than hand-aligned placements ( p
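The envelope-consistency ranking described above can be illustrated with a small sketch: compute each cycle's sEMG envelope (rectify, then smooth), and score a candidate electrode location by how strongly its per-cycle envelopes correlate with one another. The window length, the use of a moving average, and the mean pairwise correlation as the consistency score are assumptions for illustration, not the study's exact pipeline.

```python
import numpy as np

def semg_envelope(signal, win=50):
    """Rectify the raw sEMG and smooth with a moving average (assumed window)."""
    rect = np.abs(signal)
    kernel = np.ones(win) / win
    return np.convolve(rect, kernel, mode="same")

def envelope_consistency(cycles):
    """Score a candidate electrode location by the mean pairwise correlation
    of its envelopes across time-normalized gait cycles.

    cycles: array of shape (n_cycles, n_samples)."""
    envs = np.array([semg_envelope(c) for c in cycles])
    corr = np.corrcoef(envs)
    n = len(envs)
    # Mean of the off-diagonal entries (each pair counted twice, then averaged).
    return (corr.sum() - n) / (n * (n - 1))

# Toy comparison: a repeatable burst pattern vs. pure noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 1000)
consistent = np.array([np.sin(t) ** 2 + 0.05 * rng.standard_normal(1000)
                       for _ in range(8)])
inconsistent = rng.standard_normal((8, 1000))

score_good = envelope_consistency(consistent)
score_bad = envelope_consistency(inconsistent)
```

Ranking locations by a score like this would favor placements whose activation timing repeats cycle to cycle, which is the property the study links to better downstream TCN prediction accuracy.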