How to Differentiate Between Near Field and Far Field: Revisiting the Rayleigh Distance
Shu Sun, Renwang Li, Chong Han, Xingchen Liu, Liuxun Xue, Meixia Tao
IEEE Communications Magazine. Pub Date: 2025-01-01. DOI: 10.1109/mcom.001.2400007
Enabling deformation slack in tracking with temporally even correlation filters
Yuanming Zhang, Huihui Pan, Jue Wang
Neural Networks, vol. 181, art. 106839. Pub Date: 2025-01-01 (Epub 2024-10-29). DOI: 10.1016/j.neunet.2024.106839
Discriminative correlation filters with temporal regularization have recently attracted much attention in mobile video tracking due to the challenges of target occlusion and background interference. However, rigidly penalizing template variability between adjacent frames makes trackers sluggish in adapting to target evolution, leading to inaccurate responses or even tracking failure when deformation occurs. In this paper, we address the problem of instant template learning when the target undergoes drastic variations in appearance and aspect ratio. We first propose a temporally even model featuring deformation slack, which theoretically supports the template's ability to respond quickly to variations while suppressing disturbances. We then formulate an optimal derivation of our model and deduce closed-form solutions to facilitate the algorithm's implementation. Further, we introduce a cyclic-shift methodology for mirror factors to achieve scale estimation under varying aspect ratios, thereby dramatically improving cross-area accuracy. Comprehensive comparisons on seven datasets (DroneTB-70, VisDrone-SOT2019, VOT-2019, LaSOT, TC-128, UAV-20L, and UAVDT) demonstrate excellent performance. Our approach runs at 16.9 frames per second on a low-cost CPU, which makes it suitable for tracking on drones. The code and raw results will be made publicly available at https://github.com/visualperceptlab/TEDS.
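The abstract's appeal to closed-form solutions follows the usual discriminative-correlation-filter pattern: train a filter by ridge regression solved per frequency in the Fourier domain, then localize the target at the peak of a correlation response map. A minimal single-channel sketch of that baseline follows (this is the generic DCF formulation, not the paper's temporally even model with deformation slack; all function names here are illustrative):

```python
import numpy as np

def train_dcf(x, y, lam=1e-2):
    """Closed-form DCF training: solve min_h ||h * x - y||^2 + lam ||h||^2
    per frequency, giving H = conj(X) Y / (conj(X) X + lam)."""
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def respond(H, z):
    """Correlate a search patch z with the learned filter; the peak of the
    real-valued response map is the predicted target location."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(z)))

# Toy usage: regress a training patch onto a Gaussian label centered
# at (32, 32); responding to the same patch recovers a peak there.
size = 64
yy, xx = np.mgrid[:size, :size]
label = np.exp(-((xx - size // 2) ** 2 + (yy - size // 2) ** 2) / (2 * 3.0 ** 2))
patch = np.random.rand(size, size)
H = train_dcf(patch, label)
resp = respond(H, patch)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

Temporally regularized variants add a penalty tying the current filter to the previous frame's filter; the paper's contribution, per the abstract, is slackening that penalty so the template can track drastic deformation.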
Resource Allocation for eMBB/URLLC Coexistence in Massive MIMO Industrial Automation
Jiaxing Fang, Pengcheng Zhu, Bo Ai, Fu-Chun Zheng, Xiaohu You
IEEE Internet of Things Journal. Pub Date: 2025-01-01. DOI: 10.1109/jiot.2024.3524615