{"title":"基于塑料神经网络的记忆模板提取器","authors":"Yuyu Zhao, Shaowu Yang","doi":"10.1109/ICCRE51898.2021.9435660","DOIUrl":null,"url":null,"abstract":"Visual object tracking plays an important role in military guidance, human-computer interaction, and robot visual navigation. With the high performance and real-time speed, the Siamese network has become popular in recent years, which localizes the target by comparing the similarity of the appearance template and the candidate boxes in the search region. However, the appearance template is only extracted from the current frame, which results in the missing of the target for constantly changing appearance. In this paper, we propose a template extractor that can capture the latest appearance features from the previously predicted templates, named PlasticNet. We take inspiration from the memory mechanism of neuroscience (synaptic plasticity): the connections between neurons will be enhanced when they are stimulated at the same time. We combined it with recurrent networks to realize the PlasticNet. Our method can easily be integrated into existing siamese trackers. Our proposed model is applied in SiamRPN and improved performance. Extensive experiments on OTB2015, VOT2018, VOT2016 datasets demonstrate that our PlasticNet can effectively adapt to appearance changes.","PeriodicalId":382619,"journal":{"name":"2021 6th International Conference on Control and Robotics Engineering (ICCRE)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PlasticNet: A Memory Template Extractor with Plastic Neural Networks for Object Tracking\",\"authors\":\"Yuyu Zhao, Shaowu Yang\",\"doi\":\"10.1109/ICCRE51898.2021.9435660\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Visual object tracking plays an important role in military guidance, human-computer interaction, and robot visual navigation. With the high performance and real-time speed, the Siamese network has become popular in recent years, which localizes the target by comparing the similarity of the appearance template and the candidate boxes in the search region. However, the appearance template is only extracted from the current frame, which results in the missing of the target for constantly changing appearance. In this paper, we propose a template extractor that can capture the latest appearance features from the previously predicted templates, named PlasticNet. We take inspiration from the memory mechanism of neuroscience (synaptic plasticity): the connections between neurons will be enhanced when they are stimulated at the same time. We combined it with recurrent networks to realize the PlasticNet. Our method can easily be integrated into existing siamese trackers. Our proposed model is applied in SiamRPN and improved performance. 
Extensive experiments on OTB2015, VOT2018, VOT2016 datasets demonstrate that our PlasticNet can effectively adapt to appearance changes.\",\"PeriodicalId\":382619,\"journal\":{\"name\":\"2021 6th International Conference on Control and Robotics Engineering (ICCRE)\",\"volume\":\"30 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-04-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 6th International Conference on Control and Robotics Engineering (ICCRE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCRE51898.2021.9435660\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 6th International Conference on Control and Robotics Engineering (ICCRE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCRE51898.2021.9435660","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
PlasticNet: A Memory Template Extractor with Plastic Neural Networks for Object Tracking
Visual object tracking plays an important role in military guidance, human-computer interaction, and robot visual navigation. Owing to their high performance and real-time speed, Siamese networks have become popular in recent years; they localize the target by comparing the similarity between the appearance template and candidate boxes in the search region. However, the appearance template is extracted only from the current frame, which can cause the tracker to miss targets whose appearance changes constantly. In this paper, we propose PlasticNet, a template extractor that captures the latest appearance features from previously predicted templates. We take inspiration from a memory mechanism studied in neuroscience, synaptic plasticity: the connection between two neurons is strengthened when they are stimulated at the same time. We combine this mechanism with recurrent networks to realize PlasticNet. Our method can easily be integrated into existing Siamese trackers; applied to SiamRPN, it improves tracking performance. Extensive experiments on the OTB2015, VOT2018, and VOT2016 datasets demonstrate that PlasticNet effectively adapts to appearance changes.
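The abstract describes fusing previously predicted templates into an up-to-date memory template by combining Hebbian synaptic plasticity with a recurrent network. The sketch below is a rough illustration of that idea, not the authors' implementation: it follows a differentiable-plasticity-style formulation in which the recurrent weight is the sum of a fixed learned matrix and a Hebbian trace scaled by learned plasticity coefficients. All class, function, and variable names (PlasticTemplateCell, feat_dim, eta, etc.) are hypothetical.

```python
# Minimal sketch of a plastic recurrent template extractor (illustrative only;
# assumes a differentiable-plasticity formulation, not the paper's exact model).
import torch
import torch.nn as nn


class PlasticTemplateCell(nn.Module):
    """Recurrent cell whose recurrent weights include a plastic (Hebbian) part.

    Effective recurrent weight: W + alpha * Hebb, where Hebb is a running
    outer-product trace of pre- and post-synaptic activity
    ("connections are strengthened when units are co-active").
    """

    def __init__(self, feat_dim: int, eta: float = 0.1):
        super().__init__()
        self.w_in = nn.Linear(feat_dim, feat_dim)                            # projects the latest template features
        self.w_rec = nn.Parameter(0.01 * torch.randn(feat_dim, feat_dim))    # fixed recurrent weights
        self.alpha = nn.Parameter(0.01 * torch.randn(feat_dim, feat_dim))    # learned per-connection plasticity
        self.eta = eta                                                       # Hebbian trace update rate

    def forward(self, z_t, h_prev, hebb_prev):
        # z_t:       (B, D) appearance features of the latest predicted template
        # h_prev:    (B, D) previous memory-template state
        # hebb_prev: (B, D, D) previous Hebbian trace
        w_eff = self.w_rec + self.alpha * hebb_prev                          # fixed + plastic recurrent weights
        h_t = torch.tanh(self.w_in(z_t) + torch.einsum("bij,bj->bi", w_eff, h_prev))
        # Hebbian update: strengthen connections between co-active units
        hebb_t = (1 - self.eta) * hebb_prev + self.eta * torch.einsum("bi,bj->bij", h_t, h_prev)
        return h_t, hebb_t


# Toy usage: fold per-frame template features into a memory template.
cell = PlasticTemplateCell(feat_dim=256)
h = torch.zeros(1, 256)                       # memory-template state
hebb = torch.zeros(1, 256, 256)               # Hebbian trace
for frame_feat in torch.randn(5, 1, 256):     # features of 5 previously predicted templates
    h, hebb = cell(frame_feat, h, hebb)
# In a Siamese tracker such as SiamRPN, h would stand in for the fixed template
# branch that is cross-correlated with the search-region features.
```

Under these assumptions, the Hebbian trace lets the template state keep adapting at tracking time without gradient updates, which is how a plasticity-based extractor could track gradual appearance changes.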