{"title":"A New Method for Object Tracking Based on Regions Instead of Contours","authors":"N. Gómez, R. Alquézar, F. Serratosa","doi":"10.1109/CVPR.2007.383454","DOIUrl":null,"url":null,"abstract":"This paper presents a new method for object tracking in video sequences that is especially suitable in very noisy environments. In such situations, segmented images from one frame to the next one are usually so different that it is very hard or even impossible to match the corresponding regions or contours of both images. With the aim of tracking objects in these situations, our approach has two main characteristics. On one hand, we assume that the tracking approaches based on contours cannot be applied, and therefore, our system uses object recognition results computed from regions (specifically, colour spots from segmented images). On the other hand, we discard to match the spots of consecutive segmented images and, consequently, the methods that represent the objects by structures such as graphs or skeletons, since the structures obtained may be too different in consecutive frames. Thus, we represent the location of tracked objects through images of probabilities that are updated dynamically using both recognition and tracking results in previous steps. 
From these probabilities and a simple prediction of the apparent motion of the object in the image, a binary decision can be made for each pixel and abject.","PeriodicalId":351008,"journal":{"name":"2007 IEEE Conference on Computer Vision and Pattern Recognition","volume":"125 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 IEEE Conference on Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPR.2007.383454","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
This paper presents a new method for object tracking in video sequences that is especially suitable in very noisy environments. In such situations, the segmented images from one frame to the next are usually so different that it is very hard, or even impossible, to match the corresponding regions or contours of the two images. With the aim of tracking objects in these situations, our approach has two main characteristics. On one hand, we assume that tracking approaches based on contours cannot be applied, and therefore our system uses object recognition results computed from regions (specifically, colour spots from segmented images). On the other hand, we do not attempt to match the spots of consecutive segmented images and, consequently, we discard methods that represent the objects by structures such as graphs or skeletons, since the structures obtained may be too different in consecutive frames. Instead, we represent the location of each tracked object through an image of probabilities that is updated dynamically using both the recognition and tracking results from previous steps. From these probabilities and a simple prediction of the apparent motion of the object in the image, a binary decision can be made for each pixel and object.
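The core idea of the abstract, a per-pixel probability map that is shifted by a simple motion prediction, blended with region-based recognition evidence, and thresholded into a binary decision, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the blending weight `alpha`, the threshold `tau`, the translation-only motion model, and the function name are all assumptions made here for clarity.

```python
import numpy as np

def update_probability_map(prob, recog, motion, alpha=0.5, tau=0.5):
    """One step of a region-based tracker sketch (illustrative only).

    prob   : (H, W) object probability map from the previous frame
    recog  : (H, W) per-pixel recognition score in [0, 1], e.g. derived
             from classifying colour spots of the segmented image
    motion : (dy, dx) predicted apparent motion of the object, in pixels
    alpha  : blending weight between the motion-compensated prior
             and the new recognition evidence (assumed, not from the paper)
    tau    : per-pixel decision threshold (assumed, not from the paper)
    """
    # Shift the previous map by the predicted motion
    # (a simple pure-translation model for illustration).
    dy, dx = motion
    predicted = np.roll(np.roll(prob, dy, axis=0), dx, axis=1)
    # Blend the motion-compensated prior with the recognition evidence.
    updated = alpha * predicted + (1.0 - alpha) * recog
    # Binary decision for each pixel: does it belong to the object?
    mask = updated > tau
    return updated, mask
```

Because the decision is made per pixel from accumulated probabilities, no spot-to-spot or contour-to-contour matching between consecutive frames is ever required, which is what makes the scheme robust when segmentations differ strongly from frame to frame.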