
2011 Third Chinese Conference on Intelligent Visual Surveillance: Latest Publications

A rain detection and removal method in video image
Pub Date : 2011-12-01 DOI: 10.1109/IVSURV.2011.6157010
Meibin Qi, B. Geng, Jianguo Jiang, Tiao Wang
Due to the random spatial distribution and fast motion of rain, removing rain from video is a difficult problem. This paper presents a background-subtraction method based on a sample model to remove rain. First, the properties of rain are analyzed; then a sample model is built for each pixel from values drawn at random from its spatial neighborhood in the first frame, so that rain detected by background subtraction can be classified more accurately. In addition, because moving objects cause significant changes in pixel brightness, the H component of the HSI color space is used to reduce the impact of moving objects on rain removal. Experimental results show that, compared with existing methods, this method not only removes rain effectively but also runs faster.
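The per-pixel sample model described in the abstract resembles sample-based background subtraction: each pixel's model is seeded with values drawn at random from its spatial neighborhood in the first frame, and a pixel is flagged as rain/foreground when too few samples match it. A minimal grayscale sketch, where `n_samples`, `radius`, and both thresholds are assumed values rather than the paper's:

```python
import numpy as np

def init_sample_model(first_frame, n_samples=20, radius=1, seed=0):
    """Seed a per-pixel sample model with values drawn at random from
    each pixel's spatial neighborhood in the first frame."""
    rng = np.random.default_rng(seed)
    h, w = first_frame.shape
    padded = np.pad(first_frame, radius, mode="edge")
    samples = np.empty((n_samples, h, w), dtype=first_frame.dtype)
    for k in range(n_samples):
        # Random neighbor offset for every pixel, independently per sample.
        dy = rng.integers(-radius, radius + 1, size=(h, w))
        dx = rng.integers(-radius, radius + 1, size=(h, w))
        ys = np.arange(h)[:, None] + radius + dy
        xs = np.arange(w)[None, :] + radius + dx
        samples[k] = padded[ys, xs]
    return samples

def detect_foreground(frame, samples, match_thresh=20, min_matches=2):
    """A pixel is background when enough model samples lie within
    match_thresh of it; otherwise it is a rain/foreground candidate."""
    diffs = np.abs(samples.astype(np.int16) - frame.astype(np.int16))
    matches = (diffs < match_thresh).sum(axis=0)
    return matches < min_matches
```

A real pipeline would additionally update the model over time and apply the H-component test to suppress moving objects, as the abstract describes.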
Citations: 5
Crowd segmentation based on fusion of appearance and motion features
Pub Date : 2011-12-01 DOI: 10.1109/IVSURV.2011.6157036
Ya-Li Hou, G. Pang
Crowd segmentation is an important topic in visual surveillance systems. In this paper, crowd segmentation is formulated as the problem of clustering the feature points inside the foreground region with a set of rectangles. The coherent motion of feature points within an individual is fused with appearance cues around the feature points, which improves segmentation performance. Furthermore, three descriptors are proposed to extract points with non-articulated movement. Results on the CAVIAR dataset show that the coherent-motion cue can be used more reliably by considering only points with rigid motion.
Citations: 1
High-performance drosophila movement tracking
Pub Date : 2011-12-01 DOI: 10.1109/IVSURV.2011.6157027
Xiaoyi Yu, Han Zhou, Lingyi Wu, Qingfeng Liu
Machine vision systems for automated monitoring and analysis of social behavior in Drosophila have been designed by Herko Dankert, and Ctrax (the Caltech Multiple Fly Tracker) is implemented for tracking Drosophila movement. However, these machine vision methods are sophisticated enough to be hard to use for researchers without a computer-science background, and most machine vision solutions perform poorly in real-time environments. Our work focuses on developing a high-performance application that tracks moving Drosophila and generates reliable tracks from video. A lightweight solution for automatically tracking Drosophila in video is implemented, based on object motion in different parts of the scene. The test results show that our solution addresses the performance, usability, and accuracy issues of machine vision for biological research.
Citations: 0
A comparison study on kernel based online learning for moving object classification
Pub Date : 2011-12-01 DOI: 10.1109/IVSURV.2011.6157014
Xin Zhao, Kaiqi Huang, T. Tan
Most visual surveillance and video understanding systems require knowledge of the categories of objects in the scene. One of the key challenges is to classify any object in real time despite changes in the scene over time and the varying appearance or shape of objects. In this paper, we explore the application of kernel-based online learning methods to these problems. We evaluate the performance of recently developed kernel-based online algorithms combined with a state-of-the-art local shape feature descriptor. Experimental evaluation on our dataset demonstrates that the online algorithms can be highly accurate for moving-object classification.
Citations: 0
A multi-faces tracking and recognition framework for surveillance system
Pub Date : 2011-12-01 DOI: 10.1109/IVSURV.2011.6157035
Huafeng Wang, Yunhong Wang, Zhaoxiang Zhang, Fan Wang, Jin Huang
A novel framework for unsupervised multi-face tracking and recognition is built on the Detection-Tracking-Recognition (DTR) approach. The framework proposes a hybrid face detector for real-time face tracking that is robust to occlusions and posture changes. Faces acquired during the unsupervised detection stage are further processed by the SIFT operator to cluster face sequences into groups; the related faces are then put together, which is important for face recognition in videos. The framework is validated on several videos collected under unconstrained conditions (20 min each) and can robustly track faces and automatically group a series of faces belonging to a single person in an unlabeled video.
Citations: 1
A fuzzy identification method of contours based on chain-code features
Pub Date : 2011-12-01 DOI: 10.1109/IVSURV.2011.6157018
Qi Dong, Guangping Xu, Liu Jing, Yanbing Xue, Hua Zhang
Contour analysis and identification are important aspects of visual surveillance research. This paper proposes a fuzzy identification method for contours. First, based on the chain-code description of a contour, the method uses statistical features of contours, including the chain-code entropy and chain-code spatial-distribution entropies, to compose the contour's feature vector. Then, the method generates a contour pattern from contour samples and uses the approaching principle to identify a contour. Since the method effectively integrates multiple statistical features of the chain code and employs a fuzzy pattern-recognition technique, experiments show quantitatively that it achieves good results on various metrics.
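The chain-code entropy underlying the feature vector can be illustrated as the Shannon entropy of the 8-direction chain-code histogram; this is a sketch, and the paper's spatial-distribution entropies are not reproduced here:

```python
import math
from collections import Counter

def chain_code_entropy(chain_code):
    """Shannon entropy (bits) of the 8-direction chain-code histogram.
    A uniform mix of all 8 directions gives the maximum, 3 bits;
    a straight contour (one direction only) gives 0."""
    counts = Counter(chain_code)
    n = len(chain_code)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```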
Citations: 0
Shadow removal using Retinex theory
Pub Date : 2011-12-01 DOI: 10.1109/IVSURV.2011.6157016
Guowen Ma, Jinfeng Yang
Shadows in images often cause problems for computer vision tasks, so shadow removal is an important topic in image processing. In this paper, we propose a new shadow removal method based on Retinex theory. We first use gradient edge detection combined with a 1-D illumination-invariant image to detect the shadow region, then remove the shadow with a Retinex algorithm, and finally adjust the brightness of the shadow region. By transforming the RGB image into HSV space, we compute the average brightness of the non-shadow region in the original and shadow-free images and adjust the image brightness to match the original image. Experimental results show that our method performs well on shadow images.
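The final brightness-adjustment step, bringing the shadow region's average V (brightness) in HSV up to the non-shadow average, might be sketched as below; the exact scaling rule is an assumption, since the abstract does not specify it:

```python
import numpy as np

def adjust_shadow_brightness(v_channel, shadow_mask):
    """Scale the V channel inside the shadow mask so its mean matches
    the mean of the non-shadow region (assumed multiplicative rule)."""
    v = v_channel.astype(np.float64)
    mean_lit = v[~shadow_mask].mean()
    mean_shadow = v[shadow_mask].mean()
    out = v.copy()
    out[shadow_mask] *= mean_lit / mean_shadow
    return np.clip(out, 0, 255).astype(np.uint8)
```

In practice this would be applied after converting the RGB image to HSV and detecting the shadow mask, then converting back.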
Citations: 7
An attempt to pedestrian detection in depth images
Pub Date : 2011-12-01 DOI: 10.1109/IVSURV.2011.6157034
Shengyin Wu, Shiqi Yu, Wensheng Chen
We investigate pedestrian detection in depth images. Unlike pedestrian detection in intensity images, detection in depth images can reduce the effect of complex backgrounds and illumination variation. We propose a new feature descriptor for this task, the Histogram of Depth Difference (HDD). The HDD descriptor describes the depth variation in a local region, much as the Histogram of Oriented Gradients (HOG) describes local texture cues. To evaluate pedestrian detection in depth images, we also collected a large dataset containing not only depth images but also synchronized intensity images, with 4673 pedestrian samples. Our experimental results show that detecting pedestrians in depth images is feasible. We also fuse the HDD feature from depth images with HOG from intensity images; the fused feature gives an encouraging detection rate of 99.12% at FPPW = 10^-4.
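As a rough illustration, an HDD cell histogram could bin the absolute depth differences between neighboring pixels, analogous to a HOG cell; the neighborhood, bin count, and range below are assumptions, not the paper's parameters:

```python
import numpy as np

def hdd_cell(depth_patch, n_bins=8, max_diff=64):
    """Histogram of Depth Difference for one cell: bin the absolute
    depth differences between each pixel and its right and bottom
    neighbors, then L1-normalize (as HOG normalizes cell histograms)."""
    p = depth_patch.astype(np.int32)
    dx = np.abs(np.diff(p, axis=1)).ravel()  # horizontal neighbor diffs
    dy = np.abs(np.diff(p, axis=0)).ravel()  # vertical neighbor diffs
    diffs = np.clip(np.concatenate([dx, dy]), 0, max_diff - 1)
    hist, _ = np.histogram(diffs, bins=n_bins, range=(0, max_diff))
    return hist / max(hist.sum(), 1)
```

A full descriptor would concatenate such cell histograms over a detection window, mirroring the HOG pipeline.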
Citations: 32
A brief review on visual tracking methods
Pub Date : 2011-12-01 DOI: 10.1109/IVSURV.2011.6157020
Xiang Xiang
Long-term robust visual tracking is still a challenge, primarily due to appearance changes of the scene and target. In this paper, we briefly review recent progress in image representation, appearance models, and motion models for building a general tracking system. The models reviewed here are basic enough to apply to tracking either a single target or multiple targets. Special attention is paid to online adaptation of the appearance model, a recent hot topic; its key techniques are discussed, such as the classifier issue, online operation, sample selection, and the drifting problem. We note that recent state-of-the-art performance is generally achieved by a class of online boosting or 'tracking-by-detection' methods (e.g., OnlineBoost, SemiBoost, MIL-Track, TLD). Therefore, we validate them together with typical traditional methods (e.g., template matching, Mean Shift, optical flow, particle filter, FragTrack) on a challenging single-person tracking sequence, and qualitative comparison results are presented.
Citations: 6
Design of algorithms for video monitoring and state recognition system for complex equipments
Pub Date : 2011-12-01 DOI: 10.1109/IVSURV.2011.6157017
Xiao-gang Yang, Chuan Li, Bin-wen Chen, Fei Meng, Zhaohui Xia
According to the technical requirements of a video monitoring and state recognition system for complex equipment, practical video image analysis and recognition algorithms are designed in this paper. The algorithms include dynamic registration of the panel and precise adjustment of components based on Normalized Product Correlation (NPC), operation-change detection based on dynamic analysis of Sobel edge strength, multi-state component recognition based on Laplacian edge-strength NPC matching, and nixie-tube reading based on the NMI feature; the basic steps of each algorithm are given. Application experiments demonstrate the efficiency of the proposed algorithms.
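Normalized Product Correlation, the matching measure used here for panel registration and component matching, is commonly computed as the inner product of patch and template normalized by their energies; a sketch under that assumption (the paper may use a zero-mean variant):

```python
import numpy as np

def npc_score(patch, template):
    """Normalized product correlation: sum(P*T) / sqrt(sum(P^2)*sum(T^2)).
    Returns 1.0 for a perfect (up to positive scale) match, 0.0 for a
    degenerate all-zero input."""
    a = patch.astype(np.float64).ravel()
    b = template.astype(np.float64).ravel()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float(a @ b / denom) if denom else 0.0
```

Registration would slide the template over candidate positions and keep the location with the maximum score.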
Citations: 0