
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance — Latest Publications

Automatic Detection and Reading of Dangerous Goods Plates
P. Roth, Martin Köstinger, Paul Wohlhart, H. Bischof, J. Birchbauer
In this paper, we present an efficient solution for automatic detection and reading of dangerous goods plates on trucks and trains. According to the ADR agreement, dangerous goods transports are marked with an orange plate covering the hazard class and the identification number for the hazardous substances. Since under real-world conditions high-resolution images (often at low quality) have to be processed, an efficient and robust system is required. In particular, we propose a multi-stage system consisting of an acquisition step, a saliency region detector (to reduce the run-time), a plate detector, and a robust recognition step based on Optical Character Recognition (OCR). To demonstrate the system, we show qualitative and quantitative localization/recognition results on two challenging datasets. In fact, building on proven robust and efficient methods, we show excellent detection and classification results under hard environmental conditions at low run-time.
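Though the paper focuses on detection and OCR, the content of an ADR plate itself is fixed by the agreement: the top row carries the hazard identification number (Kemler code, two or three digits, optionally prefixed with "X" for substances that react dangerously with water) and the bottom row the four-digit UN number of the substance. A minimal sketch of validating OCR output against that layout — the function name and error handling are illustrative, not part of the paper:

```python
def parse_adr_plate(top: str, bottom: str) -> dict:
    """Validate the two OCR'd rows of an ADR orange plate.

    top:    hazard identification number (Kemler code): 2-3 digits,
            optionally prefixed with 'X' (reacts dangerously with water).
    bottom: 4-digit UN number identifying the substance.
    """
    water_reactive = top.startswith("X")
    code = top[1:] if water_reactive else top
    if not (code.isdigit() and 2 <= len(code) <= 3):
        raise ValueError("bad hazard identification number: %r" % top)
    if not (bottom.isdigit() and len(bottom) == 4):
        raise ValueError("bad UN number: %r" % bottom)
    return {
        "hazard_id": code,
        "primary_hazard": int(code[0]),  # first digit = main hazard class
        "water_reactive": water_reactive,
        "un_number": bottom,
    }
```

For example, a plate reading 33 over 1203 parses to primary hazard class 3 (flammable liquid).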
DOI: 10.1109/AVSS.2010.28 (published 2010-08-29)
Citations: 7
Trajectory Based Activity Discovery
Guido Pusiol, F. Brémond, M. Thonnat
This paper proposes a framework to discover activities in an unsupervised manner, and add semantics with minimal supervision. The framework uses basic trajectory information as input and goes up to video interpretation. The work reduces the gap between low-level information and semantic interpretation, building an intermediate layer composed of Primitive Events. The proposed representation for primitive events aims at capturing small meaningful motions over the scene with the advantage of being learnt in an unsupervised manner. We propose the discovery of an activity using these Primitive Events as the main descriptors. The activity discovery is done using only real tracking data. Semantics are added to the discovered activities, and the recognition of activities (e.g., "Cooking", "Eating") can be automatically done with new datasets. Finally we validate the descriptors by discovering and recognizing activities in a home care application dataset.
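The primitive-event idea — small meaningful motions extracted from raw trajectories — can be caricatured by quantizing trajectory points into scene zones and emitting a transition event whenever the zone changes. The paper learns its zones in an unsupervised manner; the fixed grid below is a deliberate simplification and the function name is hypothetical:

```python
def primitive_events(trajectory, cell=50):
    """Quantize a 2-D trajectory (pixel coordinates) into grid cells
    and emit a primitive event for every transition between cells."""
    cells = [(int(x) // cell, int(y) // cell) for x, y in trajectory]
    return [(prev, cur) for prev, cur in zip(cells, cells[1:]) if cur != prev]
```

An activity is then a sequence of such events, which can be compared or clustered across videos.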
DOI: 10.1109/AVSS.2010.15 (published 2010-08-29)
Citations: 10
A Local Directional Pattern Variance (LDPv) Based Face Descriptor for Human Facial Expression Recognition
M. H. Kabir, T. Jabid, O. Chae
Automatic facial expression recognition is a challenging problem in computer vision, and has gained significant importance in applications of human-computer interaction. This paper presents a new appearance-based feature descriptor, the Local Directional Pattern Variance (LDPv), to represent facial components for human expression recognition. In contrast with LDP, the proposed LDPv introduces the local variance of directional responses to encode the contrast information within the descriptor. Here, the LDPv representation characterizes both spatial structure and contrast information of each micro-pattern. Template matching and a Support Vector Machine (SVM) classifier are used to classify the LDPv feature vectors of different prototypic expression images. Experimental results using the Cohn-Kanade database show that the LDPv descriptor yields an improved recognition rate compared to existing appearance-based feature descriptors, such as the Gabor wavelet and Local Binary Pattern (LBP).
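The per-pixel LDP/LDPv computation is concrete enough to sketch: convolve the 3x3 neighbourhood with the eight Kirsch masks, set a bit for the k strongest absolute responses (k = 3 is the usual LDP choice), and — the LDPv addition — use the variance of the responses as the weight the pixel contributes to its histogram bin. A pure-Python sketch under those assumptions; the implementation details are not taken from the paper:

```python
def kirsch_responses(patch):
    """Eight Kirsch edge responses for a 3x3 grayscale patch: each mask
    puts +5 on three consecutive border cells and -3 on the other five."""
    border = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
              patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return [sum((5 if i in {k % 8, (k + 1) % 8, (k + 2) % 8} else -3) * v
                for i, v in enumerate(border)) for k in range(8)]

def ldp_code_and_variance(patch, k=3):
    """LDP code (bits of the k strongest |responses|) plus the response
    variance used by LDPv to weight the code's histogram contribution."""
    resp = kirsch_responses(patch)
    top = sorted(range(8), key=lambda i: abs(resp[i]), reverse=True)[:k]
    code = sum(1 << i for i in top)
    mean = sum(resp) / 8.0
    var = sum((r - mean) ** 2 for r in resp) / 8.0
    return code, var
```

A flat patch yields zero variance (so it barely contributes to the descriptor), while a strong edge yields both a directional code and a large weight.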
DOI: 10.1109/AVSS.2010.9 (published 2010-08-29)
Citations: 77
Performance Evaluation of a People Tracking System on PETS2009 Database
Donatello Conte, P. Foggia, G. Percannella, M. Vento
In this paper a system for autonomous video surveillance in relatively unconstrained environments is described. The system consists of two principal phases: object detection and object tracking. An adaptive background subtraction, together with a set of corrective algorithms, is used to cope with variable lighting, dynamic and articulated scenes, etc. The tracking algorithm is based on a matrix representation of the problem, and is used to face splitting and occlusion problems. When the tracking algorithm fails in following actual object trajectories, an appearance-based module is used to restore object identities. An experimental evaluation, carried out on the PETS2009 dataset for tracking, shows promising results.
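The matrix representation of tracking can be sketched as a binary overlap matrix between current tracks and new detections, whose row and column sums expose the split/merge/occlusion cases the abstract mentions. This is a generic reconstruction of the idea, not the paper's exact formulation:

```python
def overlaps(a, b):
    """Axis-aligned boxes (x1, y1, x2, y2); True if they intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def associate(tracks, detections):
    """Binary association matrix M[i][j] = 1 if track i overlaps
    detection j; row/column sums label the classic tracking events."""
    M = [[int(overlaps(t, d)) for d in detections] for t in tracks]
    events = {}
    for i, row in enumerate(M):
        n = sum(row)
        events["track", i] = "lost" if n == 0 else "matched" if n == 1 else "split"
    for j in range(len(detections)):
        n = sum(M[i][j] for i in range(len(tracks)))
        events["detection", j] = "new" if n == 0 else "matched" if n == 1 else "merged"
    return M, events
```

A "split" row or "merged" column is exactly the case where an appearance-based identity module would take over.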
DOI: 10.1109/AVSS.2010.87 (published 2010-08-29)
Citations: 40
A Framework for an Event Driven Video Surveillance System
Declan F. Kieran, Weiqi Yan
In this paper we present an event driven surveillance system. The purpose of this system is to enable thorough exploration of surveillance events. The system uses a client-server web architecture, as this provides scalability for further development of the system infrastructure. The system is designed to be accessed by surveillance operators who can review and comment on events generated by our event detection processing modules. The presentation interface is based around a cross between Gmail and YouTube, as we believe these interfaces to be intuitive for ordinary computer operators. Our motivation is to fully utilize the events archived in our database and to further refine the relevant events. We do not just focus on event detection, but are working towards the optimization of event detection. To the best of our knowledge this system provides a novel approach to the technological surveillance paradigm.
DOI: 10.1109/AVSS.2010.57 (published 2010-08-29)
Citations: 33
Learning of Scene-Specific Object Detectors by Classifier Co-Grids
Sabine Sternig, P. Roth, H. Bischof
Recently, classifier grids have shown to be a considerable alternative to sliding window approaches for object detection from static cameras. The main drawback of such methods is that they are biased by the initial model. In fact, the classifiers can be adapted to changing environmental conditions, but due to conservative updates no new object-specific information is acquired. Thus, the goal of this work is to increase the recall of scene-specific classifiers while preserving their accuracy and speed. In particular, we introduce a co-training strategy for classifier grids using a robust on-line learner. Thus, the robustness is preserved while the recall can be increased. The co-training strategy robustly provides negative as well as positive updates. In addition, the number of negative updates can be drastically reduced, which additionally speeds up the system. In the experimental results these benefits are demonstrated on different publicly available surveillance benchmark data sets.
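The core of a classifier grid — one scene-specific classifier per image location, updated conservatively so the model is not polluted by foreground objects — can be sketched with a toy per-cell background prototype. The paper's actual contribution (a second, robust on-line learner supplying co-training updates) is omitted here; every name and threshold below is hypothetical:

```python
class ClassifierGrid:
    """One tiny classifier per grid cell: a running background
    prototype; patches far from it are classified as object.
    Conservative update: the prototype only moves on background."""

    def __init__(self, cells, dim, thresh=30.0, lr=0.05):
        self.proto = [[0.0] * dim for _ in range(cells)]
        self.seen = [False] * cells
        self.thresh, self.lr = thresh, lr

    def update_and_classify(self, cell, patch):
        if not self.seen[cell]:
            self.proto[cell] = list(patch)
            self.seen[cell] = True
            return "background"
        p = self.proto[cell]
        if max(abs(a - b) for a, b in zip(p, patch)) > self.thresh:
            return "object"  # conservative: never learn from objects
        self.proto[cell] = [a + self.lr * (b - a) for a, b in zip(p, patch)]
        return "background"
```

The bias the abstract describes is visible here: without an extra source of positive updates, the cell can only ever drift with the background.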
DOI: 10.1109/AVSS.2010.10 (published 2010-08-29)
Citations: 7
Counting People in Crowded Environments by Fusion of Shape and Motion Information
Michael Pätzold, Rubén Heras Evangelio, T. Sikora
Knowing the number of people in a crowded scene is of great interest in surveillance. In the past, this problem has been tackled mostly in an indirect, statistical way. This paper presents a direct, counting-by-detection method based on fusing spatial information received from an adapted Histogram of Oriented Gradients (HOG) algorithm with temporal information, by exploiting distinctive motion characteristics of different human body parts. For that purpose, this paper defines a measure for uniformity of motion. Furthermore, the system performance is enhanced by validating the resulting human hypotheses by tracking and applying a coherent motion detection. The approach is illustrated with an experimental evaluation.
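A "measure for uniformity of motion" can take many forms; the paper defines its own, but one plausible stand-in maps the variance of the motion vectors inside a region to (0, 1] — heads move rigidly with the body (uniform), limbs swing (non-uniform):

```python
def motion_uniformity(flows):
    """Map a set of 2-D motion vectors to (0, 1]: 1.0 when all
    vectors agree, decreasing toward 0 as they spread out."""
    n = len(flows)
    mx = sum(v[0] for v in flows) / n
    my = sum(v[1] for v in flows) / n
    var = sum((v[0] - mx) ** 2 + (v[1] - my) ** 2 for v in flows) / n
    return 1.0 / (1.0 + var)
```

A HOG head hypothesis whose region scores high on such a measure is more likely a true person than a false positive on swaying background.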
DOI: 10.1109/AVSS.2010.92 (published 2010-08-29)
Citations: 49
Group Level Activity Recognition in Crowded Environments across Multiple Cameras
Ming-Ching Chang, N. Krahnstoever, Ser-Nam Lim, Ting Yu
Environments such as schools, public parks and prisons and others that contain a large number of people are typically characterized by frequent and complex social interactions. In order to identify activities and behaviors in such environments, it is necessary to understand the interactions that take place at a group level. To this end, this paper addresses the problem of detecting and predicting suspicious and in particular aggressive behaviors between groups of individuals, such as gangs in prison yards. The work builds on a mature multi-camera multi-target person tracking system that operates in real-time and has the ability to handle crowded conditions. We consider two approaches for grouping individuals: (i) agglomerative clustering, favored by the computer vision community, as well as (ii) divisive clustering based on the concept of modularity, which is favored by the social network analysis community. We show the utility of such grouping analysis towards the detection of group activities of interest. The presented algorithm is integrated with a system operating in real-time to successfully detect highly realistic aggressive behaviors enacted by correctional officers in a simulated prison environment. We present results from these enactments that demonstrate the efficacy of our approach.
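Of the two grouping approaches, the agglomerative one is easy to sketch: single-linkage clustering of tracked ground-plane positions at a fixed distance cut, implemented with union-find. The modularity-based alternative needs a social-interaction graph and is omitted; the radius and names below are illustrative, not taken from the paper:

```python
def group_by_distance(positions, radius):
    """Single-linkage agglomerative grouping: individuals closer than
    `radius` end up in the same group (union-find with path halving)."""
    n = len(positions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = positions[i], positions[j]
            if ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= radius:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```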
DOI: 10.1109/AVSS.2010.65 (published 2010-08-29)
Citations: 64
Background Subtraction under Sudden Illumination Changes
L. Vosters, Caifeng Shan, T. Gritti
Robust background subtraction under sudden illumination changes is a challenging problem. In this paper, we propose an approach to address this issue, which combines the Eigenbackground algorithm with a statistical illumination model. The first algorithm is used to give a rough reconstruction of the input frame, while the second one improves the foreground segmentation. We introduce an online spatial likelihood model by detecting reliable background and foreground pixels. Experimental results illustrate that our approach achieves consistently higher accuracy compared to several state-of-the-art algorithms.
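The paper combines an Eigenbackground reconstruction with a statistical illumination model; as a far smaller illustration of why an illumination term matters at all, the toy below models a sudden change as a single global gain (the median frame/background pixel ratio) before thresholding. Frames are flattened pixel lists and the threshold is arbitrary — this is not the paper's method:

```python
def subtract_with_gain(frame, background, thresh=25):
    """Background subtraction with a one-parameter illumination model:
    estimate a global gain g as the median pixel ratio, then compare
    the frame against g * background. The gain absorbs sudden uniform
    lighting changes that would otherwise flood the foreground mask."""
    ratios = sorted(f / b for f, b in zip(frame, background) if b > 0)
    g = ratios[len(ratios) // 2]  # median ratio ≈ illumination gain
    return [abs(f - g * b) > thresh for f, b in zip(frame, background)]
```

Without the gain, a 1.5x brightening would mark every pixel as foreground; with it, only the genuinely changed pixel survives.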
DOI: 10.1109/AVSS.2010.72 (published 2010-08-29)
Citations: 41
Thirteen Hard Cases in Visual Tracking
D. M. Chu, A. Smeulders
Visual tracking is a fundamental task in computer vision. However, there has been no systematic way of analyzing visual trackers so far. In this paper we propose a method that can help researchers determine the strengths and weaknesses of any visual tracker. To this end, we consider visual tracking as an isolated problem and decompose it into fundamental and independent subproblems. Each subproblem is designed to associate with a different tracking circumstance. By evaluating a visual tracker on a specific subproblem, we can determine how good it is with respect to that dimension. In total we come up with thirteen subproblems in our decomposition. We demonstrate the use of our proposed method by analyzing the working conditions of two state-of-the-art trackers.
DOI: 10.1109/AVSS.2010.85 (published 2010-08-29)
Citations: 31