
Latest publications: 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance

Statistical Background Modeling: An Edge Segment Based Moving Object Detection Approach
M. Murshed, Adín Ramírez Rivera, O. Chae
We propose an edge segment based statistical background modeling algorithm and a moving edge detection framework for the detection of moving objects. We analyze the performance of the proposed segment based statistical background model against traditional pixel based, edge pixel based and edge segment based approaches. Existing edge based moving object detection algorithms face difficulty due to changes in background motion, object shape, illumination and noise. The proposed algorithm makes efficient use of a statistical background model built on the edge-segment structure. Experiments with natural image sequences show that our method can detect moving objects efficiently under the above-mentioned conditions.
DOI: 10.1109/AVSS.2010.18 (published 2010-10-07)
Citations: 32
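The abstract above leaves the segment statistics unspecified. As a rough illustration only, the following is a pixel-level frequency model of edge background; the function names, the `bg_ratio` threshold, and the pixel-level (rather than segment-level) granularity are all assumptions, not the authors' method:

```python
import numpy as np

def update_edge_background(counts, edge_map, frames_seen):
    """Accumulate how often each pixel fires as an edge (a hypothetical
    pixel-level simplification of the paper's segment-based statistics)."""
    return counts + edge_map.astype(np.int64), frames_seen + 1

def moving_edges(counts, frames_seen, edge_map, bg_ratio=0.5):
    """Edges that fire rarely in the history are labelled as moving."""
    freq = counts / max(frames_seen, 1)
    return edge_map & (freq < bg_ratio)

# Toy run: a static edge at (0, 0) over 10 frames, then a new edge at (1, 1).
counts = np.zeros((3, 3), dtype=np.int64)
frames = 0
static = np.zeros((3, 3), dtype=bool)
static[0, 0] = True
for _ in range(10):
    counts, frames = update_edge_background(counts, static, frames)
new = static.copy()
new[1, 1] = True
mask = moving_edges(counts, frames, new)
print(mask[1, 1], mask[0, 0])  # the new edge is flagged as moving, the static one is not
```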
Who, what, when, where, why and how in video analysis: an application centric view
S. Guler, Jason A. Silverstein, Ian A. Pushee, Xiang Ma, Ashutosh Morde
This paper presents an end-user, application-centric view of surveillance video analysis and describes a flexible, extensible and modular approach to video content extraction. Various detection and extraction components, including tracking of moving objects, detection of text and faces, and face-based soft biometrics for gender, age and ethnicity classification, are described within the general framework of the real-time and post-event analysis applications Panoptes and VideoRecall. Some end-user applications built on this framework are discussed.
DOI: 10.1109/AVSS.2010.5767512 (published 2010-09-01)
Citations: 1
SVM-Based Biometric Authentication Using Intra-Body Propagation Signals
I. Nakanishi, Yuuta Sodani
The use of intra-body propagation signals for biometric authentication has been proposed. Intra-body propagation signals are hidden inside the human body; therefore, they are resistant to circumvention using artifacts. Additionally, utilizing signals inside the body enables liveness detection with no additional scheme. The problem, however, is that verification performance using the intra-body propagation signal is not very high. In this paper, in order to improve performance, we propose using user-specific frequency bands for all users in verification. The verification rate is improved to 70%. Furthermore, we introduce the support vector machine (SVM) into the verification process. A verification rate of about 86% is confirmed.
DOI: 10.1109/AVSS.2010.12 (published 2010-08-29)
Citations: 11
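The user-specific band-selection idea can be illustrated with a toy criterion. This sketch picks the frequency band that best separates one user's spectra from everyone else's using a crude Fisher-style score; the scoring rule, the band width, and all names are assumptions, since the abstract does not state how bands are chosen:

```python
import numpy as np

def best_band(user_spectra, other_spectra, band_width=4):
    """Scan windows of frequency bins and return the start index of the band
    whose mean magnitude best separates the user from the rest (hypothetical
    Fisher-style criterion)."""
    n_bins = user_spectra.shape[1]
    best, best_score = 0, -np.inf
    for start in range(0, n_bins - band_width + 1):
        u = user_spectra[:, start:start + band_width].mean(axis=1)
        o = other_spectra[:, start:start + band_width].mean(axis=1)
        score = abs(u.mean() - o.mean()) / (u.std() + o.std() + 1e-9)
        if score > best_score:
            best, best_score = start, score
    return best

# Synthetic spectra: the enrolled user has extra energy in bins 8-11.
rng = np.random.default_rng(0)
user = rng.normal(0.0, 0.1, (20, 32))
user[:, 8:12] += 5.0
others = rng.normal(0.0, 0.1, (20, 32))
start = best_band(user, others)
print(start)  # a band inside the user's energy peak
```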
Intelligent Video Systems: A Review of Performance Evaluation Metrics that Use Mapping Procedures
X. Desurmont, C. Carincotte, F. Brémond
In intelligent video systems, most recent advanced performance evaluation metrics include a stage that maps data between system results and ground truth. This paper reviews these metrics using a proposed framework, focusing on metrics for event detection, object detection and object tracking systems.
DOI: 10.1109/AVSS.2010.88 (published 2010-08-29)
Citations: 2
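The mapping stage these metrics share can be illustrated with a minimal greedy matcher between detections and ground truth. The IoU threshold, the greedy order, and the function names are illustrative assumptions, not any specific metric from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def greedy_map(detections, ground_truth, thr=0.5):
    """Greedy one-to-one mapping from detection index to ground-truth index,
    taking the highest-overlap pairs first."""
    pairs = sorted(((iou(d, g), i, j)
                    for i, d in enumerate(detections)
                    for j, g in enumerate(ground_truth)), reverse=True)
    used_d, used_g, mapping = set(), set(), {}
    for s, i, j in pairs:
        if s >= thr and i not in used_d and j not in used_g:
            mapping[i] = j
            used_d.add(i)
            used_g.add(j)
    return mapping

dets = [(0, 0, 10, 10), (50, 50, 60, 60)]
gts = [(1, 1, 11, 11), (200, 200, 210, 210)]
print(greedy_map(dets, gts))  # only the overlapping pair maps: {0: 0}
```

Unmapped detections then count as false positives and unmapped ground-truth objects as misses, which is the usual way such a mapping feeds a metric.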
Crowd Counting Using Group Tracking and Local Features
D. Ryan, S. Denman, C. Fookes, S. Sridharan
In public venues, crowd size is a key indicator of crowd safety and stability. In this paper we propose a crowd counting algorithm that uses tracking and local features to count the number of people in each group, as represented by a foreground blob segment, so that the total crowd estimate is the sum of the group sizes. Tracking is employed to improve the robustness of the estimate by analysing the history of each group, including splitting and merging events. A simplified ground truth annotation strategy results in an approach with minimal setup requirements that is highly accurate.
DOI: 10.1109/AVSS.2010.30 (published 2010-08-29)
Citations: 34
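The counting principle (total estimate = sum of per-group estimates) can be sketched briefly. The linear area-to-count model below is a stand-in assumption; the paper estimates group sizes from tracked local features, not raw blob area:

```python
def group_size(blob_area, px_per_person=400.0):
    """Estimate how many people one foreground blob contains (hypothetical
    linear area model; px_per_person is a made-up calibration constant)."""
    return max(1, round(blob_area / px_per_person))

def crowd_count(blob_areas):
    """Total crowd estimate is the sum of the per-group estimates."""
    return sum(group_size(a) for a in blob_areas)

# Three blobs of increasing size: 1 + 2 + 4 people.
print(crowd_count([350, 800, 1650]))  # 7
```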
Local Directional Pattern (LDP) – A Robust Image Descriptor for Object Recognition
T. Jabid, M. H. Kabir, O. Chae
This paper presents a novel local feature descriptor, the Local Directional Pattern (LDP), for describing local image features. An LDP feature is obtained by computing the edge response values in all eight directions at each pixel position and generating a code from the relative strength magnitudes. Each bit of the code sequence is determined by considering a local neighborhood and hence is robust in noisy situations. A rotation-invariant LDP code is also introduced, which uses the direction of the most prominent edge response. Finally, an image descriptor is formed to describe the image (or image region) by accumulating the occurrences of LDP features over the whole input image (or image region). Experimental results on the Brodatz texture database show that LDP impressively outperforms other commonly used dense descriptors (e.g., Gabor-wavelet and LBP).
DOI: 10.1109/AVSS.2010.17 (published 2010-08-29)
Citations: 114
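The coding step is concrete enough to sketch: compute the eight directional edge responses for a 3x3 patch (using the Kirsch masks commonly associated with LDP) and set the bits of the three strongest. The per-pixel image scan and histogram accumulation described in the abstract are omitted here, and k=3 is one common choice rather than a value taken from the abstract:

```python
import numpy as np

# Eight Kirsch masks: 45-degree rotations of the east mask, built by walking
# the three 5-coefficients around the ring of neighbour positions.
ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
KIRSCH = []
for rot in range(8):
    m = np.full((3, 3), -3)
    m[1, 1] = 0
    for t in (2, 3, 4):  # east mask has its 5s at ring positions 2, 3, 4
        m[ring[(t + rot) % 8]] = 5
    KIRSCH.append(m)

def ldp_code(patch, k=3):
    """LDP code of one 3x3 patch: absolute Kirsch responses in all eight
    directions, with the bits of the k strongest set."""
    resp = np.array([np.sum(patch * m) for m in KIRSCH])
    code = 0
    for i in np.argsort(np.abs(resp))[-k:]:
        code |= 1 << int(i)
    return code

patch = np.array([[9.0, 9.0, 9.0],
                  [9.0, 9.0, 9.0],
                  [0.0, 0.0, 0.0]])  # a horizontal edge
code = ldp_code(patch)
print(f"{code:08b}")  # exactly three bits set, one per strong direction
```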
Pose Estimation of Interacting People using Pictorial Structures
P. Fihl, T. Moeslund
Pose estimation of people has made great progress in recent years, but so far research has dealt with single persons. In this paper we address some of the challenges that arise when doing pose estimation of interacting people. We build on the pictorial structures framework and make important contributions by combining color-based appearance and edge information using a measure of the local quality of the appearance feature. In this way we not only combine the two types of features but dynamically find the optimal weighting of them. We further enable the method to handle occlusions by searching a foreground mask for possibly occluded body parts and then applying extra strong kinematic constraints to find the truly occluded body parts. The effects of applying our two contributions are shown through both qualitative and quantitative tests and demonstrate a clear improvement in the ability to correctly localize body parts.
DOI: 10.1109/AVSS.2010.27 (published 2010-08-29)
Citations: 8
Traffic Abnormality Detection through Directional Motion Behavior Map
Nan Dong, Zhen Jia, Jie Shao, Ziyou Xiong, Zhi-peng Li, Fuqiang Liu, Jianwei Zhao, Pei-Yuan Peng
Automatic traffic abnormality detection through visual surveillance is one of the critical requirements for Intelligent Transportation Systems (ITS). In this paper, we present a novel algorithm to detect abnormal traffic events in crowded scenes. Our algorithm can be deployed with few setup steps to automatically monitor traffic status. Different from other approaches, we do not need to define regions of interest (ROI) or tripwires, nor to configure object detection and tracking parameters. A novel object behavior descriptor, the directional motion behavior descriptor, is proposed. The directional motion behavior descriptors collect foreground objects' direction and speed information from a video sequence with normal traffic events, and these descriptors are then accumulated to generate a directional motion behavior map which models the normal traffic status. During detection, we first extract the directional motion behavior map from the newly observed video and then measure the differences between the normal behavior map and the new map. If new directional motion behaviors are very different from the descriptors in the normal behavior map, then the corresponding regions in the observed video contain traffic abnormalities. Our proposed algorithm has been tested using both synthesized and real surveillance videos. Experimental results demonstrate that our algorithm is effective and efficient for practical real-time traffic surveillance applications.
DOI: 10.1109/AVSS.2010.61 (published 2010-08-29)
Citations: 15
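The map-building and comparison steps can be sketched with a per-cell direction histogram. The grid size, the number of direction bins, and the support threshold below are all assumptions; the paper also uses speed, which this sketch omits:

```python
import numpy as np

def behavior_map(samples, grid=(4, 4), n_dirs=8):
    """Accumulate quantised motion directions per grid cell from normal
    traffic; each sample is (cell_y, cell_x, angle_in_radians)."""
    m = np.zeros(grid + (n_dirs,))
    for cy, cx, ang in samples:
        b = int(ang % (2 * np.pi) // (2 * np.pi / n_dirs))
        m[cy, cx, b] += 1
    return m

def abnormal(m, cy, cx, ang, n_dirs=8, min_support=1):
    """Flag a direction never (or rarely) seen in that cell as abnormal."""
    b = int(ang % (2 * np.pi) // (2 * np.pi / n_dirs))
    return m[cy, cx, b] < min_support

# Training: traffic in cell (0, 0) always flows east (angle 0).
normal = [(0, 0, 0.0)] * 50
m = behavior_map(normal)
# Eastbound motion matches the map; westbound motion is flagged.
print(abnormal(m, 0, 0, 0.0), abnormal(m, 0, 0, np.pi))
```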
MuHAVi: A Multicamera Human Action Video Dataset for the Evaluation of Action Recognition Methods
Sanchit Singh, S. Velastín, Hossein Ragheb
This paper describes a body of multicamera human action video data with manually annotated silhouette data that has been generated for the purpose of evaluating silhouette-based human action recognition methods. It provides a realistic challenge to both the segmentation and human action recognition communities and can act as a benchmark to objectively compare proposed algorithms. The public multi-camera, multi-action dataset is an improvement over existing datasets (e.g. PETS, CAVIAR, soccer dataset) that have not been developed specifically for human action recognition, and it complements other action recognition datasets (KTH, Weizmann, IXMAS, HumanEva, CMU Motion). It consists of 17 action classes, 14 actors and 8 cameras. Each actor performs an action several times in the action zone. The paper describes the dataset and illustrates a possible approach to algorithm evaluation using a previously published simple action recognition method. In addition to showing an evaluation methodology, these results establish a baseline for other researchers to improve upon.
DOI: 10.1109/AVSS.2010.63 (published 2010-08-29)
Citations: 183
Multi-Camera Analysis of Soccer Sequences
C. Poppe, S. D. Bruyne, S. Verstockt, R. Walle
The automatic detection of meaningful phases in a soccer game depends on the accurate localization of players and the ball at each moment. However, the automatic analysis of soccer sequences is a challenging task due to the presence of multiple fast-moving objects. For this purpose, we present a multi-camera analysis system that yields the position of the ball and players on a common ground plane. The detection in each camera is based on a codebook algorithm, and different features are used to classify the detected blobs. The detection results of each camera are transformed using homography to a virtual top view of the playing field. Within this virtual top view we merge the trajectory information of the different cameras, allowing us to refine the found positions. In this paper, we evaluate the system on a public SOCCER dataset and end with a discussion of possible improvements of the dataset.
DOI: 10.1109/AVSS.2010.64 (published 2010-08-29)
Citations: 16
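The homography projection step can be sketched in plain NumPy: each camera's detections are lifted to homogeneous coordinates, multiplied by the 3x3 homography, and de-homogenised onto the ground plane. The matrix below is a made-up pure translation for illustration; in practice H would be calibrated per camera from field landmarks:

```python
import numpy as np

def to_top_view(H, points):
    """Map (N, 2) image points to ground-plane coordinates with a 3x3
    homography matrix H."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # de-homogenise

# A hypothetical homography that just shifts everything by (5, -2).
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
print(to_top_view(H, np.array([[10.0, 10.0]])))  # maps (10, 10) to (15, 8)
```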