{"title":"MuHAVi: A Multicamera Human Action Video Dataset for the Evaluation of Action Recognition Methods","authors":"Sanchit Singh, S. Velastín, Hossein Ragheb","doi":"10.1109/AVSS.2010.63","DOIUrl":null,"url":null,"abstract":"This paper describes a body of multicamera humanaction video data with manually annotated silhouette datathat has been generated for the purpose of evaluatingsilhouette-based human action recognition methods. Itprovides a realistic challenge to both the segmentationand human action recognition communities and can act asa benchmark to objectively compare proposed algorithms.The public multi-camera, multi-action dataset is animprovement over existing datasets (e.g. PETS, CAVIAR,soccerdataset) that have not been developed specificallyfor human action recognition and complements otheraction recognition datasets (KTH, Weizmann, IXMAS,HumanEva, CMU Motion). It consists of 17 action classes,14 actors and 8 cameras. Each actor performs an actionseveral times in the action zone. The paper describes thedataset and illustrates a possible approach to algorithmevaluation using a previously published action simplerecognition method. In addition to showing an evaluationmethodology, these results establish a baseline for otherresearchers to improve upon.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"63 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"183","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AVSS.2010.63","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 183
Abstract
This paper describes a body of multicamera human action video data with manually annotated silhouette data that has been generated for the purpose of evaluating silhouette-based human action recognition methods. It provides a realistic challenge to both the segmentation and human action recognition communities and can act as a benchmark to objectively compare proposed algorithms. The public multi-camera, multi-action dataset is an improvement over existing datasets (e.g. PETS, CAVIAR, soccer dataset) that have not been developed specifically for human action recognition, and complements other action recognition datasets (KTH, Weizmann, IXMAS, HumanEva, CMU Motion). It consists of 17 action classes, 14 actors and 8 cameras. Each actor performs an action several times in the action zone. The paper describes the dataset and illustrates a possible approach to algorithm evaluation using a previously published simple action recognition method. In addition to showing an evaluation methodology, these results establish a baseline for other researchers to improve upon.
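As a rough illustration of how a dataset with this structure (17 action classes, 14 actors, 8 cameras, each abstracted here as a clip per combination) might be enumerated and split for evaluation, here is a minimal sketch. The class, actor, and camera counts come from the abstract; the clip representation, the `clips` and `leave_one_actor_out` helpers, and the leave-one-actor-out protocol itself are illustrative assumptions, not the paper's actual file layout or evaluation procedure.

```python
# Hypothetical sketch: enumerate (action, actor, camera) combinations and
# build actor-held-out splits. Counts are from the paper; everything else
# (clip IDs, split protocol) is an assumption for illustration only.
from itertools import product

NUM_ACTIONS = 17   # action classes reported in the paper
NUM_ACTORS = 14    # actors reported in the paper
NUM_CAMERAS = 8    # cameras reported in the paper


def clips():
    """Yield placeholder clip records, one per (action, actor, camera) triple."""
    for action, actor, camera in product(range(NUM_ACTIONS),
                                         range(NUM_ACTORS),
                                         range(NUM_CAMERAS)):
        yield {"action": action, "actor": actor, "camera": camera}


def leave_one_actor_out():
    """Yield (train, test) splits, holding out one actor per fold."""
    all_clips = list(clips())
    for held_out in range(NUM_ACTORS):
        train = [c for c in all_clips if c["actor"] != held_out]
        test = [c for c in all_clips if c["actor"] == held_out]
        yield train, test


if __name__ == "__main__":
    for fold, (train, test) in enumerate(leave_one_actor_out()):
        print(f"fold {fold}: {len(train)} train clips, {len(test)} test clips")
```

Actor-disjoint splits of this kind are one common way to test generalisation to unseen subjects; the paper itself should be consulted for the evaluation protocol actually used with MuHAVi.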