Context-Sensitive Human Activity Classification in Collaborative Learning Environments
A. Jacoby, M. Pattichis, Sylvia Celedón-Pattichis, Carlos A. LópezLeiva
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), 8 April 2018. DOI: 10.1109/SSIAI.2018.8470331
Human activity classification remains challenging due to the need to eliminate structural noise, the multitude of possible activities, and strong variations in video acquisition. This paper studies human activity classification in a collaborative learning environment. It combines color-based object detection with contextualization of object interaction to isolate motion vectors specific to each human activity. The basic approach is to use a separate classifier for each activity; here, we consider the detection of typing, writing, and talking activities in raw video. The method was tested on 43 uncropped video clips containing 620 frames for writing, 1050 for typing, and 1755 for talking. Using simple KNN classifiers, the method achieved accuracies of 72.6% for writing, 71% for typing, and 84.6% for talking. With deep neural networks, classification accuracy improved to 92.5% (writing), 82.5% (typing), and 99.7% (talking).
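To make the per-activity classification stage concrete, below is a minimal sketch of one binary detector, assuming motion-vector features have already been extracted per frame from the color-based detection regions. The feature dimensionality, the synthetic data, and the train/test split are placeholders for illustration; this is not the authors' implementation.

```python
# A hedged sketch of a per-activity KNN detector: one independently trained
# binary classifier per activity (writing, typing, talking), as the paper's
# "separate classifiers for each activity" approach describes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each row holds motion-vector statistics for one
# frame; each label marks whether the activity (here, "writing") is present.
# 620 matches the writing frame count from the abstract; n_features is assumed.
n_frames, n_features = 620, 16
X = rng.normal(size=(n_frames, n_features))
y = rng.integers(0, 2, size=n_frames)  # 1 = activity present, 0 = absent

# Train the writing detector on its own data; typing and talking detectors
# would be built the same way on their own frames.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
writing_clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(f"writing-detector accuracy: {writing_clf.score(X_test, y_test):.3f}")
```

Keeping one detector per activity lets each classifier specialize on the motion patterns of a single action, at the cost of running several detectors over each clip.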