Summarization of Wearable Videos Based on User Activity Analysis
R. Katpelly, Tiecheng Liu, Chin-Tser Huang
Ninth IEEE International Symposium on Multimedia (ISM 2007), published 2007-12-10
DOI: 10.1109/ISM.2007.16 (https://doi.org/10.1109/ISM.2007.16)
Citations: 0
Abstract
This paper presents a model for automatic summarization of videos recorded by wearable cameras. The proposed model detects various user activities by computing the transform between matching image features across video frames. Four basic types of user activities are proposed: "moving closer/farther", "panning", "making a turn", and "rotation". Different summarization techniques are provided for each activity type, and a wearable video sequence can be summarized as a compact set of panoramic images. The user activity analysis is based solely on image analysis, without relying on information from other sensors. Experimental results on a 19-minute video sequence demonstrate the effectiveness of the proposed model.
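The abstract describes classifying user activity from the geometric transform estimated between matched frame features. As a minimal sketch of that idea (the thresholds, parameter names, and the decomposition are assumptions, not the paper's actual method): given an affine transform fitted between two frames, its scale component suggests "moving closer/farther", its in-plane rotation angle suggests "rotation", and a dominant translation suggests "panning" (a "turn" would be a sustained pan accumulated over many frames).

```python
import math

def decompose_affine(a, b, tx, c, d, ty):
    """Approximate a 2x3 affine transform [[a, b, tx], [c, d, ty]]
    as a similarity: uniform scale plus in-plane rotation (degrees)."""
    scale = math.sqrt(abs(a * d - b * c))      # area change of the 2x2 block
    theta = math.degrees(math.atan2(c, a))     # in-plane rotation angle
    return scale, theta, tx, ty

def classify_activity(a, b, tx, c, d, ty,
                      scale_thresh=0.05, rot_thresh=5.0, trans_thresh=10.0):
    """Map an inter-frame transform to one coarse activity label.
    Thresholds are illustrative assumptions (pixels / degrees)."""
    scale, theta, tx, ty = decompose_affine(a, b, tx, c, d, ty)
    if abs(scale - 1.0) > scale_thresh:
        return "moving closer/farther"         # zoom-like scale change
    if abs(theta) > rot_thresh:
        return "rotation"                      # camera roll
    if abs(tx) > trans_thresh or abs(ty) > trans_thresh:
        return "panning"                       # a sustained pan would mark a turn
    return "static"
```

For example, a pure 20% scale-up (`a=d=1.2, b=c=0, tx=ty=0`) is labeled "moving closer/farther", while a large horizontal shift with an identity 2x2 block is labeled "panning". In practice the transform itself would be estimated from matched features (e.g. via RANSAC over keypoint correspondences), which the abstract only describes at a high level.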