A Novel Unsupervised Method for Temporal Segmentation of Videos
Xiangbin Shi, Yaguang Lu, Cuiwei Liu, Deyuan Zhang, Fang Liu
2017 International Conference on Virtual Reality and Visualization (ICVRV), October 2017
DOI: 10.1109/ICVRV.2017.00017
Citations: 0
Abstract
In this paper, we address the problem of temporal segmentation of videos. Videos acquired in the real world usually contain several consecutive actions. Some prior works divide such real-world videos into many fixed-length clips, since features extracted from a single frame cannot fully describe human motion over a period of time. However, a fixed-length clip may contain frames from several adjacent actions, which significantly degrades the performance of action segmentation and recognition. Here we propose a novel unsupervised method based on the directions of velocity that divides an input video into a series of variable-length clips. Experiments conducted on the IXMAS dataset verify the effectiveness of our method.
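The abstract does not spell out the segmentation criterion, but the general idea it names, cutting a video wherever the direction of motion changes sharply, can be illustrated with a minimal sketch. The function name, the 2D per-frame velocity input, and the angular threshold below are all assumptions made for illustration, not the paper's actual procedure:

```python
import math

def segment_by_velocity_direction(velocities, angle_thresh=math.pi / 4):
    """Split a sequence of per-frame 2D velocity vectors into clips,
    starting a new clip whenever the motion direction changes by more
    than angle_thresh. Returns (start, end) index pairs, end exclusive."""
    boundaries = [0]
    prev_angle = None
    for i, (vx, vy) in enumerate(velocities):
        if vx == 0 and vy == 0:
            continue  # skip frames with no motion
        angle = math.atan2(vy, vx)
        if prev_angle is not None:
            # smallest absolute difference between the two directions
            diff = abs(math.atan2(math.sin(angle - prev_angle),
                                  math.cos(angle - prev_angle)))
            if diff > angle_thresh:
                boundaries.append(i)
        prev_angle = angle
    boundaries.append(len(velocities))
    return list(zip(boundaries[:-1], boundaries[1:]))

# Example: five frames moving right, then five frames moving up
vels = [(1, 0)] * 5 + [(0, 1)] * 5
print(segment_by_velocity_direction(vels))  # [(0, 5), (5, 10)]
```

This yields clips whose lengths follow the motion itself rather than a fixed window, which is the property the abstract argues matters for downstream action recognition.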