{"title":"基于人脸的视频摘要方法","authors":"R. Hari, C. P. Roopesh, M. Wilscy","doi":"10.1109/RAICS.2013.6745481","DOIUrl":null,"url":null,"abstract":"In video summarization, a short video clip is made from lengthy video without losing its semantic content using significant scenes containing important frames, called keyframes. This process finds importance in video content management systems. The proposed method involves automatic summarization of motion picture based on human face. In this method, those frames within which the appearances of an actor or actress, selected by the user, occurs are treated as keyframes. In the first step, the video is segmented into shots by Mutual Information. Then it detects the available faces in the frames of each shot using the local Successive Mean Quantization Transform (SMQT) features and Sparse Network of Winnows (SNoW) classifier. Then the face of an actor of interest is selected to match with different available faces, already extracted, using Eigenfaces method. A shot is taken into consideration, if the method succeeds in finding at least one matched face in the shot. The selected shots are finally combined to create summarized video.","PeriodicalId":184155,"journal":{"name":"2013 IEEE Recent Advances in Intelligent Computational Systems (RAICS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Human face based approach for video summarization\",\"authors\":\"R. Hari, C. P. Roopesh, M. Wilscy\",\"doi\":\"10.1109/RAICS.2013.6745481\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In video summarization, a short video clip is made from lengthy video without losing its semantic content using significant scenes containing important frames, called keyframes. This process finds importance in video content management systems. The proposed method involves automatic summarization of motion picture based on human face. In this method, those frames within which the appearances of an actor or actress, selected by the user, occurs are treated as keyframes. In the first step, the video is segmented into shots by Mutual Information. Then it detects the available faces in the frames of each shot using the local Successive Mean Quantization Transform (SMQT) features and Sparse Network of Winnows (SNoW) classifier. Then the face of an actor of interest is selected to match with different available faces, already extracted, using Eigenfaces method. A shot is taken into consideration, if the method succeeds in finding at least one matched face in the shot. 
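The face matching step can be sketched as follows: detected face crops are flattened, projected onto a PCA (eigenface) subspace, and compared with the projection of the user-selected actor's face by Euclidean distance; a shot is kept if any of its detected faces falls within a distance threshold. In this sketch an OpenCV Haar cascade stands in for the paper's SMQT/SNoW detector, and the crop size, number of components, and threshold are hypothetical values.

```python
# Sketch of the Eigenfaces matching step. The Haar cascade is a stand-in
# for the SMQT/SNoW detector; FACE_SIZE, n_components, and the distance
# threshold are illustrative assumptions.
import cv2
import numpy as np

FACE_SIZE = (64, 64)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray_frame):
    """Return fixed-size grayscale face crops found in one frame."""
    boxes = detector.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray_frame[y:y + h, x:x + w], FACE_SIZE)
            for (x, y, w, h) in boxes]

def fit_eigenfaces(face_crops, n_components=20):
    """PCA on flattened face crops; returns the mean face and eigenface basis."""
    X = np.stack([f.ravel().astype(np.float64) for f in face_crops])
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]        # eigenfaces as rows

def project(face, mean, eigenfaces):
    """Project a face crop onto the eigenface subspace."""
    return eigenfaces @ (face.ravel().astype(np.float64) - mean)

def shot_contains_actor(shot_faces, actor_face, mean, eigenfaces, threshold=2500.0):
    """A shot is kept if at least one detected face matches the selected actor."""
    target = project(actor_face, mean, eigenfaces)
    return any(np.linalg.norm(project(f, mean, eigenfaces) - target) < threshold
               for f in shot_faces)
```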
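The final step, combining the selected shots into the summary clip, could look like the sketch below: frames belonging to the retained shots are written out in order. The codec, the fallback frame rate, and the (start, end) shot representation are assumptions made for illustration.

```python
# Sketch of assembling the summary video from the selected shots.
# Shots are assumed to be non-overlapping (start, end) frame-index pairs.
import cv2

def write_summary(video_path, selected_shots, out_path="summary.avi"):
    """Write the frames of the selected shots, in order, to a summary video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"XVID"), fps, (w, h))
    keep = sorted(selected_shots)
    idx, shot = 0, 0
    while shot < len(keep):
        ok, frame = cap.read()
        if not ok:
            break
        start, end = keep[shot]
        if start <= idx < end:
            writer.write(frame)       # frame belongs to a selected shot
        if idx >= end - 1:
            shot += 1                 # move on to the next selected shot
        idx += 1
    writer.release()
    cap.release()
```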