Video scene title generation based on explicit and implicit relations among caption words
Jeong-Woo Son, Wonjoo Park, Sang-Yun Lee, Sun-Joong Kim
2018 20th International Conference on Advanced Communication Technology (ICACT), February 2018
DOI: 10.23919/ICACT.2018.8323835
Citations: 0
Abstract
Titles are among the most important pieces of video metadata for providing various services. Recently, services based on scenes or video fragments have been launched. Since the videos in such services are often generated automatically by segmenting a longer video, these fragments do not come with their own titles; instead, titles are annotated by hand. To reduce the cost of manual title annotation, this paper proposes a novel method that generates video titles by selecting informative sentences from closed captions. The proposed method exploits explicit and implicit relations among the words occurring in closed captions by constructing a stochastic matrix. It then selects important words based on their weights as estimated with TextRank, and a title is generated by choosing a caption sentence that contains these important words. Experimental results show several cases with well-known Korean TV programs.
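The abstract outlines a three-step pipeline: build a stochastic word-relation matrix from closed captions, weight the words with TextRank, and pick the caption sentence that best covers the highly weighted words. The sketch below is a rough illustration of that pipeline only; it approximates the paper's explicit and implicit relations with simple within-sentence co-occurrence counts, and names such as `build_stochastic_matrix` and `select_title` are hypothetical rather than taken from the paper.

```python
# Minimal sketch of caption-based title selection via TextRank.
# Assumption (not from the paper): word relations are approximated by
# within-sentence co-occurrence; the title is the caption sentence whose
# words carry the largest total TextRank weight.
import itertools

import numpy as np


def build_stochastic_matrix(sentences):
    """Build a column-stochastic word-relation matrix from tokenized sentences."""
    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)
    counts = np.zeros((n, n))
    for sent in sentences:
        for a, b in itertools.permutations(set(sent), 2):
            counts[index[a], index[b]] += 1.0
    col_sums = counts.sum(axis=0)
    col_sums[col_sums == 0] = 1.0  # avoid division by zero for isolated words
    return counts / col_sums, vocab


def textrank(matrix, damping=0.85, iters=50):
    """Estimate word weights by power iteration (PageRank-style TextRank)."""
    n = matrix.shape[0]
    weights = np.full(n, 1.0 / n)
    for _ in range(iters):
        weights = (1 - damping) / n + damping * matrix @ weights
    return weights


def select_title(sentences, top_k=10):
    """Pick the caption sentence that covers the most heavily weighted words."""
    matrix, vocab = build_stochastic_matrix(sentences)
    weights = dict(zip(vocab, textrank(matrix)))
    important = set(sorted(vocab, key=lambda w: -weights[w])[:top_k])
    return max(sentences, key=lambda s: sum(weights[w] for w in set(s) & important))


if __name__ == "__main__":
    captions = [
        "the chef tastes the broth".split(),
        "the broth needs more salt says the chef".split(),
        "guests arrive at the restaurant".split(),
    ]
    print(" ".join(select_title(captions)))
```

In this toy run the selected sentence is simply the caption whose words accumulate the highest TextRank weight; the paper's actual relation matrix and sentence-scoring details may differ.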