Automatic Keyword Extraction for Viewport Prediction of 360-degree Virtual Tourism Video
Long Doan, Tho Nguyen Duc, Chuanzhe Jing, E. Kamioka, Phan Xuan Tan
2022 IEEE International Conference on Computing (ICOCO), 14 November 2022. DOI: 10.1109/ICOCO56118.2022.10032026
Abstract
In 360-degree video streaming, viewport prediction can reduce the bandwidth needed during the stream while still maintaining a high-quality experience by streaming only the area visible to the user. Existing research in viewport prediction aims to predict the user’s viewport using data from the user’s head movement trajectory, video saliency, and the video’s subtitles. While these subtitles can contain much of the information necessary for viewport prediction, previous studies could only extract this information manually, which requires in-depth knowledge of the video’s topic. Moreover, manual extraction can still miss important keywords in the subtitles, limiting the accuracy of the viewport prediction. In this paper, we focus on automating this extraction process by proposing three automatic keyword extraction methods: Adverb, NER (named entity recognition), and Adverb+NER. We provide an analysis demonstrating the effectiveness of our automatic methods compared to manual keyword extraction. We also incorporate our methods into an existing viewport prediction model to improve prediction accuracy. The experimental results show that the model with our automatic keyword extraction methods outperforms baseline methods that use only manually extracted information.
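To make the three extraction variants concrete, the sketch below shows one plausible way to implement them. The paper does not specify a toolkit, so the use of spaCy, the `en_core_web_sm` model, and the `extract_keywords` helper are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of the Adverb, NER, and Adverb+NER keyword extraction
# variants named in the abstract. spaCy is an assumed backend; the paper does
# not state which NLP library was actually used.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline (assumed choice)

def extract_keywords(subtitle: str, method: str = "adverb+ner") -> list[str]:
    """Extract candidate keywords for viewport prediction from one subtitle."""
    doc = nlp(subtitle)
    keywords: list[str] = []
    if method in ("adverb", "adverb+ner"):
        # Adverbs often carry spatial or directional cues ("left", "above").
        keywords += [tok.text for tok in doc if tok.pos_ == "ADV"]
    if method in ("ner", "adverb+ner"):
        # Named entities often name the landmarks a tourism video points at.
        keywords += [ent.text for ent in doc.ents]
    return keywords

# Example: a directional adverb and a landmark entity are both extracted.
print(extract_keywords("Look up and you can see the famous Sagrada Familia."))
```

Under this reading, the Adverb+NER variant is simply the union of the other two keyword sets, which matches the abstract's framing of it as a combined method.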