Fast-Forward Methods for Egocentric Videos: A Review

M. Silva, W. Ramos, Alan C. Neves, Edson Roteia Araujo Junior, M. Campos, E. R. Nascimento
{"title":"自我中心视频的快进方法:综述","authors":"M. Silva, W. Ramos, Alan C. Neves, Edson Roteia Araujo Junior, M. Campos, E. R. Nascimento","doi":"10.1109/SIBGRAPI-T.2019.00009","DOIUrl":null,"url":null,"abstract":"The emergence of low-cost, high-quality personal wearable cameras combined with a large and increasing storage capacity of video-sharing websites have evoked a growing interest in first-person videos. A First-Person Video is usually composed of monotonous long-running unedited streams captured by a device attached to the user body, which makes it visually unpleasant and tedious to watch. Thus, there is a rise in the need to provide quick access to the information therein. In the last few years, a popular approach to retrieve the information from videos is to produce a short version of the input video by creating a video summary; however, this approach disrupts the temporal context of the recording. Fast-Forward is another approach that creates a shorter version of the video preserving the video context by increasing its playback speed. Although Fast-Forward methods keep the recording story, they do not consider the semantic load of the input video. The Semantic Fast-Forward approach creates a shorter version of First-Person Videos dealing with both video context and emphasis of the relevant portions to keep the semantic load of the input video. In this paper, we present a review of the representative methods in both fast-forward and semantic fast-forward methods and discuss the future directions of the area.","PeriodicalId":371584,"journal":{"name":"2019 32nd SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fast-Forward Methods for Egocentric Videos: A Review\",\"authors\":\"M. Silva, W. Ramos, Alan C. Neves, Edson Roteia Araujo Junior, M. Campos, E. R. Nascimento\",\"doi\":\"10.1109/SIBGRAPI-T.2019.00009\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The emergence of low-cost, high-quality personal wearable cameras combined with a large and increasing storage capacity of video-sharing websites have evoked a growing interest in first-person videos. A First-Person Video is usually composed of monotonous long-running unedited streams captured by a device attached to the user body, which makes it visually unpleasant and tedious to watch. Thus, there is a rise in the need to provide quick access to the information therein. In the last few years, a popular approach to retrieve the information from videos is to produce a short version of the input video by creating a video summary; however, this approach disrupts the temporal context of the recording. Fast-Forward is another approach that creates a shorter version of the video preserving the video context by increasing its playback speed. Although Fast-Forward methods keep the recording story, they do not consider the semantic load of the input video. The Semantic Fast-Forward approach creates a shorter version of First-Person Videos dealing with both video context and emphasis of the relevant portions to keep the semantic load of the input video. 
In this paper, we present a review of the representative methods in both fast-forward and semantic fast-forward methods and discuss the future directions of the area.\",\"PeriodicalId\":371584,\"journal\":{\"name\":\"2019 32nd SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T)\",\"volume\":\"90 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 32nd SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SIBGRAPI-T.2019.00009\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 32nd SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SIBGRAPI-T.2019.00009","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The emergence of low-cost, high-quality personal wearable cameras, combined with the large and increasing storage capacity of video-sharing websites, has evoked a growing interest in first-person videos. A first-person video is usually composed of monotonous, long-running, unedited streams captured by a device attached to the user's body, which makes it visually unpleasant and tedious to watch. Thus, there is a growing need to provide quick access to the information therein. In the last few years, a popular approach to retrieving information from videos has been to produce a short version of the input video by creating a video summary; however, this approach disrupts the temporal context of the recording. Fast-forward is another approach that creates a shorter version of the video while preserving its context by increasing the playback speed. Although fast-forward methods keep the story of the recording, they do not consider the semantic load of the input video. The semantic fast-forward approach creates a shorter version of a first-person video that deals with both the video context and the emphasis of relevant portions, preserving the semantic load of the input video. In this paper, we present a review of representative methods in both fast-forward and semantic fast-forward, and discuss the future directions of the area.
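To make the distinction drawn in the abstract concrete, the sketch below illustrates plain fast-forward as uniform frame sampling: keep one out of every N frames and write them out at the original frame rate, so the result plays N times faster while remaining a single continuous stream. This is only a minimal illustration of the idea, not the implementation of any method surveyed in the paper; the file names and the speed-up factor are illustrative assumptions.

```python
# Minimal sketch: naive fast-forward by uniform frame sampling with OpenCV.
# Not the algorithm of any surveyed paper; file names and speedup are assumptions.
import cv2


def uniform_fast_forward(input_path: str, output_path: str, speedup: int = 8) -> None:
    """Write a fast-forward version of input_path keeping 1 of every `speedup` frames."""
    cap = cv2.VideoCapture(input_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Keep the original frame rate, so the output plays `speedup` times faster.
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(output_path, fourcc, fps, (width, height))

    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % speedup == 0:  # uniform sampling: every speedup-th frame survives
            writer.write(frame)
        index += 1

    cap.release()
    writer.release()


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    uniform_fast_forward("hike_raw.mp4", "hike_8x.mp4", speedup=8)
```

Because the sampling is uniform, relevant and irrelevant moments are accelerated equally, which is exactly the limitation that semantic fast-forward methods address.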
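In the same spirit, the following sketch hints at how a semantic fast-forward approach might adapt the sampling density: given per-frame relevance scores (which the surveyed methods obtain, for example, from detectors or learned models), frames above a threshold are sampled more densely than the rest, so relevant moments are emphasized without cutting the recording into disjoint segments. The scores, threshold, and skip rates here are toy assumptions, not the algorithm of any specific paper in the survey.

```python
# Minimal sketch: adaptive frame selection for semantic fast-forward.
# Assumes per-frame relevance scores in [0, 1] are already available;
# the two skip rates and the threshold are illustrative assumptions.
from typing import List, Sequence


def semantic_frame_selection(
    scores: Sequence[float],
    relevant_skip: int = 2,       # dense sampling inside semantic segments
    non_relevant_skip: int = 12,  # aggressive sampling elsewhere
    threshold: float = 0.5,
) -> List[int]:
    """Return the indices of the frames kept in the fast-forward video.

    Frames whose score exceeds `threshold` are treated as semantically relevant
    and sampled with a lower skip rate, so the output slows down around them
    while remaining a single accelerated stream (no cuts, unlike a summary).
    """
    selected: List[int] = []
    i = 0
    while i < len(scores):
        selected.append(i)
        skip = relevant_skip if scores[i] > threshold else non_relevant_skip
        i += skip
    return selected


if __name__ == "__main__":
    # Toy scores: a relevant event around frames 40-60 of a 100-frame clip.
    toy_scores = [0.1] * 40 + [0.9] * 20 + [0.1] * 40
    kept = semantic_frame_selection(toy_scores)
    print(f"kept {len(kept)} of {len(toy_scores)} frames:", kept)
```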