Video Summarization using Text Subjectivity Classification

L. Moraes, R. Marcacini, R. Goularte
{"title":"Video Summarization using Text Subjectivity Classification","authors":"L. Moraes, R. Marcacini, R. Goularte","doi":"10.1145/3539637.3556998","DOIUrl":null,"url":null,"abstract":"Video summarization has attracted researchers’ attention because it provides a compact and informative video version, supporting users and systems to save efforts in searching and understanding content of interest. Current techniques employ different strategies to select which video segments should be included in the final summary. The challenge is to process multimodal data present in the video looking for relevance clues (like redundant or complementary information) that help make a decision. A recent strategy is to use subjectivity detection. The presence or the absence of subjectivity can be explored as a relevance clue, helping to bring video summaries closer to the final user’s expectations. However, despite this potential, there is a gap on how to capture subjectivity information from videos. This paper investigates video summarization through subjectivity classification from video transcripts. This approach requires dealing with recent challenges that are important in video summarization tasks, such as detecting subjectivity in different languages and across multiple domains. We propose a multilingual machine learning model trained to deal with subjectivity classification in multiple domains. An experimental evaluation with different benchmark datasets indicates that our multilingual and multi-domain method achieves competitive results, even compared to language-specific models. 
Furthermore, such a model can be used to provide subjectivity as a content selection criterion in the video summarization task, filtering out segments that are not relevant to a video domain of interest.","PeriodicalId":350776,"journal":{"name":"Proceedings of the Brazilian Symposium on Multimedia and the Web","volume":"52 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Brazilian Symposium on Multimedia and the Web","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3539637.3556998","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Video summarization has attracted researchers’ attention because it provides a compact and informative version of a video, helping users and systems save effort in searching for and understanding content of interest. Current techniques employ different strategies to select which video segments should be included in the final summary. The challenge is to process the multimodal data present in the video, looking for relevance clues (such as redundant or complementary information) that help make a decision. A recent strategy is to use subjectivity detection. The presence or absence of subjectivity can be exploited as a relevance clue, helping to bring video summaries closer to the end user’s expectations. However, despite this potential, there is a gap in how to capture subjectivity information from videos. This paper investigates video summarization through subjectivity classification of video transcripts. This approach requires dealing with recent challenges that are important in video summarization tasks, such as detecting subjectivity in different languages and across multiple domains. We propose a multilingual machine learning model trained to handle subjectivity classification in multiple domains. An experimental evaluation on different benchmark datasets indicates that our multilingual and multi-domain method achieves competitive results, even compared to language-specific models. Furthermore, such a model can be used to provide subjectivity as a content selection criterion in the video summarization task, filtering out segments that are not relevant to a video domain of interest.
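The content selection step described in the abstract — scoring transcript segments for subjectivity and filtering out those below a threshold — can be sketched as follows. This is a minimal illustration, not the paper's method: the trivial lexicon-based scorer stands in for the multilingual, multi-domain classifier the authors train, and the segment fields and threshold value are assumptions.

```python
# Sketch of subjectivity-based segment filtering for video summarization.
# A trivial cue-word scorer stands in for the paper's trained multilingual
# classifier; segment structure and threshold are illustrative choices.

SUBJECTIVE_CUES = {"amazing", "terrible", "i think", "beautiful", "love"}

def subjectivity_score(transcript: str) -> float:
    """Fraction of cue phrases found in the segment transcript (stand-in model)."""
    text = transcript.lower()
    hits = sum(1 for cue in SUBJECTIVE_CUES if cue in text)
    return hits / len(SUBJECTIVE_CUES)

def select_segments(segments, threshold=0.2):
    """Keep only segments whose transcript is judged subjective enough."""
    return [s for s in segments if subjectivity_score(s["transcript"]) >= threshold]

segments = [
    {"start": 0.0, "end": 5.0, "transcript": "The camera pans over the stadium."},
    {"start": 5.0, "end": 9.0, "transcript": "I think this goal was amazing!"},
]
summary = select_segments(segments)  # keeps only the subjective second segment
```

In practice the scorer would be replaced by the trained model's prediction per transcript segment; the surrounding selection logic is the part the abstract describes.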