Video Captioning: a comparative review of where we are and which could be the route

Daniela Moctezuma, Tania Ramírez-delReal, Guillermo Ruiz, Othón González-Chávez
{"title":"Video Captioning: a comparative review of where we are and which could be the route","authors":"Daniela Moctezuma, Tania Ram'irez-delReal, Guillermo Ruiz, Oth'on Gonz'alez-Ch'avez","doi":"10.48550/arXiv.2204.05976","DOIUrl":null,"url":null,"abstract":"Video captioning is the process of describing the content of a sequence of images capturing its semantic relationships and meanings. Dealing with this task with a single image is arduous, not to mention how difficult it is for a video (or images sequence). The amount and relevance of the applications of video captioning are vast, mainly to deal with a significant amount of video recordings in video surveillance, or assisting people visually impaired, to mention a few. To analyze where the efforts of our community to solve the video captioning task are, as well as what route could be better to follow, this manuscript presents an extensive review of more than 105 papers for the period of 2016 to 2021. As a result, the most-used datasets and metrics are identified. Also, the main approaches used and the best ones. We compute a set of rankings based on several performance metrics to obtain, according to its performance, the best method with the best result on the video captioning task. Finally, some insights are concluded about which could be the next steps or opportunity areas to improve dealing with this complex task.","PeriodicalId":10549,"journal":{"name":"Comput. Vis. Image Underst.","volume":"97 1","pages":"103671"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Comput. Vis. Image Underst.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2204.05976","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Video captioning is the task of describing the content of a sequence of images, capturing its semantic relationships and meaning. Producing such a description for a single image is already arduous, and it is considerably harder for a video (an image sequence). The applications of video captioning are numerous and relevant, from handling the large volume of recordings produced by video surveillance to assisting visually impaired people, to mention a few. To analyze where the community's efforts to solve the video captioning task stand, and which route could be better to follow, this manuscript presents an extensive review of more than 105 papers published between 2016 and 2021. As a result, the most-used datasets and metrics are identified, along with the main approaches and the best-performing ones. We compute a set of rankings based on several performance metrics to determine, according to their performance, which method obtains the best result on the video captioning task. Finally, some insights are drawn about the next steps and opportunity areas for improving on this complex task.
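The abstract does not detail how the per-metric rankings are combined, so the following is only a minimal illustrative sketch, assuming an average-rank (Borda-style) aggregation over standard captioning metrics such as BLEU-4, METEOR, ROUGE-L, and CIDEr. The method names and scores are hypothetical placeholders, not results from the survey.

```python
from statistics import mean

# Hypothetical per-method scores on one benchmark (higher is better).
# These values are illustrative only and do not come from the reviewed papers.
scores = {
    "Method A": {"BLEU-4": 0.42, "METEOR": 0.31, "ROUGE-L": 0.61, "CIDEr": 0.51},
    "Method B": {"BLEU-4": 0.45, "METEOR": 0.29, "ROUGE-L": 0.62, "CIDEr": 0.55},
    "Method C": {"BLEU-4": 0.40, "METEOR": 0.30, "ROUGE-L": 0.60, "CIDEr": 0.49},
}

metrics = ["BLEU-4", "METEOR", "ROUGE-L", "CIDEr"]

# Rank the methods per metric (rank 1 = best score), then average the ranks.
ranks = {name: [] for name in scores}
for metric in metrics:
    ordered = sorted(scores, key=lambda m: scores[m][metric], reverse=True)
    for position, name in enumerate(ordered, start=1):
        ranks[name].append(position)

# Lower mean rank = better overall standing across metrics.
for name in sorted(scores, key=lambda n: mean(ranks[n])):
    print(f"{name}: mean rank {mean(ranks[name]):.2f}")
```

Averaging ranks rather than raw scores avoids mixing metrics that live on different scales; the paper may well use a different aggregation scheme.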