Memory-enhanced hierarchical transformer for video paragraph captioning

Neurocomputing · Impact Factor 5.5 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Tier 2 · Published 2024-11-09 · DOI: 10.1016/j.neucom.2024.128835
Benhui Zhang, Junyu Gao, Yuan Yuan
Neurocomputing, Volume 615, Article 128835. Citations: 0.

Abstract

Video paragraph captioning aims to describe a video that contains multiple events with a paragraph of coherent generated sentences. Such a captioning task is challenging because of the high requirements for visual–textual relevance and semantic coherence across the generated paragraph. In this work, we introduce a memory-enhanced hierarchical transformer for video paragraph captioning. Our model adopts a hierarchical structure, where the outer-layer transformer extracts visual information from a global perspective and captures the relevancy between event segments throughout the entire video, while the inner-layer transformer further mines local details within each event segment. By thoroughly exploring both global and local visual information at the video and event levels, our model can provide comprehensive visual feature cues for promising paragraph caption generation. Additionally, we design a memory module to capture similar patterns among event segments within a video; it preserves contextual information across event segments and updates its memory state accordingly. Experimental results on two popular datasets, ActivityNet Captions and YouCook2, demonstrate that our proposed model achieves superior performance, generating higher-quality captions while maintaining consistency with the video content.
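The hierarchical design described in the abstract can be illustrated with a minimal, dependency-light sketch: an inner pass attends over frames within each event segment, a running memory state is carried across segments, and an outer pass relates the segment-level summaries across the whole video. This is not the authors' implementation; the single-head attention with identity projections, the mean-pooled segment summary, and the exponential-moving-average memory update are all illustrative stand-ins for the paper's inner/outer transformers and memory module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # Minimal single-head self-attention with identity Q/K/V projections:
    # each row of X attends to all rows of X.
    scores = X @ X.T / np.sqrt(X.shape[-1])
    return softmax(scores, axis=-1) @ X

def encode_video(segments, alpha=0.5):
    """Hierarchical encoding sketch (illustrative, not the paper's model):
    inner attention within each event segment, a memory state blended
    across segments, then outer attention over segment-level summaries."""
    memory = np.zeros(segments[0].shape[-1])
    summaries = []
    for frames in segments:                            # frames: (n_frames, d)
        local = self_attention(frames).mean(axis=0)    # inner pass: local summary
        memory = alpha * memory + (1 - alpha) * local  # assumed memory update rule
        summaries.append(memory.copy())
    S = np.stack(summaries)                            # (n_segments, d)
    return self_attention(S)                           # outer pass: global features
```

A paragraph decoder would then condition each sentence on the corresponding row of the returned segment-aware features; that decoding stage is omitted here.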
Source journal: Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10 · Self-citation rate: 10.00% · Articles per year: 1382 · Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.