From visual features to key concepts: A Dynamic and Static Concept-driven approach for video captioning

Impact Factor 3.3 · JCR Q2 (Computer Science, Artificial Intelligence) · CAS Region 3 (Computer Science) · Pattern Recognition Letters · Publication date: 2025-07-01 (Epub 2025-04-19) · DOI: 10.1016/j.patrec.2025.04.007
Xin Ren, Yufeng Han, Bing Wei, Xue-song Tang, Kuangrong Hao
Pattern Recognition Letters, Volume 193, Pages 64–70. Full text: https://www.sciencedirect.com/science/article/pii/S0167865525001394
Citations: 0

Abstract

In video captioning, accurately identifying and summarizing key concepts while ignoring irrelevant details remains a significant challenge. Mainstream approaches often suffer from the inclusion of semantically irrelevant features, leading to inaccuracies and hallucinations in the generated captions. This study aims to develop a novel framework, Dynamic and Static Concept-driven video captioning model(DiSCo), to enhance the accuracy and coherence of video captions by effectively leveraging pre-trained models and addressing the issue of semantic irrelevance. DiSCo builds upon the conventional encoder–decoder architecture by incorporating a Semantic Feature Extractor (SFE) and a Static-Dynamic Concept Detector (S-DCD). The SFE filters out semantically irrelevant features extracted by the visual model, while the S-DCD identifies critical concepts to guide the large language model (LLM) in generating captions. Both the visual model and the LLM are pre-trained and their parameters are frozen; only the SFE and S-DCD are trained to optimize the feature extraction and concept detection processes. Comprehensive experiments conducted on the MSVD and MSR-VTT datasets show that DiSCo significantly outperforms existing methods, achieving notable improvements in the quality and relevance of the generated captions. The proposed DiSCo framework demonstrates a robust solution for enhancing the accuracy and coherence of video captions by effectively integrating semantic feature extraction and concept-driven guidance.
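The training setup described above (a frozen visual encoder and a frozen LLM, with only the SFE and S-DCD trained) can be sketched roughly as follows. This is a minimal illustrative mock, not the authors' implementation: the class names, the relevance-threshold filtering in the SFE, and the top-k concept selection in the S-DCD are all assumptions made for illustration.

```python
# Illustrative sketch of a DiSCo-style pipeline as described in the abstract.
# All names and mechanisms below are assumptions for illustration only,
# not the authors' actual implementation.
from dataclasses import dataclass

@dataclass
class Feature:
    vector: list          # visual feature from a frozen visual model
    relevance: float      # semantic relevance score (assumed scoring scheme)

class SemanticFeatureExtractor:
    """Trainable module: drops features whose relevance falls below a threshold."""
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def __call__(self, features):
        return [f for f in features if f.relevance >= self.threshold]

class StaticDynamicConceptDetector:
    """Trainable module: selects the top-k concepts to guide the frozen LLM."""
    def __init__(self, k: int = 2):
        self.k = k

    def __call__(self, scored_concepts):
        ranked = sorted(scored_concepts, key=lambda c: c[1], reverse=True)
        return [name for name, _ in ranked[:self.k]]

def generate_caption(concepts):
    """Stand-in for the frozen LLM, conditioned on the detected concepts."""
    return "A video about " + " and ".join(concepts) + "."

# Toy forward pass: irrelevant features are filtered, key concepts guide the caption.
features = [Feature([0.1, 0.9], relevance=0.8),   # semantically relevant
            Feature([0.4, 0.2], relevance=0.1)]   # irrelevant, filtered out
sfe = SemanticFeatureExtractor(threshold=0.5)
kept = sfe(features)

sdcd = StaticDynamicConceptDetector(k=2)
concepts = sdcd([("dog", 0.9), ("running", 0.7), ("sky", 0.2)])
caption = generate_caption(concepts)
```

The key design point from the abstract is reflected here: the visual model and LLM stand-ins carry no trainable state, while the SFE and S-DCD are the only components whose parameters (threshold, k, and in a real system the underlying networks) would be optimized.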
Source Journal

Pattern Recognition Letters (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 12.40
Self-citation rate: 5.90%
Articles published per year: 287
Review time: 9.1 months
Journal description: Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition. Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.