Image Captioning with multi-level similarity-guided semantic matching

Visual Informatics · IF 3.8 · JCR Q2 (Computer Science, Information Systems) · CAS Tier 3 (Computer Science) · Pub Date: 2021-12-01 · DOI: 10.1016/j.visinf.2021.11.003
Jiesi Li , Ning Xu , Weizhi Nie , Shenyuan Zhang
Visual Informatics, Volume 5, Issue 4, Pages 41–48 · Journal Article
PDF: https://www.sciencedirect.com/science/article/pii/S2468502X21000590/pdfft?md5=f944bc3d86f6d64595ece2bbaa4a94c8&pid=1-s2.0-S2468502X21000590-main.pdf
Citations: 6

Abstract

Image captioning is a cross-modal task that requires automatically generating coherent natural-language sentences to describe image content. Due to the large gap between the vision and language modalities, most existing methods suffer from inaccurate semantic matching between images and the generated captions. To address this problem, this paper proposes a novel multi-level similarity-guided semantic matching method for image captioning, which fuses local and global semantic similarities to learn the latent semantic correlation between images and generated captions. Specifically, we extract semantic units containing fine-grained semantic information from the images and the generated captions, respectively. Based on a comparison of these semantic units, we design a local semantic similarity evaluation mechanism. Meanwhile, we employ the CIDEr score to characterize global semantic similarity. The local and global similarities are then fused under a reinforcement learning framework to guide model optimization toward better semantic matching. Quantitative and qualitative experiments on the large-scale MSCOCO dataset demonstrate the superiority of the proposed method, which achieves fine-grained semantic matching between images and generated captions.
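The fusion the abstract describes can be sketched as a reward function for self-critical sequence training: a unit-level local similarity and the global CIDEr score are combined into a single reward whose advantage over a greedy baseline weights the policy-gradient loss. The sketch below is a minimal illustration, not the authors' implementation; the mixing weight `lam`, the cosine-based unit matching, and all function names are assumptions introduced here.

```python
import numpy as np

def local_semantic_similarity(caption_units, image_units):
    """Toy local similarity: for each semantic-unit embedding of the generated
    caption, take its best cosine match among the image's semantic units,
    then average over all caption units. (Assumed form of the comparison.)"""
    sims = []
    for c in caption_units:
        best = max(
            float(np.dot(c, v) / (np.linalg.norm(c) * np.linalg.norm(v)))
            for v in image_units
        )
        sims.append(best)
    return sum(sims) / len(sims)

def fused_reward(local_sim, cider_score, lam=0.5):
    """Fuse the local (unit-level) and global (CIDEr) similarities into one
    scalar reward; lam is a hypothetical mixing weight."""
    return lam * local_sim + (1.0 - lam) * cider_score

def scst_loss(sample_log_probs, sample_reward, baseline_reward):
    """Self-critical policy-gradient loss: the advantage of the sampled
    caption's reward over the greedy-decoded baseline scales the summed
    log-probabilities of the sampled tokens."""
    advantage = sample_reward - baseline_reward
    return -advantage * sum(sample_log_probs)
```

A training step would decode one sampled and one greedy caption per image, score both with `fused_reward`, and backpropagate `scst_loss`; when the sampled caption out-scores the baseline, the loss pushes its token log-probabilities up.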

Source journal
Visual Informatics (Computer Science: Computer Graphics and Computer-Aided Design)
CiteScore: 6.70
Self-citation rate: 3.30%
Articles per year: 33
Review time: 79 days