Novel cross-dimensional coarse-fine-grained complementary network for image-text matching.

IF 2.5 · JCR Q2 (Computer Science, Artificial Intelligence) · CAS Tier 4 (Computer Science) · PeerJ Computer Science · Pub Date: 2025-03-03 · eCollection Date: 2025-01-01 · DOI: 10.7717/peerj-cs.2725
Meizhen Liu, Anis Salwa Mohd Khairuddin, Khairunnisa Hasikin, Weitong Liu
PeerJ Computer Science, vol. 11, e2725 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11888920/pdf/ · Cited by: 0

Abstract

Image-text matching is fundamental to multimodal applications, yet the cross-modal heterogeneity gap between images and texts remains challenging and complex. Researchers have proposed numerous approaches aimed at narrowing the semantic gap between the visual and textual modalities. However, existing methods are usually limited to computing the similarity between images (or image regions) and texts (or words), ignoring the semantic consistency between fine-grained region-word matching and coarse-grained overall image-text matching. Additionally, these methods often overlook semantic differences across feature dimensions. Such limitations can lead to an overemphasis on specific details at the expense of holistic understanding during image-text matching. To tackle these challenges, this article proposes a new Cross-Dimensional Coarse-Fine-Grained Complementary Network (CDGCN). First, CDGCN performs fine-grained semantic alignment of image regions and sentence words based on cross-dimensional dependencies. Next, a Coarse-Grained Cross-Dimensional Semantic Aggregation (CGDSA) module is developed to complement local alignment with global image-text matching, ensuring semantic consistency. This module aggregates local features both across different dimensions and within the same dimension to form coherent global features, thus preserving the semantic integrity of the information. CDGCN is evaluated against state-of-the-art methods on two multimodal datasets, Flickr30K and MS-COCO, achieving substantial performance improvements of 7.7-16% on both.
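The complementary coarse-fine idea described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: cosine similarity gives a fine-grained region-word score, simple mean pooling stands in for the paper's cross-dimensional aggregation, and a weighted sum combines the two signals. The function names and the mixing weight `alpha` are hypothetical.

```python
import numpy as np

def l2norm(x, axis=-1):
    """L2-normalize feature vectors along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def fine_grained_score(regions, words):
    # regions: (R, D) image-region features; words: (W, D) word features.
    # Cosine similarity of every region-word pair, then the best-matching
    # region per word, averaged over words (a common local-alignment score).
    sim = l2norm(regions) @ l2norm(words).T   # (R, W)
    return float(sim.max(axis=0).mean())

def coarse_grained_score(regions, words):
    # Aggregate local features into one global vector per modality
    # (mean pooling here; the paper's CGDSA module is more elaborate),
    # then compare the two global vectors.
    g_img = l2norm(regions.mean(axis=0))
    g_txt = l2norm(words.mean(axis=0))
    return float(g_img @ g_txt)

def match_score(regions, words, alpha=0.5):
    # Complementary combination of fine- and coarse-grained similarity.
    return alpha * fine_grained_score(regions, words) + \
           (1 - alpha) * coarse_grained_score(regions, words)

# Toy example: 36 detected regions and a 12-word sentence, 256-d features.
rng = np.random.default_rng(0)
regions = rng.normal(size=(36, 256))
words = rng.normal(size=(12, 256))
print(match_score(regions, words))
```

A matching image-text pair (whose word features align with some region features) scores higher than an unrelated pair, since both the local best-match term and the global pooled term increase together.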

Source journal: PeerJ Computer Science (General Computer Science)
CiteScore: 6.10 · Self-citation rate: 5.30% · Articles per year: 332 · Review time: 10 weeks
About the journal: PeerJ Computer Science is an open access journal covering all subject areas in computer science, with the backing of a prestigious advisory board and more than 300 academic editors.