A method for image–text matching based on semantic filtering and adaptive adjustment

Ran Jin, Tengda Hou, Tao Jin, Jie Yuan, Chenjie Du

EURASIP Journal on Image and Video Processing, published 2024-08-29. DOI: 10.1186/s13640-024-00639-y
Abstract
Image–text matching, a critical task in computer vision, links cross-modal data and has therefore attracted extensive attention. Most existing image–text matching methods align images with texts by exploring local similarities between image regions and sentence words. Although this fine-grained approach yields remarkable gains, how to further mine the deep semantics between data pairs and focus on the essential semantics in the data remains an open question. In this work, a new semantic filtering and adaptive adjustment approach (FAAR) is proposed to alleviate this problem. Specifically, the filtered attention (FA) module selectively focuses on typical alignments and eliminates the interference of meaningless comparisons. The adaptive regulator (AR) then further adjusts the attention weights of key segments among the filtered regions and words. The superiority of the proposed method is validated by a number of qualitative experiments and analyses on the Flickr30K and MSCOCO data sets.
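The abstract describes two attention-style components: a filtered attention (FA) module that discards weak region–word alignments, and an adaptive regulator (AR) that re-weights the surviving ones. The PyTorch sketch below illustrates one plausible reading of that pipeline; the function name, the threshold and temperature hyperparameters, and the sigmoid gate are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of filtered cross-attention with adaptive re-weighting.
# All names and hyperparameters here are hypothetical, chosen only to
# illustrate the idea of filtering out weak alignments before attending.
import torch
import torch.nn.functional as F

def filtered_attention(regions, words, threshold=0.0, temperature=9.0):
    """Attend each word over image regions, suppressing weak alignments.

    regions: (n_regions, d) region features
    words:   (n_words, d)   word features
    Returns: (n_words, d)   attended region context per word.
    """
    # Cosine similarity between every word and every image region.
    sim = F.normalize(words, dim=-1) @ F.normalize(regions, dim=-1).t()  # (n_words, n_regions)

    # Filtering step: push alignments below the threshold to a large negative
    # value so they receive (near-)zero weight after the softmax.
    sim = sim.masked_fill(sim < threshold, -1e4)

    # Softmax over regions; the temperature sharpens the surviving alignments.
    attn = F.softmax(sim * temperature, dim=-1)

    # Adaptive re-weighting (a rough stand-in for the adaptive regulator):
    # scale each word's attended context by the confidence of its best alignment.
    gate = torch.sigmoid(attn.max(dim=-1, keepdim=True).values)

    return gate * (attn @ regions)

# Usage example with random features: 36 regions and 12 words of dimension 1024.
regions = torch.randn(36, 1024)
words = torch.randn(12, 1024)
context = filtered_attention(regions, words)
print(context.shape)  # torch.Size([12, 1024])
```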
About this journal
EURASIP Journal on Image and Video Processing is intended for researchers from both academia and industry who are active in the multidisciplinary field of image and video processing. The scope of the journal covers all theoretical and practical aspects of the domain, from basic research to the development of applications.