Adaptive Fusion Learning for Compositional Zero-Shot Recognition

IF 9.7 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | IEEE Transactions on Multimedia | Pub Date: 2024-12-25 | DOI: 10.1109/TMM.2024.3521852
Lingtong Min;Ziman Fan;Shunzhou Wang;Feiyang Dou;Xin Li;Binglu Wang
{"title":"Adaptive Fusion Learning for Compositional Zero-Shot Recognition","authors":"Lingtong Min;Ziman Fan;Shunzhou Wang;Feiyang Dou;Xin Li;Binglu Wang","doi":"10.1109/TMM.2024.3521852","DOIUrl":null,"url":null,"abstract":"Compositional Zero-Shot Learning (CZSL) aims to learn visual concepts (i.e., attributes and objects) from seen compositions and combine them to predict unseen compositions. Existing visual encoders in CZSL typically use traditional visual encoders (i.e., CNN and Transformer) or image encoders from Visual-Language Models (VLMs) to encode image features. However, traditional visual encoders need more multi-modal textual information, and image encoders of VLMs exhibit dependence on pre-training data, making them less effective when used independently for predicting unseen compositions. To overcome this limitation, we propose a novel approach based on the joint modeling of traditional visual encoders and VLMs visual encoders to enhance the prediction ability for uncommon and unseen compositions. Specifically, we design an adaptive fusion module that automatically adjusts the weighted parameters of similarity scores between traditional and VLMs methods during training, and these weighted parameters are inherited during the inference process. Given the significance of disentangling attributes and objects, we design a Multi-Attribute Object Module that, during the training phase, incorporates multiple pairs of attributes and objects as prior knowledge, leveraging this rich prior knowledge to facilitate the disentanglement of attributes and objects. Building upon this, we select the text encoder from VLMs to construct the Adaptive Fusion Network. We conduct extensive experiments on the Clothing16 K, UT-Zappos50 K, and C-GQA datasets, achieving excellent performance on the Clothing16 K and UT-Zappos50 K datasets.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1193-1204"},"PeriodicalIF":9.7000,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Multimedia","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10814709/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Compositional Zero-Shot Learning (CZSL) aims to learn visual concepts (i.e., attributes and objects) from seen compositions and combine them to predict unseen compositions. Existing CZSL methods typically encode image features with traditional visual encoders (i.e., CNNs and Transformers) or with the image encoders of Vision-Language Models (VLMs). However, traditional visual encoders lack multi-modal textual information, and the image encoders of VLMs depend on their pre-training data, making either one less effective when used alone to predict unseen compositions. To overcome this limitation, we propose a novel approach that jointly models traditional visual encoders and VLM visual encoders to enhance the prediction of uncommon and unseen compositions. Specifically, we design an adaptive fusion module that automatically adjusts the weights applied to the similarity scores of the traditional and VLM branches during training; these learned weights are then inherited at inference time. Given the importance of disentangling attributes from objects, we design a Multi-Attribute Object Module that, during training, incorporates multiple attribute-object pairs as prior knowledge and leverages this rich prior knowledge to facilitate the disentanglement of attributes and objects. Building on this, we select the text encoder of a VLM to construct the Adaptive Fusion Network. We conduct extensive experiments on the Clothing16K, UT-Zappos50K, and C-GQA datasets, achieving excellent performance on Clothing16K and UT-Zappos50K.
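The abstract does not include code, but the adaptive fusion idea it describes can be illustrated with a minimal PyTorch sketch: a learnable weight combines the similarity scores produced by the traditional-encoder branch and the VLM branch during training, and the learned weight is reused unchanged at inference. The module name, the single scalar weight, and the sigmoid parameterization below are our assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of the adaptive fusion module described in the abstract.
# All names are illustrative; the paper's formulation may differ.
import torch
import torch.nn as nn


class AdaptiveFusion(nn.Module):
    """Learn a convex combination of two similarity-score streams."""

    def __init__(self):
        super().__init__()
        # Unconstrained parameter, mapped to (0, 1) by a sigmoid so the
        # two branches always receive complementary weights.
        self.alpha = nn.Parameter(torch.zeros(1))  # sigmoid(0) = 0.5 at init

    def forward(self, sim_traditional: torch.Tensor,
                sim_vlm: torch.Tensor) -> torch.Tensor:
        # sim_*: (batch, num_compositions) similarity logits from each branch.
        w = torch.sigmoid(self.alpha)
        return w * sim_traditional + (1.0 - w) * sim_vlm


# The fusion weight is trained jointly with both branches and then
# "inherited" (kept fixed) at inference time.
fusion = AdaptiveFusion()
scores = fusion(torch.randn(4, 100), torch.randn(4, 100))
pred = scores.argmax(dim=-1)  # predicted attribute-object composition index
```

The sigmoid keeps the two weights complementary and bounded; a real implementation might instead learn per-composition or per-sample weights, which the abstract leaves unspecified.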
Source Journal

IEEE Transactions on Multimedia (Engineering & Technology - Telecommunications)
CiteScore: 11.70
Self-citation rate: 11.00%
Articles published: 576
Review time: 5.5 months
About the journal: The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.
Latest Articles in This Journal

- Screen Detection from Egocentric Image Streams Leveraging Multi-View Vision Language Model
- TMT: Tri-Modal Translation Between Speech, Image, and Text by Processing Different Modalities as Different Languages
- HMS2Net: Heterogeneous Multimodal State Space Network via CLIP for Dynamic Scene Classification in Livestreaming
- Soundscape Captioning Using Sound Affective Quality Network and Large Language Model
- Denoised Semantic Features for Local Consistent No-Reference Image Quality Assessment