Adaptive Fusion Learning for Compositional Zero-Shot Recognition
Lingtong Min; Ziman Fan; Shunzhou Wang; Feiyang Dou; Xin Li; Binglu Wang
IEEE Transactions on Multimedia, vol. 27, pp. 1193-1204
DOI: 10.1109/TMM.2024.3521852
Published: 2024-12-25
Abstract
Compositional Zero-Shot Learning (CZSL) aims to learn visual concepts (i.e., attributes and objects) from seen compositions and combine them to predict unseen compositions. Existing CZSL methods typically encode image features with either traditional visual encoders (i.e., CNNs and Transformers) or the image encoders of Vision-Language Models (VLMs). However, traditional visual encoders lack multi-modal textual information, and the image encoders of VLMs depend heavily on their pre-training data, so neither is effective on its own for predicting unseen compositions. To overcome this limitation, we propose a novel approach that jointly models traditional visual encoders and VLM visual encoders to enhance prediction of uncommon and unseen compositions. Specifically, we design an adaptive fusion module that automatically adjusts the weighting parameters applied to the similarity scores of the traditional and VLM branches during training; these weighting parameters are then inherited during inference. Given the importance of disentangling attributes from objects, we design a Multi-Attribute Object Module that, during training, incorporates multiple attribute-object pairs as prior knowledge and leverages this rich prior knowledge to facilitate the disentanglement of attributes and objects. Building upon this, we adopt the text encoder of a VLM to construct the Adaptive Fusion Network. Extensive experiments on the Clothing16K, UT-Zappos50K, and C-GQA datasets show that our method achieves excellent performance on Clothing16K and UT-Zappos50K.
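The abstract does not include reference code, so the following PyTorch sketch is only a rough illustration of what an adaptive fusion of similarity scores from two branches might look like: a single learnable weight blends the two score tensors during training and is frozen (inherited) at inference. The class name `AdaptiveFusion`, the sigmoid parameterization, and the tensor names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Hypothetical sketch: fuse similarity scores from a traditional
    visual branch and a VLM branch with one learnable weight."""

    def __init__(self):
        super().__init__()
        # Learnable fusion logit, trained jointly with both branches and
        # reused unchanged at inference time (assumed parameterization).
        self.fusion_logit = nn.Parameter(torch.zeros(1))

    def forward(self, sim_traditional: torch.Tensor,
                sim_vlm: torch.Tensor) -> torch.Tensor:
        # Each input: (batch, num_compositions) similarity scores over
        # candidate attribute-object compositions from one branch.
        w = torch.sigmoid(self.fusion_logit)  # keep the weight in (0, 1)
        return w * sim_traditional + (1.0 - w) * sim_vlm

# Usage: fused scores are ranked (e.g., argmax over compositions).
fusion = AdaptiveFusion()
scores = fusion(torch.randn(4, 100), torch.randn(4, 100))
pred = scores.argmax(dim=-1)
```

At inference, the trained `fusion_logit` is simply reused, which matches the abstract's statement that the weighting parameters learned during training are inherited during the inference process.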
About the Journal
The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.