Stimulating conversation-style emergencies of multi-modal LMs
Shun Qian, Bingquan Liu, Chengjie Sun, Zhen Xu, Baoxun Wang
Information Fusion, Volume 120, Article 103047, published 2025-03-02
DOI: 10.1016/j.inffus.2025.103047
URL: https://www.sciencedirect.com/science/article/pii/S1566253525001204
Citations: 0
Abstract
Multi-modal Language Models (LMs) perform very well on alignment-style tasks such as Image–Text Retrieval and Image Captioning, benefiting mainly from pre-training on large numbers of image–text pairs. However, our evaluations indicate that these models underperform on conversation-style multi-modal tasks, such as Image-Chat and Visual Dialog, which constitute a crucial segment of multi-modal applications. To bridge this gap, this paper proposes a novel pre-training task, named MBCG, to stimulate the abilities of existing multi-modal LMs on conversation-style multi-modal tasks without hurting their intrinsic abilities. For this purpose, we collect two image–text–comment triplet multi-modal datasets, in English and Chinese, to apply the new pre-training task to existing models. The experimental results reveal that the MBCG task can significantly boost the performance of these models on conversation-style tasks, without any noticeable performance decline on their original evaluation tasks.
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.