Stimulating conversation-style emergencies of multi-modal LMs

IF 15.5 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Information Fusion · Pub Date: 2025-08-01 (Epub: 2025-03-02) · DOI: 10.1016/j.inffus.2025.103047
Shun Qian, Bingquan Liu, Chengjie Sun, Zhen Xu, Baoxun Wang
Volume 120, Article 103047 · Citations: 0
Full text: https://www.sciencedirect.com/science/article/pii/S1566253525001204

Abstract

Multi-modal Language Models (LMs) perform very well on alignment-style tasks such as Image–Text Retrieval and Image Captioning, benefiting mainly from pre-training on large numbers of image–text pairs. However, our evaluations indicate that these models underperform on conversation-style multi-modal tasks, such as Image-Chat and Visual Dialog, which constitute a crucial segment of multi-modal applications. To bridge this gap, this paper proposes a novel pre-training task, named MBCG, to stimulate the abilities of existing multi-modal LMs on conversation-style multi-modal tasks without hurting their intrinsic abilities. For this purpose, we collect two image–text–comments triplet multi-modal datasets, in English and Chinese, and apply the new pre-training task to existing models. The experimental results reveal that the MBCG task significantly boosts the performance of these models on conversation-style tasks, without any noticeable performance decline on their original evaluation tasks.
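The abstract does not specify the schema of the image–text–comments triplets or how MBCG consumes them, so the following is only a minimal sketch of what such a pre-training example might look like; the class name `TripletExample`, its field names, and the `to_training_pairs` helper are all assumptions for illustration, not the paper's actual format.

```python
from dataclasses import dataclass

@dataclass
class TripletExample:
    """Hypothetical shape of one image–text–comments triplet (assumed schema)."""
    image_path: str      # path or URL of the image
    text: str            # the text originally paired with the image
    comments: list[str]  # conversation-style comments responding to the pair
    language: str        # "en" or "zh" -- the paper collects both languages

def to_training_pairs(ex: TripletExample) -> list[tuple[str, str]]:
    """Flatten one triplet into (context, target-comment) pairs, one per
    comment, so a conversational objective can condition comment generation
    on the image+text context."""
    context = f"<image:{ex.image_path}> {ex.text}"
    return [(context, c) for c in ex.comments]

ex = TripletExample(
    image_path="img/0001.jpg",
    text="A cat sleeping on a keyboard.",
    comments=["So relaxing!", "Classic work-from-home coworker."],
    language="en",
)
pairs = to_training_pairs(ex)
print(len(pairs))  # 2
```

One pair per comment keeps the conversational targets separate from the alignment-style image–text pair, which matches the abstract's claim that the new task can be added without disturbing the models' original alignment abilities.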
Source journal: Information Fusion (Engineering & Technology – Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.
Latest articles in this journal:
- GTEE: A global timestamp encoding enhanced method for robust time series imputation in complex missing scenarios
- Resilient distributed Kalman filtering for cyber-physical systems via mean subsequence reduction
- Learning across modalities: a systematic survey of multimodal models for financial analysis
- MuBe4D: A mutual benefit framework for generalizable motion segmentation and geometry-first 4D reconstruction
- Decoding multilingual imagined speech from scalp EEG via dynamic differentiable graph hierarchical fusion network