Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning

Xingchen Zeng, Haichuan Lin, Yilin Ye, Wei Zeng
{"title":"Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning","authors":"Xingchen Zeng;Haichuan Lin;Yilin Ye;Wei Zeng","doi":"10.1109/TVCG.2024.3456159","DOIUrl":null,"url":null,"abstract":"Emerging multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., charts, data tables, and question-answer (QA) pairs) through data collection and synthesis. However, our empirical study on existing MLLMs and CQA datasets reveals notable gaps. First, current data collection and synthesis focus on data volume and lack consideration of fine-grained visual encodings and QA tasks, resulting in unbalanced data distribution divergent from practical CQA scenarios. Second, existing work follows the training recipe of the base MLLMs initially designed for natural images, under-exploring the adaptation to unique chart characteristics, such as rich text elements. To fill the gap, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. Specifically, we propose a novel data engine to effectively filter diverse and high-quality data from existing datasets and subsequently refine and augment the data using LLM-based generation techniques to better align with practical QA tasks and visual encodings. Then, to facilitate the adaptation to chart characteristics, we utilize the enriched data to train an MLLM by unfreezing the vision encoder and incorporating a mixture-of-resolution adaptation strategy for enhanced fine-grained recognition. Experimental results validate the effectiveness of our approach. Even with fewer training examples, our model consistently outperforms state-of-the-art CQA models on established benchmarks. We also contribute a dataset split as a benchmark for future research. Source codes and datasets of this paper are available at https://github.com/zengxingchen/ChartQA-MLLM.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"525-535"},"PeriodicalIF":6.5000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10670526/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Emerging multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., charts, data tables, and question-answer (QA) pairs) through data collection and synthesis. However, our empirical study of existing MLLMs and CQA datasets reveals two notable gaps. First, current data collection and synthesis focus on data volume and overlook fine-grained visual encodings and QA tasks, resulting in an unbalanced data distribution that diverges from practical CQA scenarios. Second, existing work follows the training recipe of base MLLMs originally designed for natural images, leaving the adaptation to unique chart characteristics, such as rich text elements, under-explored. To fill these gaps, we propose a visualization-referenced instruction tuning approach to guide training dataset enhancement and model development. Specifically, we propose a novel data engine that filters diverse, high-quality data from existing datasets and then refines and augments the data with LLM-based generation techniques to better align with practical QA tasks and visual encodings. Then, to facilitate adaptation to chart characteristics, we use the enriched data to train an MLLM, unfreezing the vision encoder and incorporating a mixture-of-resolution adaptation strategy for enhanced fine-grained recognition. Experimental results validate the effectiveness of our approach. Even with fewer training examples, our model consistently outperforms state-of-the-art CQA models on established benchmarks. We also contribute a dataset split as a benchmark for future research. The source code and datasets are available at https://github.com/zengxingchen/ChartQA-MLLM.
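The abstract names two model-side choices, unfreezing the vision encoder and a mixture-of-resolution adaptation strategy, without giving implementation detail. As a minimal sketch, assuming a standard PyTorch MLLM that exposes a `vision_encoder` submodule and produces token-sequence features of shape (batch, tokens, dim), the code below illustrates one plausible reading of both ideas. All names here (MixtureOfResolutionAdapter, unfreeze_vision_encoder, vision_encoder) are hypothetical, not the authors' actual implementation; see the repository linked above for the real code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixtureOfResolutionAdapter(nn.Module):
    """Hypothetical sketch of a mixture-of-resolution adaptation: fuse vision
    features from a low-resolution and a high-resolution view of the same
    chart so fine-grained elements (tick labels, small marks) are preserved."""

    def __init__(self, dim: int):
        super().__init__()
        # Project the concatenated low-/high-resolution streams back to `dim`.
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # low:  (batch, n_low_tokens,  dim) from the low-resolution view
        # high: (batch, n_high_tokens, dim) from the high-resolution view
        # Pool the longer high-resolution token grid down to n_low_tokens so
        # the two streams align token-by-token before fusion.
        high = F.adaptive_avg_pool1d(high.transpose(1, 2), low.shape[1])
        high = high.transpose(1, 2)
        return self.proj(torch.cat([low, high], dim=-1))


def unfreeze_vision_encoder(model: nn.Module) -> None:
    """Enable gradients for the vision encoder, which natural-image MLLM
    recipes typically keep frozen during instruction tuning. Assumes the
    model exposes a `vision_encoder` attribute (an assumption here)."""
    for param in model.vision_encoder.parameters():
        param.requires_grad = True
```

Under these assumptions, training would encode each chart at two resolutions, fuse the two token streams with the adapter, and feed the result to the language model. The design choice is to trade extra vision-side compute for better recognition of the dense text that distinguishes charts from natural images.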