Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset
Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, Hongsheng Li
arXiv:2402.14804 (arXiv - MATH - History and Overview), 2024-02-22
Abstract
Recent advancements in Large Multimodal Models (LMMs) have shown promising results in mathematical reasoning within visual contexts, with models approaching human-level performance on existing benchmarks such as MathVista. However, we observe significant limitations in the diversity of questions and breadth of subjects covered by these benchmarks. To address this issue, we present the MATH-Vision (MATH-V) dataset, a meticulously curated collection of 3,040 high-quality mathematical problems with visual contexts sourced from real math competitions. Spanning 16 distinct mathematical disciplines and graded across 5 levels of difficulty, our dataset provides a comprehensive and diverse set of challenges for evaluating the mathematical reasoning abilities of LMMs. Through extensive experimentation, we unveil a notable performance gap between current LMMs and human performance on MATH-V, underscoring the imperative for further advancements in LMMs. Moreover, our detailed categorization allows for a thorough error analysis of LMMs, offering valuable insights to guide future research and development. The project is available at https://mathvision-cuhk.github.io
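
Because the abstract emphasizes the subject and difficulty categorization as the basis for error analysis, the following is a minimal sketch of how one might compute per-subject and per-level accuracy on a MATH-V-style benchmark. It assumes model predictions have been merged into a JSON Lines file with subject, level, answer, and prediction fields; these field names and the file path are illustrative assumptions, not the dataset's documented schema.

    # Hypothetical sketch: per-group accuracy on a MATH-V-style benchmark.
    # Field names ("subject", "level", "answer", "prediction") are assumed,
    # not taken from the dataset's actual release format.
    import json
    from collections import defaultdict

    def load_problems(path):
        """Load benchmark records from a JSON Lines file (one problem per line)."""
        with open(path, encoding="utf-8") as f:
            return [json.loads(line) for line in f]

    def accuracy_by(records, key):
        """Group records by `key` (e.g. "subject" or "level") and compute accuracy."""
        correct, total = defaultdict(int), defaultdict(int)
        for r in records:
            total[r[key]] += 1
            # Exact-match scoring; a real evaluation would likely need
            # answer normalization (units, fractions, LaTeX cleanup).
            if str(r["prediction"]).strip() == str(r["answer"]).strip():
                correct[r[key]] += 1
        return {group: correct[group] / total[group] for group in total}

    if __name__ == "__main__":
        records = load_problems("mathv_predictions.jsonl")  # hypothetical path
        for level, acc in sorted(accuracy_by(records, "level").items()):
            print(f"level {level}: {acc:.1%}")

Breaking accuracy out by the 16 subjects and 5 difficulty levels, rather than reporting a single aggregate score, is what enables the kind of fine-grained error analysis the paper describes.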