Cross-modal multi-relational graph reasoning: A novel model for multimodal textbook comprehension

Information Fusion · IF 15.5 · Region 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-01 · Epub Date: 2025-03-11 · DOI: 10.1016/j.inffus.2025.103082
Lingyun Song, Wenqing Du, Xiaolin Han, Xinbiao Gan, Xiaoqi Wang, Xuequn Shang
{"title":"Cross-modal multi-relational graph reasoning: A novel model for multimodal textbook comprehension","authors":"Lingyun Song ,&nbsp;Wenqing Du ,&nbsp;Xiaolin Han ,&nbsp;Xinbiao Gan ,&nbsp;Xiaoqi Wang ,&nbsp;Xuequn Shang","doi":"10.1016/j.inffus.2025.103082","DOIUrl":null,"url":null,"abstract":"<div><div>The ability to comprehensively understand multimodal textbook content is crucial for developing advanced intelligent tutoring systems and educational tools powered by generative AI. Earlier studies have advanced the understanding of multimodal content in educational by examining static cross-modal graphs that illustrate the relationships between visual objects and textual words. This, however, fails to account for the changes in relationship structures that characterize the visual-textual relationships in different cross-modal tasks. To tackle this issue, we present the Cross-Modal Multi-Relational Graph Reasoning (CMRGR) model. It is capable of analyzing a wide range of interactions between visual and textual components found in textbooks, allowing it to adapt its internal representation dynamically by utilizing contextual signals across different tasks. This capability is an indispensable asset for developing generative AI systems aimed at educational applications. We evaluate CMRGR’s performance on three multimodal textbook datasets, demonstrating its superiority over state-of-the-art baselines in generating accurate classifications and answers.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"120 ","pages":"Article 103082"},"PeriodicalIF":15.5000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525001551","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/3/11 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The ability to comprehensively understand multimodal textbook content is crucial for developing advanced intelligent tutoring systems and educational tools powered by generative AI. Earlier studies have advanced the understanding of multimodal content in educational settings by examining static cross-modal graphs that illustrate the relationships between visual objects and textual words. These static graphs, however, fail to account for how the relational structures linking visual and textual elements change across different cross-modal tasks. To tackle this issue, we present the Cross-Modal Multi-Relational Graph Reasoning (CMRGR) model. It is capable of analyzing a wide range of interactions between the visual and textual components found in textbooks, allowing it to adapt its internal representation dynamically by utilizing contextual signals across different tasks. This capability is an indispensable asset for developing generative AI systems aimed at educational applications. We evaluate CMRGR's performance on three multimodal textbook datasets, demonstrating its superiority over state-of-the-art baselines in generating accurate classifications and answers.
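The abstract does not describe CMRGR's internal architecture, but the core idea it names, reasoning over a joint visual-textual graph with multiple relation types whose influence is modulated by a contextual signal, can be illustrated with a generic sketch. The code below shows one plausible form of relation-specific, context-gated message passing over a shared node set of image regions and text tokens; the class name, relation types, dimensions, and gating scheme are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of cross-modal multi-relational message passing.
# Visual-region and text-token nodes share one graph; each relation type
# has its own adjacency and transform, and a context vector gates how
# strongly each relation contributes. Names and sizes are illustrative.
import torch
import torch.nn as nn


class MultiRelationalCrossModalLayer(nn.Module):
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        # One linear transform per relation type (R-GCN-style updates).
        self.rel_transforms = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_relations)]
        )
        self.self_transform = nn.Linear(dim, dim)
        # Context-conditioned gate over relation types.
        self.rel_gate = nn.Linear(dim, num_relations)

    def forward(self, nodes, adj, context):
        # nodes:   (N, dim)  concatenated visual-region and text-token features
        # adj:     (R, N, N) one row-normalized adjacency matrix per relation
        # context: (dim,)    task/context signal that reweights the relations
        gate = torch.sigmoid(self.rel_gate(context))           # (R,)
        out = self.self_transform(nodes)
        for r, transform in enumerate(self.rel_transforms):
            msg = adj[r] @ transform(nodes)                    # aggregate neighbors
            out = out + gate[r] * msg                          # gate per relation
        return torch.relu(out)


if __name__ == "__main__":
    dim, num_rel = 64, 3                       # e.g. spatial, semantic, co-reference
    vis = torch.randn(5, dim)                  # 5 visual-object nodes
    txt = torch.randn(7, dim)                  # 7 text-token nodes
    nodes = torch.cat([vis, txt], dim=0)       # shared cross-modal node set
    adj = torch.rand(num_rel, 12, 12)
    adj = adj / adj.sum(dim=-1, keepdim=True)  # row-normalize each relation graph
    context = torch.randn(dim)                 # e.g. pooled question embedding
    layer = MultiRelationalCrossModalLayer(dim, num_rel)
    print(layer(nodes, adj, context).shape)    # torch.Size([12, 64])
```

Under this reading, "adapting its internal representation dynamically" corresponds to the context vector changing which relation types dominate the aggregation from task to task, while the node set and relation graphs themselves stay fixed.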
Source journal

Information Fusion
Category: Engineering & Technology — Computer Science: Theory & Methods
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems are welcome.