MKER: multi-modal knowledge extraction and reasoning for future event prediction

Complex & Intelligent Systems · IF 5.0 · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · CAS Zone 2 (Computer Science) · Pub Date: 2025-01-09 · DOI: 10.1007/s40747-024-01741-4
Chenghang Lai, Shoumeng Qiu
Citations: 0

Abstract

Humans can predict what is about to happen, an ability essential for survival, but machines cannot. To equip machines with this ability, we introduce the multi-modal knowledge extraction and reasoning (MKER) framework, which combines external commonsense knowledge, internal visual relation knowledge, and basic information to make inferences. The framework is built on an encoder-decoder structure with three essential components: a visual language reasoning module, an adaptive cross-modality feature fusion module, and a future event description generation module. The visual language reasoning module extracts the relationships among the most informative objects, together with the dynamic evolution of those relationships, from sequential scene graphs and commonsense graphs. A long short-term memory (LSTM) model is employed to track how the object relationships change over time, forming a dynamic object-relationship representation. The adaptive cross-modality feature fusion module then aligns video and language information, using object relationship knowledge as guidance, to learn a vision-language representation. Finally, the future event description generation module decodes the fused information and generates a language description of the next event. Experimental results demonstrate that MKER outperforms existing methods, and ablation studies further illustrate the effectiveness of the designed modules. This work advances the field by providing a way to predict future events, enhancing machine understanding of, and interaction with, dynamic environments.
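The abstract describes the architecture only at a high level. As a concrete illustration, the sketch below shows one way the three modules could fit together in PyTorch; every class name, dimension, and design choice here (precomputed per-frame relation embeddings, cross-attention fusion guided by relation states, a small Transformer decoder) is an assumption made for illustration, not the authors' actual implementation.

```python
# Minimal sketch of an MKER-style pipeline (illustrative only).
# Assumptions: per-frame scene-graph/commonsense relation embeddings are
# precomputed, and the decoder is a small Transformer (causal masking
# omitted for brevity). None of these names come from the paper.
import torch
import torch.nn as nn

class VisualLanguageReasoning(nn.Module):
    """LSTM over per-frame relation embeddings -> dynamic relation states."""
    def __init__(self, rel_dim=512, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(rel_dim, hidden, batch_first=True)

    def forward(self, rel_seq):              # rel_seq: (B, T, rel_dim)
        out, _ = self.lstm(rel_seq)          # (B, T, hidden)
        return out

class AdaptiveFusion(nn.Module):
    """Cross-attention: relation states query the video features."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rel_states, video_feats):   # both (B, T, dim)
        fused, _ = self.attn(rel_states, video_feats, video_feats)
        return self.norm(fused + rel_states)      # residual fusion

class EventDecoder(nn.Module):
    """Autoregressive decoder producing the next-event description."""
    def __init__(self, vocab=10000, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerDecoderLayer(dim, 8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(dim, vocab)

    def forward(self, tokens, memory):       # tokens: (B, L), memory: (B, T, dim)
        x = self.embed(tokens)
        x = self.decoder(x, memory)
        return self.out(x)                   # (B, L, vocab) logits

class MKERSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.reason = VisualLanguageReasoning()
        self.fusion = AdaptiveFusion()
        self.decoder = EventDecoder()

    def forward(self, rel_seq, video_feats, tokens):
        rel_states = self.reason(rel_seq)          # dynamic relations
        memory = self.fusion(rel_states, video_feats)
        return self.decoder(tokens, memory)

# Smoke test with random tensors standing in for real features.
model = MKERSketch()
logits = model(torch.randn(2, 8, 512),             # 8 frames of relation embeddings
               torch.randn(2, 8, 512),             # matching video features
               torch.randint(0, 10000, (2, 12)))   # 12 description tokens
print(logits.shape)                                # torch.Size([2, 12, 10000])
```

The sketch preserves the flow the abstract states: relation sequences are first summarized temporally (here by an LSTM), the resulting relation states guide fusion with the video features, and the decoder conditions on the fused memory to emit the next-event description.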

Source journal: Complex & Intelligent Systems (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
CiteScore: 9.60 · Self-citation rate: 10.30% · Publications: 297

Journal description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.