{"title":"MKER:用于未来事件预测的多模态知识提取和推理","authors":"Chenghang Lai, Shoumeng Qiu","doi":"10.1007/s40747-024-01741-4","DOIUrl":null,"url":null,"abstract":"<p>Humans can predict what will happen shortly, which is essential for survival, but machines cannot. To equip machines with the ability, we introduce the innovative multi-modal knowledge extraction and reasoning (MKER) framework. This framework combines external commonsense knowledge, internal visual relation knowledge, and basic information to make inference. This framework is built on an encoder-decoder structure with three essential components: a visual language reasoning module, an adaptive cross-modality feature fusion module, and a future event description generation module. The visual language reasoning module extracts the object relationships among the most informative objects and the dynamic evolution of the relationship, which comes from the sequence scene graphs and commonsense graphs. The long short-term memory model is employed to explore changes in the object relationships at different times to form a dynamic object relationship. Furthermore, the adaptive cross-modality feature fusion module aligns video and language information by using object relationship knowledge as guidance to learn vision-language representation. Finally, the future event description generation module decodes the fused information and generates the language description of the next event. Experimental results demonstrate that MKER outperforms existing methods. Ablation studies further illustrate the effectiveness of the designed module. 
This work advances the field by providing a way to predict future events, enhance machine understanding, and interact with dynamic environments.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"28 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MKER: multi-modal knowledge extraction and reasoning for future event prediction\",\"authors\":\"Chenghang Lai, Shoumeng Qiu\",\"doi\":\"10.1007/s40747-024-01741-4\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Humans can predict what will happen shortly, which is essential for survival, but machines cannot. To equip machines with the ability, we introduce the innovative multi-modal knowledge extraction and reasoning (MKER) framework. This framework combines external commonsense knowledge, internal visual relation knowledge, and basic information to make inference. This framework is built on an encoder-decoder structure with three essential components: a visual language reasoning module, an adaptive cross-modality feature fusion module, and a future event description generation module. The visual language reasoning module extracts the object relationships among the most informative objects and the dynamic evolution of the relationship, which comes from the sequence scene graphs and commonsense graphs. The long short-term memory model is employed to explore changes in the object relationships at different times to form a dynamic object relationship. Furthermore, the adaptive cross-modality feature fusion module aligns video and language information by using object relationship knowledge as guidance to learn vision-language representation. Finally, the future event description generation module decodes the fused information and generates the language description of the next event. 
Experimental results demonstrate that MKER outperforms existing methods. Ablation studies further illustrate the effectiveness of the designed module. This work advances the field by providing a way to predict future events, enhance machine understanding, and interact with dynamic environments.</p>\",\"PeriodicalId\":10524,\"journal\":{\"name\":\"Complex & Intelligent Systems\",\"volume\":\"28 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2025-01-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Complex & Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s40747-024-01741-4\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-024-01741-4","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract

Humans can anticipate what will happen next, an ability essential for survival, but machines lack this capability. To equip machines with it, we introduce the multi-modal knowledge extraction and reasoning (MKER) framework, which combines external commonsense knowledge, internal visual relation knowledge, and basic visual information to make inferences. The framework is built on an encoder-decoder structure with three essential components: a visual language reasoning module, an adaptive cross-modality feature fusion module, and a future event description generation module. The visual language reasoning module extracts the relationships among the most informative objects, together with the dynamic evolution of those relationships, from sequences of scene graphs and commonsense graphs. A long short-term memory (LSTM) model captures how object relationships change over time, forming a dynamic object relationship representation. The adaptive cross-modality feature fusion module then aligns video and language information, using object relationship knowledge as guidance, to learn a vision-language representation. Finally, the future event description generation module decodes the fused information and generates a language description of the next event. Experimental results demonstrate that MKER outperforms existing methods, and ablation studies further confirm the effectiveness of each designed module. This work advances the field by providing a way to predict future events, enhancing machine understanding of, and interaction with, dynamic environments.
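The abstract does not give implementation details, but the pipeline it describes (an LSTM summarizing per-frame relation embeddings, then relation-guided fusion of video and language features) can be illustrated with a minimal numpy sketch. Everything below is an assumption for illustration only: the function names, dimensions, and the similarity-based fusion weighting are hypothetical stand-ins, not the authors' actual method.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b              # stacked pre-activations, shape (4*H,)
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))       # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))    # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:])               # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def encode_dynamic_relations(relation_seq, W, U, b):
    """Run the LSTM over per-frame relation embeddings; the final hidden
    state summarizes how object relationships evolve across the sequence."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in relation_seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h

def fuse(video_feat, lang_feat, relation_state):
    """Hypothetical relation-guided fusion: weight each modality by its
    similarity to the dynamic relation state, then combine them."""
    scores = np.array([video_feat @ relation_state, lang_feat @ relation_state])
    w = np.exp(scores - scores.max())
    w /= w.sum()                       # softmax over the two modalities
    return w[0] * video_feat + w[1] * lang_feat

rng = np.random.default_rng(0)
D, H = 8, 8                            # embedding dim = hidden dim, so shapes match
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)

relation_seq = [rng.normal(size=D) for _ in range(5)]   # per-frame relation embeddings
rel_state = encode_dynamic_relations(relation_seq, W, U, b)
fused = fuse(rng.normal(size=H), rng.normal(size=H), rel_state)
print(fused.shape)
```

In the paper, the fused representation would then be decoded into a natural-language description of the next event; a text decoder is omitted here to keep the sketch self-contained.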
Journal introduction:
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools, and techniques for cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research on which the journal focuses will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.