MM-Forecast: A Multimodal Approach to Temporal Event Forecasting with Large Language Models
Haoxuan Li, Zhengmao Yang, Yunshan Ma, Yi Bin, Yang Yang, Tat-Seng Chua
arXiv:2408.04388 (arXiv - CS - Multimedia), 2024-08-08
We study an emerging and intriguing problem of multimodal temporal event forecasting with large language models. Compared with text or graph modalities, the use of images for temporal event forecasting has not been fully explored, especially in the era of large language models (LLMs). To bridge this gap, we are particularly interested in two key questions: 1) why images help in temporal event forecasting, and 2) how to integrate images into an LLM-based forecasting framework. To answer these research questions, we identify two essential functions that images play in temporal event forecasting, i.e., highlighting and complementary. We then develop a novel framework, named MM-Forecast, which employs an Image Function Identification module to recognize these functions as verbal descriptions using multimodal large language models (MLLMs) and subsequently incorporates these function descriptions into LLM-based forecasting models. To evaluate our approach, we construct a new multimodal dataset, MidEast-TE-mm, by extending the existing event dataset MidEast-TE-mini with images. Empirical studies demonstrate that MM-Forecast correctly identifies the image functions and, furthermore, that incorporating these verbal function descriptions significantly improves forecasting performance. The dataset, code, and prompts are available at https://github.com/LuminosityX/MM-Forecast.
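
To make the two-stage design described in the abstract concrete, the following is a minimal sketch of such a pipeline: an MLLM first turns each event image into a verbal description of its function (highlighting or complementary), and those descriptions are then appended to the textual event history given to an LLM forecaster. All function names, prompts, data fields, and model calls below are illustrative assumptions for exposition, not the authors' released implementation; see the linked repository for the actual code.

# Minimal, illustrative sketch of an MM-Forecast-style two-stage pipeline.
# call_mllm / call_llm are hypothetical placeholders for whatever multimodal
# and text-only LLM backends are used; they are NOT the authors' API.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Event:
    timestamp: str
    text: str                  # textual description of a historical event
    image_path: Optional[str]  # optional attached image

def call_mllm(image_path: str, prompt: str) -> str:
    """Placeholder for a multimodal LLM call; returns a dummy description here."""
    return "The image highlights the meeting described in the text."

def call_llm(prompt: str) -> str:
    """Placeholder for a text-only LLM call used for forecasting."""
    return "predicted relation: Make_statement"

def identify_image_function(event: Event) -> str:
    """Stage 1 (assumed): ask an MLLM whether the image highlights information
    already in the text or complements it with new information, and return a
    short verbal description of that function."""
    prompt = (
        "Given the event description below, decide whether the attached image "
        "highlights the described event or complements it with extra information, "
        f"and summarize what it shows.\nEvent: {event.text}"
    )
    return call_mllm(event.image_path, prompt)

def forecast(history: List[Event], query: str) -> str:
    """Stage 2 (assumed): build a textual context from the event history, appending
    each image's verbal function description, then ask an LLM for the forecast."""
    lines = []
    for ev in history:
        line = f"[{ev.timestamp}] {ev.text}"
        if ev.image_path is not None:
            line += f" (image: {identify_image_function(ev)})"
        lines.append(line)
    prompt = (
        "Historical events:\n" + "\n".join(lines) +
        f"\n\nQuery: {query}\nPredict the next relation."
    )
    return call_llm(prompt)

if __name__ == "__main__":
    history = [Event("2023-05-01", "Country A holds talks with Country B.", "talks.jpg")]
    print(forecast(history, "What will Country A do toward Country B next?"))

The key design point this sketch tries to capture is that images never enter the forecasting LLM directly: they are converted into short verbal function descriptions that slot into the same textual prompt format used by text-only event forecasters.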