Multimodal Multi-turn Conversation Stance Detection: A Challenge Dataset and Effective Model

Fuqiang Niu, Zebang Cheng, Xianghua Fu, Xiaojiang Peng, Genan Dai, Yin Chen, Hu Huang, Bowen Zhang
{"title":"多模态多转弯对话姿态检测:挑战数据集和有效模型","authors":"Fuqiang Niu, Zebang Cheng, Xianghua Fu, Xiaojiang Peng, Genan Dai, Yin Chen, Hu Huang, Bowen Zhang","doi":"arxiv-2409.00597","DOIUrl":null,"url":null,"abstract":"Stance detection, which aims to identify public opinion towards specific\ntargets using social media data, is an important yet challenging task. With the\nproliferation of diverse multimodal social media content including text, and\nimages multimodal stance detection (MSD) has become a crucial research area.\nHowever, existing MSD studies have focused on modeling stance within individual\ntext-image pairs, overlooking the multi-party conversational contexts that\nnaturally occur on social media. This limitation stems from a lack of datasets\nthat authentically capture such conversational scenarios, hindering progress in\nconversational MSD. To address this, we introduce a new multimodal multi-turn\nconversational stance detection dataset (called MmMtCSD). To derive stances\nfrom this challenging dataset, we propose a novel multimodal large language\nmodel stance detection framework (MLLM-SD), that learns joint stance\nrepresentations from textual and visual modalities. Experiments on MmMtCSD show\nstate-of-the-art performance of our proposed MLLM-SD approach for multimodal\nstance detection. We believe that MmMtCSD will contribute to advancing\nreal-world applications of stance detection research.","PeriodicalId":501480,"journal":{"name":"arXiv - CS - Multimedia","volume":"31 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multimodal Multi-turn Conversation Stance Detection: A Challenge Dataset and Effective Model\",\"authors\":\"Fuqiang Niu, Zebang Cheng, Xianghua Fu, Xiaojiang Peng, Genan Dai, Yin Chen, Hu Huang, Bowen Zhang\",\"doi\":\"arxiv-2409.00597\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Stance detection, which aims to identify public opinion towards specific\\ntargets using social media data, is an important yet challenging task. With the\\nproliferation of diverse multimodal social media content including text, and\\nimages multimodal stance detection (MSD) has become a crucial research area.\\nHowever, existing MSD studies have focused on modeling stance within individual\\ntext-image pairs, overlooking the multi-party conversational contexts that\\nnaturally occur on social media. This limitation stems from a lack of datasets\\nthat authentically capture such conversational scenarios, hindering progress in\\nconversational MSD. To address this, we introduce a new multimodal multi-turn\\nconversational stance detection dataset (called MmMtCSD). To derive stances\\nfrom this challenging dataset, we propose a novel multimodal large language\\nmodel stance detection framework (MLLM-SD), that learns joint stance\\nrepresentations from textual and visual modalities. Experiments on MmMtCSD show\\nstate-of-the-art performance of our proposed MLLM-SD approach for multimodal\\nstance detection. 
We believe that MmMtCSD will contribute to advancing\\nreal-world applications of stance detection research.\",\"PeriodicalId\":501480,\"journal\":{\"name\":\"arXiv - CS - Multimedia\",\"volume\":\"31 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.00597\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.00597","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Stance detection, which aims to identify public opinion towards specific targets using social media data, is an important yet challenging task. With the proliferation of diverse multimodal social media content, including text and images, multimodal stance detection (MSD) has become a crucial research area. However, existing MSD studies have focused on modeling stance within individual text-image pairs, overlooking the multi-party conversational contexts that naturally occur on social media. This limitation stems from a lack of datasets that authentically capture such conversational scenarios, hindering progress in conversational MSD. To address this, we introduce a new multimodal multi-turn conversational stance detection dataset (called MmMtCSD). To derive stances from this challenging dataset, we propose a novel multimodal large language model stance detection framework (MLLM-SD) that learns joint stance representations from the textual and visual modalities. Experiments on MmMtCSD show that our proposed MLLM-SD approach achieves state-of-the-art performance for multimodal stance detection. We believe that MmMtCSD will contribute to advancing real-world applications of stance detection research.
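The abstract does not spell out the dataset schema, but a multimodal multi-turn conversational stance example could be organized along the following lines. This is a minimal sketch; all field names, label values, and the sample conversation are illustrative assumptions, not the actual MmMtCSD format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical schema for one conversational stance example.
# Field names are illustrative; the real MmMtCSD format may differ.

@dataclass
class Turn:
    speaker: str                      # anonymized author of this turn
    text: str                         # the turn's textual content
    image_path: Optional[str] = None  # attached image, if any

@dataclass
class StanceExample:
    target: str                                       # entity the stance concerns
    turns: List[Turn] = field(default_factory=list)   # full conversation history
    label: str = "none"                               # e.g. "favor", "against", "none"

example = StanceExample(
    target="electric vehicles",
    turns=[
        Turn("user_a", "Just saw the new charging map for my city.", "map.png"),
        Turn("user_b", "Still nowhere near enough chargers for a road trip."),
        Turn("user_a", "True, but the rollout pace this year is impressive."),
    ],
    label="favor",  # stance of the final turn's author towards the target
)
print(example.target, "->", example.label)
```

The key difference from conventional MSD data is that the unit of annotation is a whole multi-party, multi-turn thread (with optional images per turn) rather than a single text-image pair.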
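The MLLM-SD architecture itself is not detailed in this abstract. As a rough illustration of the core idea of learning a joint stance representation from textual and visual modalities, here is a minimal late-fusion classifier sketch in PyTorch; the fusion scheme, dimensions, and label set are my assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

class JointStanceClassifier(nn.Module):
    """Minimal late-fusion sketch: concatenate text and image embeddings,
    then classify stance. Not the actual MLLM-SD architecture."""

    def __init__(self, text_dim=768, image_dim=512, hidden=256, num_labels=3):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_labels),  # e.g. favor / against / none
        )

    def forward(self, text_emb, image_emb):
        # text_emb: (batch, text_dim) from any text encoder
        # image_emb: (batch, image_dim) from any vision encoder
        return self.fuse(torch.cat([text_emb, image_emb], dim=-1))

model = JointStanceClassifier()
logits = model(torch.randn(2, 768), torch.randn(2, 512))
print(logits.shape)  # torch.Size([2, 3])
```

In the conversational setting the paper targets, the text embedding would have to summarize the full multi-turn thread rather than a single post, which is what makes the task harder than pairwise MSD.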