{"title":"Injecting Multimodal Information Into Pre-Trained Language Model for Multimodal Sentiment Analysis","authors":"Sijie Mai;Ying Zeng;Aolin Xiong;Haifeng Hu","doi":"10.1109/TAFFC.2025.3553149","DOIUrl":null,"url":null,"abstract":"With the increasing availability of computational and data resources, numerous powerful pre-trained language models (PLMs) have emerged for natural language processing tasks. However, how to inject nonverbal modalities into PLMs to handle multimodal information remains a practical problem. In this paper, we explore the application of PLM on multimodal sentiment analysis from a different perspective. Unlike many recent methods that develop multimodal fusion layers that are sequential to attention layers, we investigate the effectiveness of cross-modal additive attention that is parallel to attention layers, which takes the language modality as dominant modality. Moreover, we devise a gating mechanism to control the flow of nonverbal information by estimating its discriminative level. In this way, we can prevent noisy multimodal information from damaging the performance of pre-trained language model. In our framework, nonverbal modalities serve as auxiliary roles to provide the model with additional information and improve the understanding of multimodal human language. Additionally, cross-modal margin and matching losses are proposed to align the distributions of various modalities and simultaneously retain modality-specific information, which to some extent address the shortcoming of contrastive learning loss. Comprehensive experiments show that our approach surpasses existing state-of-the-art methods on multimodal sentiment analysis and emotion recognition tasks.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"16 3","pages":"2074-2089"},"PeriodicalIF":9.8000,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10935629/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
With the increasing availability of computational and data resources, numerous powerful pre-trained language models (PLMs) have emerged for natural language processing tasks. However, how to inject nonverbal modalities into PLMs so that they can handle multimodal information remains a practical problem. In this paper, we explore the application of PLMs to multimodal sentiment analysis from a different perspective. Unlike many recent methods that develop multimodal fusion layers placed sequentially after attention layers, we investigate the effectiveness of cross-modal additive attention that runs in parallel with the attention layers and treats the language modality as the dominant modality. Moreover, we devise a gating mechanism that controls the flow of nonverbal information by estimating its discriminative level; in this way, we prevent noisy multimodal information from damaging the performance of the pre-trained language model. In our framework, the nonverbal modalities play auxiliary roles, providing the model with additional information and improving its understanding of multimodal human language. Additionally, cross-modal margin and matching losses are proposed to align the distributions of the various modalities while retaining modality-specific information, which to some extent addresses the shortcomings of the contrastive learning loss. Comprehensive experiments show that our approach surpasses existing state-of-the-art methods on multimodal sentiment analysis and emotion recognition tasks.
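To make the architectural idea concrete, below is a minimal PyTorch sketch of a language-dominant layer in which a cross-modal branch runs in parallel with the usual self-attention and a gate scales the nonverbal contribution by its estimated usefulness. All module names, shapes, and the use of standard dot-product multi-head attention (in place of the additive attention the paper names) are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn


class GatedCrossModalInjection(nn.Module):
    """Illustrative layer: self-attention plus a parallel, gated cross-modal branch."""

    def __init__(self, d_text: int, d_nonverbal: int, num_heads: int = 8):
        super().__init__()
        # Standard self-attention over the (dominant) language modality.
        self.self_attn = nn.MultiheadAttention(d_text, num_heads, batch_first=True)
        # Parallel cross-modal branch: text queries attend over nonverbal features.
        self.nonverbal_proj = nn.Linear(d_nonverbal, d_text)
        self.cross_attn = nn.MultiheadAttention(d_text, num_heads, batch_first=True)
        # Gate that estimates how discriminative the nonverbal signal is.
        self.gate = nn.Sequential(nn.Linear(2 * d_text, d_text), nn.Sigmoid())
        self.norm = nn.LayerNorm(d_text)

    def forward(self, text: torch.Tensor, nonverbal: torch.Tensor) -> torch.Tensor:
        # text: (batch, seq_len, d_text); nonverbal: (batch, n_frames, d_nonverbal)
        text_out, _ = self.self_attn(text, text, text)
        nv = self.nonverbal_proj(nonverbal)
        cross_out, _ = self.cross_attn(text, nv, nv)
        # The gate decides, per position, how much nonverbal information flows
        # in, so noisy audio/visual features cannot overwhelm the language branch.
        g = self.gate(torch.cat([text_out, cross_out], dim=-1))
        return self.norm(text + text_out + g * cross_out)


# Usage with hypothetical dimensions (768 for a BERT-like PLM, 74 for an
# assumed acoustic feature size); d_text must be divisible by num_heads.
layer = GatedCrossModalInjection(d_text=768, d_nonverbal=74)
out = layer(torch.randn(2, 20, 768), torch.randn(2, 50, 74))  # (2, 20, 768)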
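The abstract gives less detail on the cross-modal margin and matching losses, so the following is only a plausible sketch under stated assumptions: the margin term encourages each utterance's text and nonverbal embeddings to be closer than mismatched pairs by at least a margin (rather than forcing them to coincide, which would erase modality-specific information), and the matching term trains a hypothetical pair-matching head to distinguish matched pairs from shuffled ones. The exact formulation here is an assumption, not the paper's equations.

import torch
import torch.nn.functional as F


def margin_loss(text_z: torch.Tensor, nonverbal_z: torch.Tensor, margin: float = 0.2):
    # text_z, nonverbal_z: (batch, d) utterance-level embeddings.
    pos = F.cosine_similarity(text_z, nonverbal_z)                   # matched pairs
    neg = F.cosine_similarity(text_z, nonverbal_z.roll(1, dims=0))   # shuffled pairs
    # Matched pairs must beat mismatched pairs by at least `margin`,
    # but are not pushed to be identical across modalities.
    return F.relu(margin + neg - pos).mean()


def matching_loss(scores_matched: torch.Tensor, scores_mismatched: torch.Tensor):
    # scores_*: (batch,) logits from an assumed pair-matching head.
    ones = torch.ones_like(scores_matched)
    zeros = torch.zeros_like(scores_mismatched)
    return (F.binary_cross_entropy_with_logits(scores_matched, ones)
            + F.binary_cross_entropy_with_logits(scores_mismatched, zeros)) / 2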
About the journal:
The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.