Author: Yuanchen Chai
Journal: Tehnicki vjesnik - Technical Gazette, Vol. 21 S8
Publication date: 2024-04-15
Publication type: Journal Article
DOI: 10.17559/tv-20230628001154
Emotion Intensity Detection in Online Media: An Attention Mechanism Based Multimodal Deep Learning Approach
Abstract: With the growing influence of online public opinion, mining opinions and analyzing trends from the massive data produced by online media is important for understanding user sentiment, managing brand reputation, analyzing public opinion, and optimizing marketing strategies. Combining data from multiple perceptual modalities yields more comprehensive and accurate sentiment analysis results. However, multimodal sentiment analysis faces challenges such as data fusion, modality imbalance, and inter-modality correlation. To address these challenges, the paper introduces an attention mechanism into multimodal sentiment analysis: it constructs text, image, and audio feature extractors, uses a custom cross-modal attention layer to compute attention weights between the modalities, and finally fuses the attention-weighted features for sentiment classification. Through the cross-modal attention mechanism, the model automatically learns the correlations between modalities, dynamically adjusts the modality weights, and selectively fuses features from different modalities, thereby improving the accuracy and expressiveness of sentiment analysis.
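The abstract does not specify the paper's exact architecture, but the fusion step it describes (per-modality extractors, a cross-modal attention layer producing one weight per modality, then a weighted fusion) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the shared projection dimension `d`, the query/key matrices `W_q`/`W_k`, and the scaled dot-product scoring are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(features, W_q, W_k):
    """Fuse modality features with one attention weight per modality.

    features : dict mapping modality name -> (d,) feature vector,
               all assumed already projected to a shared dimension d
               (hypothetical setup; the paper's extractors are not specified).
    W_q, W_k : (d, d) learned projection matrices (assumed).
    Returns the fused (d,) vector and the per-modality weights.
    """
    names = list(features)
    X = np.stack([features[n] for n in names])   # (m, d) stacked modalities
    q = X.mean(axis=0) @ W_q                     # shared query from all modalities
    K = X @ W_k                                  # per-modality keys
    scores = K @ q / np.sqrt(K.shape[1])         # scaled dot-product scores, (m,)
    weights = softmax(scores)                    # attention weights, sum to 1
    fused = weights @ X                          # attention-weighted fusion, (d,)
    return fused, dict(zip(names, weights))

# Usage with random stand-in features for the three modalities.
rng = np.random.default_rng(0)
d = 8
feats = {m: rng.normal(size=d) for m in ("text", "image", "audio")}
W_q = rng.normal(size=(d, d)) * 0.1
W_k = rng.normal(size=(d, d)) * 0.1
fused, w = cross_modal_attention(feats, W_q, W_k)
print(fused.shape, w)  # weights form a distribution over modalities
```

The fused vector would then feed a standard classification head; because the weights are data-dependent, the model can down-weight a weak or noisy modality per example, which is the "dynamic modality weighting" the abstract refers to.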