Soyeon Hong, Jeonghoon Kim, Donghoon Lee, Hyunsouk Cho
Demusa: Demo for Multimodal Sentiment Analysis
2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)
Published 2022-07-18 · DOI: 10.1109/ICMEW56448.2022.9859289
Recently, many Multimodal Sentiment Analysis (MSA) models have appeared that aim to understand opinions expressed in multimedia. To accelerate MSA research, CMU-MOSI and CMU-MOSEI were released as open datasets. However, it is hard to observe the input data elements in detail and to analyze a prediction model's results on each video clip for qualitative evaluation. For these reasons, this paper presents DeMuSA, a demo for multimodal sentiment analysis that explores raw data instances and compares prediction models at the utterance level.
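The utterance-level model comparison the abstract describes can be sketched in plain Python. This is an illustrative assumption, not DeMuSA's actual implementation: the utterance ids, scores, and the `disagreements` helper below are all hypothetical, and real inputs would come from CMU-MOSI/CMU-MOSEI rather than the fabricated values shown here.

```python
# Hypothetical sketch of utterance-level comparison between two MSA models,
# the kind of qualitative analysis a tool like DeMuSA supports.
# All data below is fabricated for illustration.

# Gold sentiment scores and two models' predictions, keyed by utterance id.
gold = {"clip1_u0": 1.2, "clip1_u1": -0.8, "clip2_u0": 2.0}
model_a = {"clip1_u0": 1.0, "clip1_u1": 0.3, "clip2_u0": 1.8}
model_b = {"clip1_u0": 1.1, "clip1_u1": -0.6, "clip2_u0": 0.2}

def disagreements(gold, a, b, threshold=0.5):
    """Return utterance ids where the two models' predictions differ by
    more than `threshold`, strongest disagreement first — candidates
    for manual inspection of the underlying video clip."""
    diffs = {u: abs(a[u] - b[u]) for u in gold}
    return sorted((u for u, d in diffs.items() if d > threshold),
                  key=lambda u: -diffs[u])

print(disagreements(gold, model_a, model_b))  # → ['clip2_u0', 'clip1_u1']
```

Surfacing the utterances where models disagree most is one simple way to pick which clips deserve the detailed, per-instance inspection the paper argues is otherwise hard to do.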