{"title":"Real-time Human-Music Emotional Interaction Based on Multimodal Analysis","authors":"Tianyue Jiang, Sanhong Deng, Peng Wu, Haibi Jiang","doi":"10.1109/cost57098.2022.00020","DOIUrl":null,"url":null,"abstract":"Music, as an important part of the culture, occupies a significant position and can be easily accessed. The research on the sentiment represented by music and its effect on the listener’s emotion is increasing gradually, but the existing research is often subjective and neglects the real-time expression of emotion. In this article, two labeled datasets are established. The deep learning method is used to classify music sentiment while the decision-level fusion method is used for real-time listener multimodal sentiment. We combine the sentiment analysis with a traditional online music playback system and propose innovatively a human-music emotional interaction system, using multimodal sentiment analysis based on the deep learning method. By means of individual observation and questionnaire survey, the interaction between human-music sentiments is proved to have a positive influence on listeners’ negative emotions.","PeriodicalId":135595,"journal":{"name":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","volume":"308 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/cost57098.2022.00020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Music is an important and easily accessible part of culture. Research on the sentiment expressed by music and its effect on listeners' emotions is growing, but existing studies are often subjective and neglect the real-time expression of emotion. In this article, two labeled datasets are established. A deep learning method is used to classify music sentiment, while decision-level fusion is used for real-time multimodal analysis of listener sentiment. We combine this sentiment analysis with a traditional online music playback system and propose a novel human-music emotional interaction system built on deep-learning-based multimodal sentiment analysis. Through individual observation and a questionnaire survey, the interaction between human and music sentiments is shown to have a positive influence on listeners' negative emotions.
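The abstract names decision-level fusion as the mechanism for combining per-modality listener sentiment, but does not specify the fusion rule. The sketch below illustrates one common instance of decision-level fusion, weighted averaging of each modality's class-probability vector; the modality names, class labels, weights, and the `fuse_decisions` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_decisions(modality_probs, weights):
    """Decision-level fusion sketch: each classifier (one per modality)
    outputs a probability vector over the same emotion classes; the
    vectors are combined by a weighted average and renormalized."""
    probs = np.asarray(modality_probs, dtype=float)   # shape: (modalities, classes)
    w = np.asarray(weights, dtype=float)
    fused = (w[:, None] * probs).sum(axis=0) / w.sum()
    return fused / fused.sum()

# Hypothetical example: facial-expression and speech classifiers
# over four emotion classes, with the face modality weighted higher.
labels = ["happy", "sad", "angry", "neutral"]
face   = [0.10, 0.60, 0.10, 0.20]
speech = [0.20, 0.50, 0.10, 0.20]

fused = fuse_decisions([face, speech], weights=[0.6, 0.4])
print(labels[int(np.argmax(fused))])  # -> "sad"
```

Because fusion happens on classifier outputs rather than raw features, each modality's model can run and be retrained independently, which suits the real-time setting the paper describes.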