Recognition of emotions in music through the Adaptive-Network-Based Fuzzy (ANFIS)

IF 1.1 | CAS Category 4 (Computer Science) | JCR Q4 (Computer Science, Interdisciplinary Applications) | Journal of New Music Research | Pub Date: 2021-08-08 | DOI: 10.1080/09298215.2021.1977339
Paulo Sergio da Conceição Moreira, D. Tsunoda
{"title":"基于自适应网络的模糊音乐情感识别","authors":"Paulo Sergio da Conceição Moreira, D. Tsunoda","doi":"10.1080/09298215.2021.1977339","DOIUrl":null,"url":null,"abstract":"This study aims to recognise emotions in music through the Adaptive-Network-Based Fuzzy (ANFIS). For this, we applied such structure in 877 MP3 files with thirty seconds duration each, collected directly on the YouTube platform, which represent the emotions anger, fear, happiness, sadness, and surprise. We developed four classification strategies, consisting of sets of five, four, three, and two emotions. The results were considered promising, especially for three and two emotions, whose highest hit rates were 65.83% for anger, happiness and sadness, and 88.75% for anger and sadness. A reduction in the hit rate was observed when the emotions fear and happiness were in the same set, raising the hypothesis that only the audio content is not enough to distinguish between these emotions. Based on the results, we identified potential in the application of the ANFIS framework for problems with uncertainty and subjectivity.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"342 - 354"},"PeriodicalIF":1.1000,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Recognition of emotions in music through the Adaptive-Network-Based Fuzzy (ANFIS)\",\"authors\":\"Paulo Sergio da Conceição Moreira, D. Tsunoda\",\"doi\":\"10.1080/09298215.2021.1977339\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This study aims to recognise emotions in music through the Adaptive-Network-Based Fuzzy (ANFIS). For this, we applied such structure in 877 MP3 files with thirty seconds duration each, collected directly on the YouTube platform, which represent the emotions anger, fear, happiness, sadness, and surprise. We developed four classification strategies, consisting of sets of five, four, three, and two emotions. The results were considered promising, especially for three and two emotions, whose highest hit rates were 65.83% for anger, happiness and sadness, and 88.75% for anger and sadness. A reduction in the hit rate was observed when the emotions fear and happiness were in the same set, raising the hypothesis that only the audio content is not enough to distinguish between these emotions. 
Based on the results, we identified potential in the application of the ANFIS framework for problems with uncertainty and subjectivity.\",\"PeriodicalId\":16553,\"journal\":{\"name\":\"Journal of New Music Research\",\"volume\":\"50 1\",\"pages\":\"342 - 354\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2021-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of New Music Research\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1080/09298215.2021.1977339\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of New Music Research","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1080/09298215.2021.1977339","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 2

Abstract

This study aims to recognise emotions in music through the Adaptive-Network-Based Fuzzy Inference System (ANFIS). To this end, we applied this structure to 877 MP3 files of thirty seconds each, collected directly from the YouTube platform, representing the emotions anger, fear, happiness, sadness, and surprise. We developed four classification strategies, consisting of sets of five, four, three, and two emotions. The results were considered promising, especially for the three- and two-emotion sets, whose highest hit rates were 65.83% for anger, happiness and sadness, and 88.75% for anger and sadness. A reduction in the hit rate was observed when fear and happiness were in the same set, raising the hypothesis that audio content alone is not enough to distinguish between these emotions. Based on the results, we identified potential in the application of the ANFIS framework to problems involving uncertainty and subjectivity.
Source journal
Journal of New Music Research (Engineering & Technology - Computer Science: Interdisciplinary Applications)
CiteScore: 3.20
Self-citation rate: 0.00%
Articles published: 5
Review time: >12 weeks
Journal description: The Journal of New Music Research (JNMR) publishes material which increases our understanding of music and musical processes by systematic, scientific and technological means. Research published in the journal is innovative, empirically grounded and often, but not exclusively, uses quantitative methods. Articles are both musically relevant and scientifically rigorous, giving full technical details. No bounds are placed on the music or musical behaviours at issue: popular music, music of diverse cultures and the canon of western classical music are all within the Journal's scope. Articles deal with theory, analysis, composition, performance, uses of music, instruments and other music technologies. The Journal was founded in 1972 with the original title Interface to reflect its interdisciplinary nature, drawing on musicology (including music theory), computer science, psychology, acoustics, philosophy, and other disciplines.
Latest articles from this journal
Data structures for music encoding: tables, trees, and graphs
‘Texting Scarlatti’: unlocking a standard edition with a digital toolkit
Detecting chord tone alterations and suspensions
Digital critical edition of Čiurlionis' piano music: a case study
Tempering the clavier: a corpus-based examination of Bach’s cognition of intonation in the Well-Tempered Clavier