Learning to generate emotional music correlated with music structure features

Cognitive Computation and Systems · IF 1.2 · Q4 (Computer Science, Artificial Intelligence) · Pub Date: 2022-02-04 · DOI: 10.1049/ccs2.12037
Lin Ma, Wei Zhong, Xin Ma, Long Ye, Qin Zhang
Citations: 1

Abstract


Music can be regarded as an art of expressing inner feelings. However, most existing networks for music generation ignore the analysis of its emotional expression. In this paper, we propose to synthesise music according to a specified emotion, and also integrate the internal structural characteristics of music into the generation process. Specifically, we embed the emotion labels together with music structure features as the conditional input and then train a GRU network to generate emotional music. In addition to the generator, we design a novel perceptually optimised emotion classification model that aims to push the generated music closer to the emotional expression of real music. To validate the effectiveness of the proposed framework, both subjective and objective experiments are conducted, verifying that our method can produce emotional music correlated with the specified emotion and music structures.
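The conditioning scheme described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' code): at each step, a GRU cell receives the previous note embedding concatenated with an emotion label and music-structure features as its conditional input. All dimensions and the readout are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ConditionalGRUCell:
    """A plain GRU cell whose input concatenates the note embedding
    with a fixed condition vector (emotion label + structure features)."""

    def __init__(self, note_dim, cond_dim, hidden_dim):
        in_dim = note_dim + cond_dim  # conditional input size
        self.Wz = rng.standard_normal((hidden_dim, in_dim + hidden_dim)) * 0.1
        self.Wr = rng.standard_normal((hidden_dim, in_dim + hidden_dim)) * 0.1
        self.Wh = rng.standard_normal((hidden_dim, in_dim + hidden_dim)) * 0.1

    def step(self, x, cond, h):
        xc = np.concatenate([x, cond])                 # note + condition
        z = sigmoid(self.Wz @ np.concatenate([xc, h])) # update gate
        r = sigmoid(self.Wr @ np.concatenate([xc, h])) # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([xc, r * h]))
        return (1 - z) * h + z * h_tilde

def generate(cell, cond, steps, note_dim, hidden_dim):
    """Roll the cell forward, re-feeding part of the hidden state as a
    crude note embedding; a real model would sample from an output layer."""
    h = np.zeros(hidden_dim)
    x = np.zeros(note_dim)
    out = []
    for _ in range(steps):
        h = cell.step(x, cond, h)
        x = h[:note_dim]  # placeholder readout
        out.append(x.copy())
    return np.stack(out)

# Condition = one-hot emotion label (4 classes, assumed) + 8-dim structure features
emotion = np.eye(4)[2]
structure = rng.standard_normal(8)
cond = np.concatenate([emotion, structure])

cell = ConditionalGRUCell(note_dim=16, cond_dim=cond.size, hidden_dim=32)
seq = generate(cell, cond, steps=24, note_dim=16, hidden_dim=32)
print(seq.shape)  # (24, 16)
```

Changing `emotion` alters the condition vector at every step, which is how the specified emotion steers the whole generated sequence; the perceptually optimised classifier in the paper would then score such sequences against real music during training.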

Source journal: Cognitive Computation and Systems (Computer Science: Computer Science Applications)
CiteScore: 2.50
Self-citation rate: 0.00%
Articles per year: 39
Review time: 10 weeks