Deep Music Information Dynamics: Novel Framework for Reduced Neural-Network Music Representation with Applications to MIDI and Audio Analysis and Improvisation

Journal of Creative Music Systems (Q2, Arts and Humanities) · Pub Date: 2022-04-30 · DOI: 10.5920/jcms.894
S. Dubnov, K. Chen, Kevin Huang
{"title":"Deep Music Information Dynamics Novel Framework for Reduced Neural-Network Music Representation with Applications to Midi and Audio Analysis and Improvisation","authors":"S. Dubnov, K. Chen, Kevin Huang","doi":"10.5920/jcms.894","DOIUrl":null,"url":null,"abstract":"Generative musical models often comprise of multiple levels of structure, presuming that the process of composition moves between background to foreground, or between generating musical surface and some deeper and reduced representation that governs hidden or latent dimensions of music.  In this paper we are using a recently proposed framework called Deep Musical Information Dynamics (DMID) to explore information contents of deep neural models of music through rate reduction of latent representation streams, which is contrasted with hight rate information dynamics of the musical surface. This approach is partially motivated by rate-distortion theories of human cognition, providing a framework for exploring possible relations between imaginary anticipations existing in the listener's or composer's mind, and the information dynamics of the sensory (acoustic) or symbolic score data. In the paper the DMID framework is demonstrated using several experiments with symbolic (MIDI) and acoustic (spectral) music representations. We use variational encoding to learn a latent representation of the musical surface. This embedding is further reduced using a bit-allocation method into a second stream of low bit-rate encoding. The combined loss includes temporal information in terms of predictive properties for each encoding stream, and accuracy loss measured in terms of mutual information between the encoding at low rate and the high rate surface representations. For the case of counterpoint, we also study the mutual information between two voices in a musical piece at different levels of information reduction.The DMID framework allows to explore aspects of computational creativity in terms of juxtaposition of latent/imaginary surprisal aspects of deeper structure with music surprisal on the surface level, done in a manner that is quantifiable and computationally tractable. The relevant information theory modeling and analysis methods are discussed in the paper, suggesting that a trade off between compression and prediction play an important factor in the analysis and design of creative musical systems.","PeriodicalId":52272,"journal":{"name":"Journal of Creative Music Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Creative Music Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5920/jcms.894","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Arts and Humanities","Score":null,"Total":0}
Citations: 1

Abstract

Generative musical models often comprise multiple levels of structure, presuming that the process of composition moves from background to foreground, or between generating the musical surface and a deeper, reduced representation that governs hidden or latent dimensions of the music. In this paper we use a recently proposed framework called Deep Musical Information Dynamics (DMID) to explore the information content of deep neural models of music through rate reduction of latent representation streams, which is contrasted with the high-rate information dynamics of the musical surface. This approach is partially motivated by rate-distortion theories of human cognition, providing a framework for exploring possible relations between imaginary anticipations existing in the listener's or composer's mind and the information dynamics of the sensory (acoustic) or symbolic score data. In the paper, the DMID framework is demonstrated through several experiments with symbolic (MIDI) and acoustic (spectral) music representations. We use variational encoding to learn a latent representation of the musical surface. This embedding is further reduced, using a bit-allocation method, into a second, low bit-rate encoding stream. The combined loss includes temporal information, in terms of the predictive properties of each encoding stream, and an accuracy loss, measured as the mutual information between the low-rate encoding and the high-rate surface representation. For the case of counterpoint, we also study the mutual information between two voices in a musical piece at different levels of information reduction. The DMID framework makes it possible to explore aspects of computational creativity by juxtaposing the latent/imaginary surprisal of the deeper structure with musical surprisal at the surface level, in a manner that is quantifiable and computationally tractable. The relevant information-theoretic modeling and analysis methods are discussed in the paper, suggesting that a trade-off between compression and prediction plays an important role in the analysis and design of creative musical systems.
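To make the combined objective concrete, the sketch below gives one plausible reading of it in PyTorch: a variational encoder yields a high-rate latent stream for the musical surface, a Gumbel-softmax layer stands in for the bit-allocation step that produces the low bit-rate stream, one-step predictors supply the temporal (predictive) terms, and an InfoNCE-style contrastive bound serves as a surrogate for the mutual information between the low-rate codes and the high-rate representation. All module names, dimensions, and the Gumbel-softmax/InfoNCE choices are illustrative assumptions, not the method reported in the paper.

```python
# A minimal sketch of a two-stream DMID-style objective (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamEncoder(nn.Module):
    def __init__(self, surface_dim=128, latent_dim=32, codebook_size=16):
        super().__init__()
        self.enc = nn.Linear(surface_dim, 2 * latent_dim)    # variational encoder (mu, logvar)
        self.reduce = nn.Linear(latent_dim, codebook_size)   # rate-reduction ("bit allocation") head
        self.predict_hi = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.predict_lo = nn.GRU(codebook_size, codebook_size, batch_first=True)
        self.proj = nn.Linear(codebook_size, latent_dim)     # critic projection for the MI bound

    def forward(self, x):                                    # x: (batch, time, surface_dim)
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z_hi = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)       # reparameterised sample
        z_lo = F.gumbel_softmax(self.reduce(z_hi), tau=1.0, hard=True)   # discrete low-rate codes
        return mu, logvar, z_hi, z_lo

    def loss(self, x, beta=1.0):
        mu, logvar, z_hi, z_lo = self(x)
        # rate term of the variational (high-rate) encoding
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        # crude stand-in for the predictive-information terms: one-step prediction per stream
        pred_hi, _ = self.predict_hi(z_hi[:, :-1])
        pred_lo, _ = self.predict_lo(z_lo[:, :-1])
        temporal = F.mse_loss(pred_hi, z_hi[:, 1:]) + F.mse_loss(pred_lo, z_lo[:, 1:])
        # InfoNCE lower bound on I(z_lo; z_hi); subtracting it keeps the low-rate
        # stream informative about the high-rate surface representation
        q = self.proj(z_lo).reshape(-1, z_hi.size(-1))
        k = z_hi.reshape(-1, z_hi.size(-1))
        logits = q @ k.t()
        labels = torch.arange(logits.size(0), device=logits.device)
        mi_bound = -F.cross_entropy(logits, labels)
        return kl + temporal - beta * mi_bound


# toy usage: a batch of 8 random "surface" sequences, 64 frames of 128 features each
model = TwoStreamEncoder()
x = torch.randn(8, 64, 128)
print(model.loss(x).item())
```

The weight beta plays the role of the compression-versus-prediction trade-off discussed in the abstract: raising it favours keeping the low-rate stream informative about the surface, lowering it favours a more aggressively reduced representation.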
Source journal
Journal of Creative Music Systems (Arts and Humanities – Music)
CiteScore: 1.20
Self-citation rate: 0.00%
Articles per year: 8
Review time: 12 weeks