
Journal of Creative Music Systems: Latest Publications

Title Pending 1311
Q2 Arts and Humanities Pub Date: 2023-09-04 DOI: 10.5920/jcms.1311
Gerald Moshammer
Abstract animation in the form of “visual music” facilitates both discovery and priming of musical motion that synthesises diverse acoustic parameters. In this article, two scenes of AudioVisualizer, an open-source Chrome extension, are applied to the nine musical poems of Robert Schumann’s Forest Scenes, with the goal of establishing a basic framework of expressive cross-modal qualities that, in audiovisual synchrony, become apparent through visual abstraction and the emergence of defined dynamic Gestalts. The animations that form this article’s core exemplify hands-on how particular ways of real-time analogue music tracking convert score structure and acoustic information into continuous dynamic images. The interplay between basic principles of information capture and concrete simulation in the processing of music provides one crucial entry point to fundamental questions as to how music generates meaning and non-acoustic signification. Additionally, the considerations in this article may motivate the creation of new stimuli in empirical music research as well as stimulate new approaches to the teaching of music.
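The core move of converting tracked acoustic information into a continuous dynamic image can be illustrated with a minimal Python sketch (not the AudioVisualizer extension itself): a frame-wise energy envelope is extracted from a signal and mapped onto a single visual parameter, the radius of an abstract shape. All parameter values and names below are illustrative assumptions.

```python
# A minimal sketch of audio-to-visual parameter mapping; illustrative only.
import numpy as np

SR = 22050          # sample rate in Hz
FRAME = 1024        # analysis window length
HOP = 512           # hop size between windows

def rms_envelope(signal: np.ndarray) -> np.ndarray:
    """Frame-wise RMS energy, a crude stand-in for perceived loudness."""
    n_frames = 1 + (len(signal) - FRAME) // HOP
    frames = np.lib.stride_tricks.sliding_window_view(signal, FRAME)[::HOP][:n_frames]
    return np.sqrt((frames ** 2).mean(axis=1))

def to_radius(rms: np.ndarray, r_min=5.0, r_max=50.0) -> np.ndarray:
    """Map normalised energy onto the radius of an abstract shape."""
    norm = (rms - rms.min()) / (np.ptp(rms) + 1e-9)
    return r_min + norm * (r_max - r_min)

# Demo: a decaying 440 Hz tone stands in for a piano note.
t = np.linspace(0, 2.0, int(SR * 2.0), endpoint=False)
tone = np.sin(2 * np.pi * 440 * t) * np.exp(-2 * t)
print(to_radius(rms_envelope(tone))[:5])  # radii shrink as the tone decays
```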
Citations: 0
Oktoechos Classification and Generation of Liturgical Music using Deep Learning Frameworks
Q2 Arts and Humanities Pub Date: 2023-07-10 DOI: 10.5920/jcms.1014
R. Rajan, Varsha Shiburaj, Amlu Anna Joshy
An important feature of the music repertoire of the Syrian tradition is the system of classifying melodies into eight tunes, called ’oktoechos’. In the oktoechos tradition, liturgical hymns are sung in eight modes or eight colours (known as eight ’niram’ in the Indian tradition). In this paper, recurrent neural network (RNN) models are used for oktoechos genre classification with the help of musical texture features (MTF) and i-vectors. The performance of the proposed approaches is evaluated using a newly created corpus of liturgical music in the South Indian language Malayalam. Long short-term memory (LSTM)-based and gated recurrent unit (GRU)-based experiments report average classification accuracies of 83.76% and 77.77%, respectively, with a significant margin over the i-vector-DNN framework. The experiments demonstrate the potential of RNN models in learning temporal information through MTF when recognizing the eight modes of the oktoechos system. Furthermore, since the Greek liturgy and Gregorian chant also share similar musical traits with the Syrian tradition, the musicological insights observed can potentially be applied to those traditions. Generation of oktoechos genre music is also discussed using an encoder-decoder framework. The quality of the generated files is evaluated using a perception test.
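For readers unfamiliar with the setup, here is a minimal sketch, assuming PyTorch, of the kind of recurrent classifier the paper describes: sequences of musical texture features (MTF) in, one of the eight oktoechos modes out. The feature dimension and layer sizes are illustrative, not the authors' actual configuration.

```python
# A minimal LSTM mode classifier over MTF sequences; sizes are illustrative.
import torch
import torch.nn as nn

class OktoechosLSTM(nn.Module):
    def __init__(self, n_features=60, hidden=128, n_modes=8):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_modes)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden)
        return self.head(h_n[-1])         # logits over the eight modes

model = OktoechosLSTM()
dummy = torch.randn(4, 200, 60)           # 4 clips, 200 frames of MTF each
print(model(dummy).shape)                 # torch.Size([4, 8])
```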
Citations: 0
Editorial: JCMS Special Issue of the first Conference on AI Music Creativity
Q2 Arts and Humanities Pub Date: 2022-08-30 DOI: 10.5920/jcms.1246
Cale Plut, Philippe Pasquier, Anna Jordanous
The International Conference on AI Music Creativity (AIMC, https://aimusiccreativity.org/) is the merger of the international workshop on Musical Metacreation (MUME, https://musicalmetacreation.org/) and the conference series on Computer Simulation of Music Creativity (CSMC, https://csmc2018.wordpress.com/). This special issue gathers selected papers from the first edition of the conference along with paper versions of two of its keynotes. It contains six papers that apply novel approaches to the generation and classification of music. Covering several generative musical tasks, such as composition, rhythm generation and orchestration, as well as machine listening tasks of tempo and genre recognition, these selected papers present state-of-the-art techniques in music AI. The issue opens with an ode to computer musicking by keynote speaker Alice Eldridge, and with Johan Sundberg's account of analysis-by-synthesis for musical applications.
Citations: 0
Contemporary music genre rhythm generation with machine learning
Q2 Arts and Humanities Pub Date: 2022-05-17 DOI: 10.5920/jcms.902
Gabriel Vigliensoni, Louis McCallum, Esteban Maestre, R. Fiebrink
In this article, we present research on customizing a variational autoencoder (VAE) neural network to learn models of, and play with, musical rhythms encoded within a latent space. The system uses a data structure that is capable of encoding rhythms in simple and compound meter and can learn models from little training data. To facilitate the exploration of models, we implemented a visualizer that relies on the dynamic nature of the pulsing rhythmic patterns. To test our system in real-life musical practice, we collected small-scale datasets of contemporary music genre rhythms and trained models with them. We found that the non-linearities of the learned latent spaces, coupled with tactile interfaces for interacting with the models, were very expressive and led to unexpected places in composition and live-performance settings. A music album was recorded and premiered at a major music festival using the VAE latent space on stage.
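A minimal sketch, assuming PyTorch, of a variational autoencoder over binary rhythm grids such as the one the article customises: a pattern is an (instruments × steps) onset grid, and the low-dimensional latent space is what a performer would explore. All sizes are illustrative assumptions.

```python
# A minimal rhythm VAE; dimensions and architecture are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_STEPS, N_INSTR, LATENT = 16, 9, 2       # one bar, 9 drum voices, 2-D latent

class RhythmVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(N_STEPS * N_INSTR, 64)
        self.mu = nn.Linear(64, LATENT)
        self.logvar = nn.Linear(64, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                                 nn.Linear(64, N_STEPS * N_INSTR))

    def forward(self, x):
        h = F.relu(self.enc(x.flatten(1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    bce = F.binary_cross_entropy_with_logits(recon, x.flatten(1), reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

batch = (torch.rand(8, N_INSTR, N_STEPS) > 0.7).float()  # fake onset grids
recon, mu, logvar = RhythmVAE()(batch)
print(vae_loss(recon, batch, mu, logvar).item())
```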
Citations: 0
Deep Music Information Dynamics: A Novel Framework for Reduced Neural-Network Music Representation with Applications to MIDI and Audio Analysis and Improvisation
Q2 Arts and Humanities Pub Date: 2022-04-30 DOI: 10.5920/jcms.894
S. Dubnov, K. Chen, Kevin Huang
Generative musical models often comprise multiple levels of structure, presuming that the process of composition moves between background and foreground, or between generating the musical surface and some deeper, reduced representation that governs hidden or latent dimensions of music. In this paper we use a recently proposed framework called Deep Musical Information Dynamics (DMID) to explore the information contents of deep neural models of music through rate reduction of latent representation streams, contrasted with the high-rate information dynamics of the musical surface. This approach is partially motivated by rate-distortion theories of human cognition, providing a framework for exploring possible relations between the imaginary anticipations existing in the listener's or composer's mind and the information dynamics of the sensory (acoustic) or symbolic score data. The DMID framework is demonstrated using several experiments with symbolic (MIDI) and acoustic (spectral) music representations. We use variational encoding to learn a latent representation of the musical surface. This embedding is further reduced, using a bit-allocation method, into a second stream of low-bit-rate encoding. The combined loss includes temporal information in terms of predictive properties for each encoding stream, and accuracy loss measured in terms of mutual information between the low-rate encoding and the high-rate surface representations. For the case of counterpoint, we also study the mutual information between two voices in a musical piece at different levels of information reduction. The DMID framework allows exploring aspects of computational creativity by juxtaposing latent/imaginary surprisal aspects of deeper structure with musical surprisal on the surface level, in a manner that is quantifiable and computationally tractable. The relevant information-theoretic modeling and analysis methods are discussed, suggesting that the trade-off between compression and prediction plays an important role in the analysis and design of creative musical systems.
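One ingredient of the framework as described, the mutual information between a low-rate stream and the high-rate surface, can be sketched with a simple histogram estimator over two paired streams. The paper works with learned encodings, so everything below is purely illustrative.

```python
# A minimal histogram estimate of I(A;B) in bits between two paired streams.
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins=8) -> float:
    """I(A;B) from a joint 2-D histogram of paired samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float((p_ab[mask] * np.log2(p_ab[mask] / (p_a @ p_b)[mask])).sum())

rng = np.random.default_rng(0)
surface = rng.normal(size=5000)                       # "high-rate" stream
latent = np.round(surface * 2) / 2 + rng.normal(scale=0.3, size=5000)  # reduced copy
print(mutual_information(surface, latent))            # > 0: latent predicts surface
```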
Citations: 1
Computer Musicking as Onto-Epistemic Playground: On the Joy of Developing Complexity Literacy and Learning to Let Others Be
Q2 Arts and Humanities Pub Date: 2022-03-28 DOI: 10.5920/jcms.1038
Alice C. Eldridge
Theories across the sciences and humanities posit a central role for musicking in the evolution of the social, biological and technical patterns that underpin modern humanity. In this talk I suggest that contemporary computer musicking can play a similarly critical role in supporting us through contemporary existential, ecological, technological and social crises, by providing a space for reworking our relationships with each other and the world, including the technologies that we make. Framed by Gregory Bateson’s analysis of the fundamental epistemological error which leads to interrelated existential, social and ecological crises, I will draw upon a range of personal projects to illustrate the value of computer music practices in learning to think better: from cybernetic generative art, through ecosystemic evolutionary art and feedback musicianship, to the need for interactive approaches to algorithm interpretation in machine listening to biodiversity. I will illustrate how computer musicking can help in three ways: firstly, by developing complexity literacy, helping us to better understand the complex systems of the Anthropocene; secondly, by providing a space to explore other modes of relation through learning to let others be; and thirdly, by clarifying the importance of aligning technologies with, and not against, the biosphere. As pre-historic musicking made us human, so contemporary computer musicking can help us learn to think through the challenges we face today and be better humans tomorrow.
Citations: 1
Three applications of analysis-by-synthesis in music science
Q2 Arts and Humanities Pub Date: 2022-03-18 DOI: 10.5920/jcms.1044
J. Sundberg
The article describes how my research has applied the analysis-by-synthesis strategy to (1) the composition of melodies in the style of nursery tunes, (2) music performance and (3) vocal singing. The descriptions are formulated as generative grammars, which consist of a set of ordered, context-dependent rules capable of producing sound examples. These examples readily reveal observable weaknesses in the descriptions, the origins of which can be traced in the rule system and eliminated. The grammar describing the compositional style of the nursery tunes demonstrates the paramount relevance of a hierarchical structure. Principles underlying the transformation from a music score file to a synthesized performance are derived from recommendations by a violinist and music performance coach, and can thus be regarded as a description of his professional skills as a musician and pedagogue. Also in this case the grammar demonstrates the relevance of a hierarchical structure in terms of grouping, and reflects the role of expectation in music listening. The rule system describing singing voice synthesis specifies acoustic characteristics of performance details. The descriptions are complemented by sound examples illustrating the effects of the identified compositional and performance rules in the genres analysed.
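The flavour of an ordered, context-dependent performance rule can be sketched as follows (in the spirit of rule systems like the one described, not Sundberg's actual implementation): phrase-final lengthening applied to nominal score durations. The rule name and the quantity parameter k are illustrative.

```python
# A minimal context-dependent performance rule; names and values illustrative.
from dataclasses import dataclass

@dataclass
class Note:
    pitch: str
    dur_ms: float
    phrase_end: bool = False

def phrase_final_lengthening(notes, k=1.3):
    """Stretch the last note of each phrase by factor k (context-dependent)."""
    return [Note(n.pitch, n.dur_ms * k if n.phrase_end else n.dur_ms, n.phrase_end)
            for n in notes]

score = [Note("C4", 500), Note("D4", 500), Note("E4", 500, phrase_end=True)]
for n in phrase_final_lengthening(score):
    print(n.pitch, round(n.dur_ms))   # E4 is rendered longer than notated
```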
Citations: 0
Extending Deep Rhythm for Tempo and Genre Estimation Using Complex Convolutions, Multitask Learning and Multi-input Network
Q2 Arts and Humanities Pub Date: 2022-03-04 DOI: 10.5920/jcms.887
Hadrien Foroughmand Aarabi, G. Peeters
Tempo and genre are two interleaved aspects of music: genres are often associated with rhythm patterns that are played in specific tempo ranges. In this paper, we focus on the Deep Rhythm system, which is based on a harmonic representation of rhythm used as input to a convolutional neural network. To consider the relationships between frequency bands, we process complex-valued inputs through complex convolutions. We also study the joint estimation of tempo and genre using a multitask learning approach. Finally, we study the addition of a second convolutional input branch applied to a mel-spectrogram input dedicated to timbre. This multi-input approach improves performance for tempo and genre estimation.
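A minimal sketch, assuming PyTorch, of the complex-convolution idea: a complex kernel applied to a complex-valued input, (W_r + iW_i)(x_r + ix_i), realised as two real Conv2d layers. Channel counts and input sizes are illustrative assumptions.

```python
# A minimal complex 2-D convolution built from two real convolutions.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel=3):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel, padding=kernel // 2)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel, padding=kernel // 2)

    def forward(self, x_r, x_i):
        # (a+ib)(c+id) = (ac - bd) + i(ad + bc)
        real = self.conv_r(x_r) - self.conv_i(x_i)
        imag = self.conv_r(x_i) + self.conv_i(x_r)
        return real, imag

x_r, x_i = torch.randn(2, 1, 40, 40), torch.randn(2, 1, 40, 40)
out_r, out_i = ComplexConv2d(1, 8)(x_r, x_i)
print(out_r.shape, out_i.shape)   # torch.Size([2, 8, 40, 40]) twice
```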
Citations: 0
Neural Models for Target-Based Computer-Assisted Musical Orchestration: A Preliminary Study
Q2 Arts and Humanities Pub Date: 2022-02-25 DOI: 10.5920/jcms.890
Luke Dzwonczyk, Carmine-Emanuele Cella, Alejandro Saldarriaga-Fuertes, Hongfu Liu, H. Crayencour
In this paper we perform a preliminary exploration of how neural networks can be used for the task of target-based computer-assisted musical orchestration. We show how it is possible to model this musical problem as a classification task and propose two deep learning models. We show, first, how they perform as classifiers for musical instrument recognition by comparing them with specific baselines. We then show how they perform, both qualitatively and quantitatively, in the task of computer-assisted orchestration by comparing them with state-of-the-art systems. Finally, we highlight the benefits and problems of neural approaches for assisted orchestration and propose possible future steps. This paper is an extended version of the paper "A Study on Neural Models for Target-Based Computer-Assisted Musical Orchestration" published in the proceedings of the 2020 Joint Conference on AI Music Creativity.
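A minimal sketch of framing target-based orchestration as classification, under the assumption that a classifier outputs per-instrument probabilities for a target sound and candidate instrument combinations are ranked against them. The classifier here is a random stub and all names are hypothetical, not the paper's models.

```python
# A minimal "orchestration as classification" ranking sketch; illustrative only.
import itertools
import numpy as np

INSTRUMENTS = ["flute", "oboe", "clarinet", "horn", "violin", "cello"]

def classify(target_features: np.ndarray) -> np.ndarray:
    """Stand-in for a trained net: probability that each instrument sounds."""
    rng = np.random.default_rng(int(target_features.sum()) % 2**32)
    p = rng.random(len(INSTRUMENTS))
    return p / p.sum()

def best_combination(target_features, size=3):
    """Rank candidate instrument combinations by summed classifier probability."""
    probs = classify(target_features)
    combos = itertools.combinations(range(len(INSTRUMENTS)), size)
    return max(combos, key=lambda c: probs[list(c)].sum())

target = np.abs(np.random.default_rng(1).normal(size=128))  # fake spectral features
print([INSTRUMENTS[i] for i in best_combination(target)])
```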
Citations: 0
The Ai Music Generation Challenge 2020: Double Jigs in the Style of O'Neill's ``1001''
Q2 Arts and Humanities Pub Date: 2021-10-22 DOI: 10.5920/jcms.950
Bob L. Sturm, H. Maruri-Aguilar
The Ai Music Generation Challenge 2020 had three objectives: 1) to promote meaningful approaches to evaluating artificial intelligence (Ai) applied to music; 2) to see how music Ai research can benefit from considering traditional music, and how traditional music might benefit from music Ai research; and 3) to facilitate discussions about the ethics of music Ai research applied to traditional music practices. There were six participants and a benchmark in the challenge, each competing to build an artificial system that generates the most plausible double jigs, as judged against the 365 published in O'Neill's ``1001''. The results show that the problem is not ``solved'', but that the evaluation of such systems can be done in meaningful ways. The article ends by reflecting on the challenge and considering the coming 2021 challenge.
Citations: 6