Abstract animation in the form of “visual music” facilitates both discovery and priming of musical motion that synthesises diverse acoustic parameters. In this article, two scenes of AudioVisualizer, an open-source Chrome extension, are applied to the nine musical poems of Robert Schumann’s Forest Scenes, with the goal of establishing a basic framework of expressive cross-modal qualities that, in audiovisual synchrony, become apparent through visual abstraction and the emergence of defined dynamic Gestalts. The animations that form this article’s core exemplify, in a hands-on way, how particular modes of real-time analogue music tracking convert score structure and acoustic information into continuous dynamic images. The interplay between basic principles of information capture and concrete simulation in the processing of music provides one crucial entry point to fundamental questions about how music generates meaning and non-acoustic signification. Additionally, the considerations in this article may motivate the creation of new stimuli in empirical music research as well as stimulate new approaches to the teaching of music.
{"title":"Title Pending 1311","authors":"Gerald Moshammer","doi":"10.5920/jcms.1311","DOIUrl":"https://doi.org/10.5920/jcms.1311","url":null,"abstract":"Abstract animation inthe form of “visual music” facilitates both discovery and priming of musicalmotion that synthesises diverse acoustic parameters. In this article, twoscenes of AudioVisualizer, an open-source Chrome extension, are appliedto the nine musical poems of Robert Schumann’s Forest Scenes, with thegoal to establish a basic framework of expressive cross-modal qualities that inaudiovisual synchrony become apparent through visual abstraction and theemergence of defined dynamic Gestalts. The animations that build thisarticle’s core exemplify hands-on how particular ways of real-time analoguemusic tracking convert score structure and acoustic information into continuousdynamic images. The interplay between basic principles of information captureand concrete simulation in the processing of music provides one crucial entrypoint to fundamental questions as to how music generates meaning andnon-acoustic signification. Additionally, the considerations in this articlemay motivate the creation of new stimuli in empirical music research as well asstimulate new approaches to the teaching of music.","PeriodicalId":52272,"journal":{"name":"Journal of Creative Music Systems","volume":"239 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135369217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An important feature of the music repertoire of the Syrian tradition is the system of classifying melodies into eight tunes, called the 'oktoechos'. In the oktoechos tradition, liturgical hymns are sung in eight modes or eight colours (known as the eight 'niram' in the Indian tradition). In this paper, recurrent neural network (RNN) models are used for oktoechos genre classification with the help of musical texture features (MTF) and i-vectors. The performance of the proposed approaches is evaluated using a newly created corpus of liturgical music in the South Indian language, Malayalam. Long short-term memory (LSTM)-based and gated recurrent unit (GRU)-based experiments report average classification accuracies of 83.76% and 77.77%, respectively, with a significant margin over the i-vector-DNN framework. The experiments demonstrate the potential of RNN models to learn temporal information through MTF in recognizing the eight modes of the oktoechos system. Furthermore, since the Greek liturgy and Gregorian chant also share similar musical traits with the Syrian tradition, the musicological insights observed can potentially be applied to those traditions. Generation of music in the oktoechos style is also discussed using an encoder-decoder framework. The quality of the generated files is evaluated using a perception test.
{"title":"Oktoechos Classification and Generation of Liturgical Music using Deep Learning Frameworks","authors":"R. Rajan, Varsha Shiburaj, Amlu Anna Joshy","doi":"10.5920/jcms.1014","DOIUrl":"https://doi.org/10.5920/jcms.1014","url":null,"abstract":"An important feature of the music repertoire of the Syrian tradition is the system of classifying melodies into eight tunes, called ’oktoe={c}hos’. In oktoe={c}hos tradition, liturgical hymns are sung in eight modes or eight colours (known as eight ’niram’ in Indian tradition). In this paper, recurrent neural network (RNN) models are used for oktoe={c}hos genre classification with the help of musical texture features (MTF) and i-vectors.The performance of the proposed approaches is evaluated using a newly created corpus of liturgical music in the South Indian language, Malayalam. Long short-term memory (LSTM)-based and gated recurrent unit(GRU)-based experiments report the average classification accuracy of 83.76% and 77.77%, respectively, with a significant margin over the i-vector-DNN framework. The experiments demonstrate the potential of RNN models in learning temporal information through MTF in recognizing eight modes of oktoe={c}hos system. Furthermore, since the Greek liturgy and Gregorian chant also share similar musical traits with Syrian tradition, the musicological insights observed can potentially be applied to those traditions. Generation of oktoe={c}hos genre music style has also been discussed using an encoder-decoder framework. The quality of the generated files is evaluated using a perception test.","PeriodicalId":52272,"journal":{"name":"Journal of Creative Music Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46704140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The International Conference on AI Music Creativity (AIMC, https://aimusiccreativity.org/) is the merger of the international workshop on Musical Metacreation (MUME, https://musicalmetacreation.org/) and the conference series on Computer Simulation of Musical Creativity (CSMC, https://csmc2018.wordpress.com/). This special issue gathers selected papers from the first edition of the conference, along with paper versions of two of its keynotes. It contains six papers that apply novel approaches to the generation and classification of music. Covering several generative musical tasks, such as composition, rhythm generation and orchestration, as well as the machine-listening tasks of tempo and genre recognition, these selected papers present state-of-the-art techniques in music AI. The issue opens with an ode to computer musicking by keynote speaker Alice Eldridge, and Johan Sundberg's account of his use of analysis-by-synthesis for musical applications.
{"title":"Editorial: JCMS Special Issue of the first Conference on AI Music Creativity","authors":"Cale Plut, Philippe Pasquier, Anna Jordanous","doi":"10.5920/jcms.1246","DOIUrl":"https://doi.org/10.5920/jcms.1246","url":null,"abstract":"The International conference on AI Music Creativity (AIMC, https://aimusiccreativity.org/) is the merger of the international workshop on Musical Metacreation MUME (https://musicalmetacreation.org/) and the conference series on Computer Simulation of Music Creativity (CSMC, https://csmc2018.wordpress.com/). This special issue gathers selected papers from the first edition of the conference along with paper versions of two of its keynotes.This special issue contains six papers that apply novel approaches to the generation and classification of music. Covering several generative musical tasks such as composition, rhythm generation, orchestration, as well as some machine listening task of tempo and genre recognition, these selected papers present state of the art techniques in Music AI. The issue opens up with an ode on computer Musicking, by keynote speaker Alice Eldridge, and Johan Sundberg's use of analysis-by-synthesis for musical applications.","PeriodicalId":52272,"journal":{"name":"Journal of Creative Music Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44781547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gabriel Vigliensoni, Louis McCallum, Esteban Maestre, R. Fiebrink
In this article, we present research on customizing a variational autoencoder (VAE) neural network to learn models of musical rhythms encoded within a latent space and to play with them. The system uses a data structure capable of encoding rhythms in simple and compound meter and can learn models from little training data. To facilitate the exploration of models, we implemented a visualizer that relies on the dynamic nature of the pulsing rhythmic patterns. To test our system in real-life musical practice, we collected small-scale datasets of contemporary music genre rhythms and trained models with them. We found that the non-linearities of the learned latent spaces, coupled with tactile interfaces for interacting with the models, were very expressive and led to unexpected places in composition and live-performance settings. A music album was recorded and premiered at a major music festival, with the VAE latent space played on stage.
{"title":"Contemporary music genre rhythm generation with machine learning","authors":"Gabriel Vigliensoni, Louis McCallum, Esteban Maestre, R. Fiebrink","doi":"10.5920/jcms.902","DOIUrl":"https://doi.org/10.5920/jcms.902","url":null,"abstract":"In this article, we present research on customizing a variational autoencoder (VAE) neural network to learn models and play with musical rhythms encoded within a latent space. The system uses a data structure that is capable of encoding rhythms in simple and compound meter and can learn models from little training data. To facilitate the exploration of models, we implemented a visualizer that relies on the dynamic nature of the pulsing rhythmic patterns. To test our system in real-life musical practice, we collected small-scale datasets of contemporary music genre rhythms and trained models with them. We found that the non-linearities of the learned latent spaces coupled with tactile interfaces to interact with the models were very expressive and lead to unexpected places in composition and live performance musical settings. A music album was recorded and it was premiered at a major music festival using the VAE latent space on stage.","PeriodicalId":52272,"journal":{"name":"Journal of Creative Music Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41647964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generative musical models often comprise multiple levels of structure, presuming that the process of composition moves between background and foreground, or between generating the musical surface and some deeper, reduced representation that governs hidden or latent dimensions of music. In this paper we use a recently proposed framework called Deep Musical Information Dynamics (DMID) to explore the information content of deep neural models of music through rate reduction of latent representation streams, contrasted with the high-rate information dynamics of the musical surface. This approach is partially motivated by rate-distortion theories of human cognition, providing a framework for exploring possible relations between imaginary anticipations existing in the listener's or composer's mind and the information dynamics of the sensory (acoustic) or symbolic score data. In the paper the DMID framework is demonstrated using several experiments with symbolic (MIDI) and acoustic (spectral) music representations. We use variational encoding to learn a latent representation of the musical surface. This embedding is further reduced using a bit-allocation method into a second stream of low-bit-rate encoding. The combined loss includes temporal information, in terms of the predictive properties of each encoding stream, and an accuracy loss measured as the mutual information between the low-rate encoding and the high-rate surface representations. For the case of counterpoint, we also study the mutual information between two voices in a musical piece at different levels of information reduction. The DMID framework allows aspects of computational creativity to be explored in terms of the juxtaposition of latent/imaginary surprisal aspects of deeper structure with musical surprisal at the surface level, in a manner that is quantifiable and computationally tractable. The relevant information-theoretic modelling and analysis methods are discussed in the paper, suggesting that a trade-off between compression and prediction plays an important role in the analysis and design of creative musical systems.
{"title":"Deep Music Information Dynamics Novel Framework for Reduced Neural-Network Music Representation with Applications to Midi and Audio Analysis and Improvisation","authors":"S. Dubnov, K. Chen, Kevin Huang","doi":"10.5920/jcms.894","DOIUrl":"https://doi.org/10.5920/jcms.894","url":null,"abstract":"Generative musical models often comprise of multiple levels of structure, presuming that the process of composition moves between background to foreground, or between generating musical surface and some deeper and reduced representation that governs hidden or latent dimensions of music. In this paper we are using a recently proposed framework called Deep Musical Information Dynamics (DMID) to explore information contents of deep neural models of music through rate reduction of latent representation streams, which is contrasted with hight rate information dynamics of the musical surface. This approach is partially motivated by rate-distortion theories of human cognition, providing a framework for exploring possible relations between imaginary anticipations existing in the listener's or composer's mind, and the information dynamics of the sensory (acoustic) or symbolic score data. In the paper the DMID framework is demonstrated using several experiments with symbolic (MIDI) and acoustic (spectral) music representations. We use variational encoding to learn a latent representation of the musical surface. This embedding is further reduced using a bit-allocation method into a second stream of low bit-rate encoding. The combined loss includes temporal information in terms of predictive properties for each encoding stream, and accuracy loss measured in terms of mutual information between the encoding at low rate and the high rate surface representations. For the case of counterpoint, we also study the mutual information between two voices in a musical piece at different levels of information reduction.The DMID framework allows to explore aspects of computational creativity in terms of juxtaposition of latent/imaginary surprisal aspects of deeper structure with music surprisal on the surface level, done in a manner that is quantifiable and computationally tractable. The relevant information theory modeling and analysis methods are discussed in the paper, suggesting that a trade off between compression and prediction play an important factor in the analysis and design of creative musical systems.","PeriodicalId":52272,"journal":{"name":"Journal of Creative Music Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44291148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Theories across sciences and humanities posit a central role for musicking in the evolution of the social, biological and technical patterns that underpin modern humanity. In this talk I suggest that contemporary computer musicking can play a similarly critical role in supporting us through contemporary existential, ecological, technological and social crises, by providing a space for reworking our relationships with each other and the world, including the technologies that we make. Framed by Gregory Bateson’s analysis of the fundamental epistemological error which leads to interrelated existential, social and ecological crises, I will draw upon a range of personal projects to illustrate the value of computer music practices in learning to think better: from cybernetic generative art, through ecosystemic evolutionary art and feedback musicianship, to the need for interactive approaches to algorithm interpretation in machine listening to biodiversity. I will illustrate how computer musicking can help in three ways: firstly by developing complexity literacy, helping us to better understand the complex systems of the Anthropocene; secondly by providing a space to explore other modes of relation through learning to let others be; and thirdly by clarifying the importance of aligning technologies with, and not against, the biosphere. As pre-historic musicking made us human, so contemporary computer musicking can help us learn to think through the challenges we face today and be better humans tomorrow.
{"title":"Computer Musicking as Onto-Epistemic Playground On the Joy of Developing Complexity Literacy and Learning to Let Others Be","authors":"Alice C. Eldridge","doi":"10.5920/jcms.1038","DOIUrl":"https://doi.org/10.5920/jcms.1038","url":null,"abstract":"Theories across sciences and humanities posit a central role for musicking in the evolution of the social, biological and technical pat- terns that underpin modern humanity. In this talk I suggest that contemporary computer musicking can play a similarly critical role in supporting us through contemporary existential, ecological, technological and social crises, by providing a space for reworking our relationships with each other and the world, including the technologies that we make. Framed by Gregory Bateson’s analysis of the fundamental epistemological error which leads to interrelated existential, social and ecological crises, I will draw upon a range of personal projects to illustrate the value of computer music practices in learning to think better: from cybernetic generative art, through ecosystemic evolutionary art and feedback musicianship to the need for interactive approaches to algorithm interpretation in ma- chine listening to biodiversity. I will illustrate how computer musicking can help in three ways: firstly by developing complexity literacy, helping us to better understand the complex systems of the anthropocene; secondly by providing a space to explore other modes of relation through learning to let others be; and thirdly to clarify the importance of aligning technologies with and not against, the biosphere. As pre-historic musicking made us human, so contemporary computer musicking can help us learn to think through the challenges we face today and be better humans tomorrow.","PeriodicalId":52272,"journal":{"name":"Journal of Creative Music Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49536916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The article describes how my research has applied the analysis-by-synthesis strategy to (1) the composition of melodies in the style of nursery tunes, (2) music performance and (3) vocal singing. The descriptions are formulated as generative grammars, which consist of a set of ordered, context-dependent rules capable of producing sound examples. These examples readily reveal observable weaknesses in the descriptions, the origins of which can be traced to the rule system and eliminated. The grammar describing the compositional style of the nursery tunes demonstrates the paramount relevance of a hierarchical structure. Principles underlying the transformation from a music score file to a synthesized performance are derived from recommendations by a violinist and music performance coach, and can thus be regarded as a description of his professional skills as musician and pedagogue. In this case, too, the grammar demonstrates the relevance of a hierarchical structure in terms of grouping, and reflects the role of expectation in music listening. The rule system describing singing voice synthesis specifies acoustic characteristics of performance details. The descriptions are complemented by sound examples illustrating the effects of identified compositional and performance rules in the genres analysed.
{"title":"Three applications of analysis-by-synthesis in music science","authors":"J. Sundberg","doi":"10.5920/jcms.1044","DOIUrl":"https://doi.org/10.5920/jcms.1044","url":null,"abstract":"The article describes how my research has applied the analysis-by-synthesis strategy to (1) the composition of melodies in the style of nursery tunes, (2) music performance and (3) vocal singing. The descriptions are formulated as generative grammars, which consist of a set of ordered, context-dependent rules capable of producing sound examples. These examples readily reveal observable weaknesses in the descriptions, the origins of which can be traced in the rule system and eliminated. The grammar describing the compositional style of the nursery tunes demonstrates the paramount relevance of a hierarchical structure. Principles underlying the transformation from a music score file to a synthesized performance are derived from recommendations by a violinist and music performance coach, and can thus be regarded as a description of his professional skills as musician and pedagogue. Also in this case the grammar demonstrates the relevance of a hierarchical structure in terms of grouping, and reflects the role of expectation in music listening. The rule system describing singing voice synthesis specifies acoustic characteristics of performance details. The descriptions are complemented by sound examples illustrating the effects of identified compositional and performance rules in the genres analysed.","PeriodicalId":52272,"journal":{"name":"Journal of Creative Music Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42957988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tempo and genre are two interleaved aspects of music: genres are often associated with rhythm patterns that are played in specific tempo ranges. In this paper, we focus on the Deep Rhythm system, which is based on a harmonic representation of rhythm used as the input to a convolutional neural network. To consider the relationships between frequency bands, we process complex-valued inputs through complex convolutions. We also study the joint estimation of tempo and genre using a multitask learning approach. Finally, we study the addition of a second convolutional input branch, applied to a mel-spectrogram input, dedicated to timbre. This multi-input approach improves performance for both tempo and genre estimation.
{"title":"Extending Deep Rhythm for Tempo and Genre Estimation Using Complex Convolutions, Multitask Learning and Multi-input Network","authors":"Hadrien Foroughmand Aarabi, G. Peeters","doi":"10.5920/jcms.887","DOIUrl":"https://doi.org/10.5920/jcms.887","url":null,"abstract":"Tempo and genre are two inter-leaved aspects of music, genres are often associated to rhythm patterns which are played in specific tempo ranges.In this paper, we focus on the Deep Rhythm system based on a harmonic representation of rhythm used as an input to a convolutional neural network.To consider the relationships between frequency bands, we process complex-valued inputs through complex-convolutions.We also study the joint estimation of tempo/genre using a multitask learning approach. Finally, we study the addition of a second input convolutional branch to the system applied to a mel-spectrogram input dedicated to the timbre.This multi-input approach allows to improve the performances for tempo and genre estimation.","PeriodicalId":52272,"journal":{"name":"Journal of Creative Music Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43671160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Luke Dzwonczyk, Carmine-Emanuele Cella, Alejandro Saldarriaga-Fuertes, Hongfu Liu, H. Crayencour
In this paper we perform a preliminary exploration of how neural networks can be used for the task of target-based computer-assisted musical orchestration. We show how it is possible to model this musical problem as a classification task and propose two deep learning models. We first show how they perform as classifiers for musical instrument recognition by comparing them with specific baselines. We then show how they perform, both qualitatively and quantitatively, in the task of computer-assisted orchestration by comparing them with state-of-the-art systems. Finally, we highlight the benefits and problems of neural approaches to assisted orchestration and propose possible future steps. This paper is an extended version of the paper "A Study on Neural Models for Target-Based Computer-Assisted Musical Orchestration", published in the proceedings of the 2020 Joint Conference on AI Music Creativity.
{"title":"Neural Models for Target-Based Computer-Assisted Musical Orchestration: A Preliminary Study","authors":"Luke Dzwonczyk, Carmine-Emanuele Cella, Alejandro Saldarriaga-Fuertes, Hongfu Liu, H. Crayencour","doi":"10.5920/jcms.890","DOIUrl":"https://doi.org/10.5920/jcms.890","url":null,"abstract":"In this paper we will perform a preliminary exploration on how neural networks can be used for the task of target-based computer-assisted musical orchestration. We will show how it is possible to model this musical problem as a classification task and we will propose two deep learning models. We will show, first, how they perform as classifiers for musical instrument recognition by comparing them with specific baselines. We will then show how they perform, both qualitatively and quantitatively, in the task of computer-assisted orchestration by comparing them with state-of-the-art systems. Finally, we will highlight benefits and problems of neural approaches for assisted orchestration and we will propose possible future steps. This paper is an extended version of the paper \"A Study on Neural Models for Target-Based Computer-Assisted Musical Orchestration\" published in the proceedings of The 2020 Joint Conference on AI Music Creativity. ","PeriodicalId":52272,"journal":{"name":"Journal of Creative Music Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42428300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Ai Music Generation Challenge 2020 had three objectives: 1) to promote meaningful approaches to evaluating artificial intelligence (Ai) applied to music; 2) to see how music Ai research can benefit from considering traditional music, and how traditional music might benefit from music Ai research; and 3) to facilitate discussions about the ethics of music Ai research applied to traditional music practices. There were six participants and a benchmark in the challenge, each competing to build an artificial system that generates the most plausible double jigs, as judged against the 365 published in O'Neill's "1001". The outcomes suggest that this problem is far from "solved", but that the evaluation of such systems can be done in meaningful ways. The article ends by reflecting on the challenge and considering the coming 2021 challenge.
{"title":"The Ai Music Generation Challenge 2020: Double Jigs in the Style of O'Neill's ``1001''","authors":"Bob L. Sturm, H. Maruri-Aguilar","doi":"10.5920/jcms.950","DOIUrl":"https://doi.org/10.5920/jcms.950","url":null,"abstract":"The Ai Music Generation Challenge 2020 had three objectives: 1) to promote meaningful approaches to evaluating artificial intelligence (Ai) applied to music;2) to see how music Ai research can benefit from considering traditional music, and how traditional music might benefit from music Ai research; and 3)to facilitate discussions about the ethics of music Ai research applied to traditional music practices.There were six participants and a benchmark in the challenge, each competing to build an artificial system that generates the most plausible double jigs, as judged against the 365 published in solved'', but that the evaluation of such systems can be done in meaningful ways.The article ends by reflecting on the challenge and considering the coming 2021 challenge.","PeriodicalId":52272,"journal":{"name":"Journal of Creative Music Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46892860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}