{"title":"野外比较评估:音乐表现力渲染系统","authors":"Kyle Worrall;Zongyu Yin;Tom Collins","doi":"10.1109/TAI.2024.3408717","DOIUrl":null,"url":null,"abstract":"There have been many attempts to model the ability of human musicians to take a score and perform or render it expressively, by adding tempo, timing, loudness, and articulation changes to nonexpressive music data. While expressive rendering models exist in academic research, most of these are not open source or accessible, meaning they are difficult to evaluate empirically and have not been widely adopted in professional music software. Systematic comparative evaluation of such algorithms stopped after the last performance rendering contest (RENCON) in 2013, making it difficult to compare newer models to existing work in a fair and valid way. In this article, we introduce the first transformer-based model for expressive rendering, cue-free express + pedal (CFE + P), which predicts expressive attributes such as notewise dynamics and micro-timing adjustments, and beatwise tempo and sustain pedal use based only on the start and end times and pitches of notes (e.g., inexpressive musical instrument digital interface (MIDI) input). We perform two comparative evaluations on our model against a nonmachine learning baseline taken from professional music software and two open-source algorithms—a feedforward neural network (FFNN) and hierarchical recurrent neural network (HRNN). The results of two listening studies indicate that our model renders passages that outperform what can be done in professional music software such as Logic Pro and Ableton Live.\n<xref><sup>1</sup></xref>\n<fn><label><sup>1</sup></label><p>All data and preexisting hypotheses can be accessed via the Open Science Foundation: <uri>https://osf.io/6uwjk/</uri>.</p></fn>","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 10","pages":"5290-5303"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparative Evaluation in the Wild: Systems for the Expressive Rendering of Music\",\"authors\":\"Kyle Worrall;Zongyu Yin;Tom Collins\",\"doi\":\"10.1109/TAI.2024.3408717\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There have been many attempts to model the ability of human musicians to take a score and perform or render it expressively, by adding tempo, timing, loudness, and articulation changes to nonexpressive music data. While expressive rendering models exist in academic research, most of these are not open source or accessible, meaning they are difficult to evaluate empirically and have not been widely adopted in professional music software. Systematic comparative evaluation of such algorithms stopped after the last performance rendering contest (RENCON) in 2013, making it difficult to compare newer models to existing work in a fair and valid way. In this article, we introduce the first transformer-based model for expressive rendering, cue-free express + pedal (CFE + P), which predicts expressive attributes such as notewise dynamics and micro-timing adjustments, and beatwise tempo and sustain pedal use based only on the start and end times and pitches of notes (e.g., inexpressive musical instrument digital interface (MIDI) input). 
We perform two comparative evaluations on our model against a nonmachine learning baseline taken from professional music software and two open-source algorithms—a feedforward neural network (FFNN) and hierarchical recurrent neural network (HRNN). The results of two listening studies indicate that our model renders passages that outperform what can be done in professional music software such as Logic Pro and Ableton Live.\\n<xref><sup>1</sup></xref>\\n<fn><label><sup>1</sup></label><p>All data and preexisting hypotheses can be accessed via the Open Science Foundation: <uri>https://osf.io/6uwjk/</uri>.</p></fn>\",\"PeriodicalId\":73305,\"journal\":{\"name\":\"IEEE transactions on artificial intelligence\",\"volume\":\"5 10\",\"pages\":\"5290-5303\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on artificial intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10547570/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10547570/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Comparative Evaluation in the Wild: Systems for the Expressive Rendering of Music
There have been many attempts to model the ability of human musicians to take a score and perform or render it expressively, by adding tempo, timing, loudness, and articulation changes to nonexpressive music data. While expressive rendering models exist in academic research, most are not open source or otherwise accessible, meaning they are difficult to evaluate empirically and have not been widely adopted in professional music software. Systematic comparative evaluation of such algorithms stopped after the last performance rendering contest (RENCON) in 2013, making it difficult to compare newer models to existing work in a fair and valid way. In this article, we introduce the first transformer-based model for expressive rendering, cue-free express + pedal (CFE + P), which predicts expressive attributes such as notewise dynamics and micro-timing adjustments, and beatwise tempo and sustain-pedal use, based only on the start times, end times, and pitches of notes (e.g., inexpressive musical instrument digital interface (MIDI) input). We perform two comparative evaluations of our model against a non-machine-learning baseline taken from professional music software and two open-source algorithms: a feedforward neural network (FFNN) and a hierarchical recurrent neural network (HRNN). The results of two listening studies indicate that our model renders passages that outperform what can be done in professional music software such as Logic Pro and Ableton Live.¹
¹ All data and preexisting hypotheses can be accessed via the Open Science Foundation: https://osf.io/6uwjk/.
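The abstract describes the model's interface only at a high level: inexpressive MIDI (note start times, end times, and pitches) goes in, and notewise dynamics and micro-timing plus beatwise tempo and sustain-pedal use come out. The sketch below is a minimal, plain-Python illustration of what that input/output schema might look like; all class names, fields, and default values are our own assumptions and are not taken from the paper or its released data.

```python
# Illustrative sketch of the input/output interface described in the abstract.
# This is NOT the authors' CFE + P implementation; the learned transformer is
# replaced by a placeholder that returns a flat, deadpan performance.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class InexpressiveNote:
    """Score-like MIDI note with no performance information."""
    start: float  # quantized onset (e.g., in beats or seconds)
    end: float    # quantized offset
    pitch: int    # MIDI pitch number, 0-127


@dataclass
class NotePrediction:
    """Notewise expressive attributes a renderer might output."""
    velocity: int          # predicted MIDI velocity (dynamics), 1-127
    onset_shift: float     # micro-timing adjustment relative to the score onset
    duration_scale: float  # articulation: stretch/shrink of the written duration


@dataclass
class BeatPrediction:
    """Beatwise expressive attributes a renderer might output."""
    tempo_bpm: float  # local tempo for this beat
    pedal_down: bool  # whether the sustain pedal is depressed


def render_expressively(
    notes: List[InexpressiveNote],
) -> Tuple[List[NotePrediction], List[BeatPrediction]]:
    """Placeholder for a learned model mapping inexpressive notes to
    expressive attributes; returns neutral values so the interface runs."""
    note_preds = [
        NotePrediction(velocity=64, onset_shift=0.0, duration_scale=1.0)
        for _ in notes
    ]
    n_beats = int(max((n.end for n in notes), default=0.0)) + 1
    beat_preds = [
        BeatPrediction(tempo_bpm=120.0, pedal_down=False) for _ in range(n_beats)
    ]
    return note_preds, beat_preds


if __name__ == "__main__":
    score = [InexpressiveNote(0.0, 1.0, 60), InexpressiveNote(1.0, 2.0, 64)]
    print(render_expressively(score))
```

In an actual system, `render_expressively` would be replaced by the trained model, and the predicted attributes would be written back into a MIDI file (velocities, shifted onsets, tempo changes, and CC64 pedal events) for playback in a DAW such as Logic Pro or Ableton Live.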