
Journal of New Music Research — Latest Publications

From acceleration to rhythmicity: Smartphone-assessed movement predicts properties of music
IF 1.1 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2020-01-30 DOI: 10.1080/09298215.2020.1715447
M. Irrgang, J. Steffens, Hauke Egermann
ABSTRACT Querying music is still a disembodied process in Music Information Retrieval. Thus, the goal of the presented study was to explore how free and spontaneous movement captured by smartphone accelerometer data can be related to musical properties. Motion features related to tempo, smoothness, size, and regularity were extracted and shown to predict the musical qualities ‘rhythmicity’ (R² = .45), ‘pitch level + range’ (R² = .06) and ‘complexity’ (R² = .15). We conclude that (rhythmic) music properties can be predicted from movement, and that an embodied approach to MIR is feasible.
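The regression result reported above (motion features predicting ‘rhythmicity’ with R² = .45) can be illustrated with a minimal least-squares fit and the coefficient of determination. This is a sketch only: the feature values and ratings below are invented for illustration and are not the study's data or pipeline.

```python
# Minimal sketch: fit one hypothetical motion feature to a hypothetical
# musical rating and report R^2, in the spirit of the study's regression models.

def linear_fit_r2(x, y):
    """Ordinary least-squares fit of y on x; returns the R^2 of the fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

tempo_feature = [0.2, 0.5, 0.9, 1.4, 1.8]   # hypothetical motion-tempo values
rhythmicity   = [1.0, 1.9, 3.2, 4.1, 4.8]   # hypothetical listener ratings
r2 = linear_fit_r2(tempo_feature, rhythmicity)
```

With these near-linear toy values the fit explains most of the variance; the study's multi-feature models report the R² values quoted in the abstract.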
Citations: 1
Explaining harmonic inter-annotator disagreement using Hugo Riemann's theory of ‘harmonic function’
IF 1.1 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2020-01-29 DOI: 10.1080/09298215.2020.1716811
Anna Selway, Hendrik Vincent Koops, A. Volk, D. Bretherton, Nicholas Gibbins, R. Polfreman
ABSTRACT Harmonic transcriptions by ear rely heavily on subjective perceptions, which can lead to disagreement between annotators. The current computational metrics employed to measure annotator disagreement are useful for determining similarity on a pitch-class level, but are agnostic to the functional properties of chords. In contrast, music theories like Hugo Riemann's theory of ‘harmonic function’ acknowledge the similarity between chords currently unrecognised by computational metrics. This paper utilises Riemann's theory to explain the harmonic annotator disagreements in the Chordify Annotator Subjectivity Dataset. This theory allows us to explain 82% of the dataset, compared to the 66% explained using pitch-class based methods alone. This new interdisciplinary application of Riemann's theory increases our understanding of harmonic disagreement and introduces a method for improving harmonic evaluation metrics that takes into account the function of a chord in relation to a tonal centre.
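The gap between pitch-class metrics and functional equivalence can be made concrete: C major and A minor share only two pitch classes, yet both can carry tonic function in C major. The toy comparison below uses a hand-coded function map for illustration; it is not the paper's metric or dataset.

```python
# Toy contrast between a pitch-class similarity metric and a
# (hand-coded, illustrative) Riemannian function assignment in C major.

TRIADS = {
    'C':  {0, 4, 7},   # C major
    'Am': {9, 0, 4},   # A minor
    'G':  {7, 11, 2},  # G major
}

# Hypothetical function labels (T = tonic, D = dominant) in C major.
FUNCTION = {'C': 'T', 'Am': 'T', 'G': 'D'}

def pitch_class_overlap(a, b):
    """Jaccard similarity of two chords' pitch-class sets."""
    return len(TRIADS[a] & TRIADS[b]) / len(TRIADS[a] | TRIADS[b])

def same_function(a, b):
    return FUNCTION[a] == FUNCTION[b]

# C and Am share only 2 of 4 pitch classes, yet both carry tonic function:
overlap = pitch_class_overlap('C', 'Am')   # 0.5
agree = same_function('C', 'Am')           # True
```

A pitch-class metric scores the pair at 0.5, while a function-aware view treats the annotations as agreeing — the kind of disagreement the paper's approach explains.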
Citations: 5
A comparative study of verbal descriptions of emotions induced by music between adults with and without visual impairments
IF 1.1 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2020-01-27 DOI: 10.1080/09298215.2020.1717544
H. Park, S. Lee, H. Chong
ABSTRACT This study aimed to investigate the differences in verbal descriptions of emotions induced by music between adults who are visually impaired (VI) and adults who have normal vision (NV). Thirty participants (15 VI, 15 NV) listened to music excerpts and were interviewed. A content analysis and a syntactic analysis were performed. In the VI group, contextual verbalism was observed more often than media or educational verbalism, and a high proportion of affective words, expressions and descriptions via senses other than vision was found. The VI group more frequently employed situational descriptions, while the NV group more often described episodic memories.
Citations: 1
Creative autonomy in a simple interactive music system
IF 1.1 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2020-01-21 DOI: 10.1080/09298215.2019.1709510
Fabio Paolizzo, Colin G. Johnson
ABSTRACT Can autonomous systems be musically creative without musical knowledge? Assumptions from interdisciplinary studies on self-reflection are evaluated using Video Interactive VST Orchestra, a system that generates music from audio and video inputs through an analysis of video motion and simultaneous sound processing. The system is able to generate material that is primary, novel and contextual. A case study provides evidence that these three simple features allow the system to identify musical salience in the material that it is generating, and for the system to act as an autonomous musical agent.
Citations: 3
The influence of the vocal tract on the attack transients in clarinet playing.
IF 1.1 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2020-01-20 eCollection Date: 2020-01-01 DOI: 10.1080/09298215.2019.1708412
Montserrat Pàmies-Vilà, Alex Hofmann, Vasileios Chatziioannou

When playing single-reed woodwind instruments, players can modulate the spectral content of the airflow in their vocal tract, upstream of the vibrating reed. In an empirical study with professional clarinettists (N_p = 11), blowing pressure and mouthpiece pressure were measured during the performance of Clarinet Concerto excerpts. By comparing mouth-pressure and mouthpiece-pressure signals in the time domain, a method to detect instances of vocal tract adjustments was established. Results showed that players tuned their vocal tract in both the clarion and altissimo registers. Furthermore, the analysis revealed that vocal tract adjustments support shorter attack transients and help to avoid lower bore resonances.
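The detection idea — comparing mouth- and mouthpiece-pressure signals in the time domain — might be sketched as a simple variation-ratio test: flag windows where the mouth-pressure signal varies nearly as much as the mouthpiece signal, suggesting an upstream adjustment. This is an assumed toy detector with invented signals, not the authors' algorithm.

```python
# Toy detector (assumption, not the paper's method): compare RMS variation
# of mouth pressure against mouthpiece pressure in a time window.

def rms_variation(window):
    """RMS deviation from the window mean."""
    m = sum(window) / len(window)
    return (sum((x - m) ** 2 for x in window) / len(window)) ** 0.5

def tract_adjusted(mouth, mouthpiece, threshold=0.5):
    """True if mouth-pressure variation exceeds a fraction of the mouthpiece's."""
    return rms_variation(mouth) > threshold * rms_variation(mouthpiece)

# Invented windows: a nearly constant mouth pressure vs. one that oscillates
# along with the mouthpiece signal.
calm  = tract_adjusted([1.0, 1.01, 0.99, 1.0], [0.0, 2.0, 0.0, -2.0])   # False
tuned = tract_adjusted([0.0, 1.5, 0.0, -1.5], [0.0, 2.0, 0.0, -2.0])    # True
```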

Citations: 4
Dance to your own drum: Identification of musical genre and individual dancer from motion capture using machine learning
IF 1.1 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2020-01-13 DOI: 10.1080/09298215.2020.1711778
Emily Carlson, Pasi Saari, Birgitta Burger, P. Toiviainen
ABSTRACT Machine learning has been used to accurately classify musical genre using features derived from audio signals. Musical genre, as well as lower-level audio features of music, has also been shown to influence music-induced movement; however, the degree to which such movements are genre-specific has not been explored. The current paper addresses this using motion capture data from participants dancing freely to eight genres. Using a Support Vector Machine model, data were classified by genre and by individual dancer. Against expectations, individual classification was notably more accurate than genre classification. Results are discussed in terms of embodied cognition and culture.
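The classification setup — the same motion features labelled either by genre or by dancer — can be illustrated with a toy classifier. The sketch below uses 1-nearest-neighbour over invented (speed, smoothness) features rather than the paper's SVM pipeline or motion-capture data.

```python
# Toy illustration of the classification setup (not the paper's SVM pipeline):
# 1-nearest-neighbour over invented motion feature vectors labelled by dancer.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_1nn(train, query):
    """train: list of (feature_vector, label); returns label of nearest vector."""
    return min(train, key=lambda item: euclidean(item[0], query))[1]

# Hypothetical (speed, smoothness) features for two dancers.
train = [
    ((0.2, 0.9), 'dancer_A'),
    ((0.3, 0.8), 'dancer_A'),
    ((0.9, 0.2), 'dancer_B'),
    ((0.8, 0.3), 'dancer_B'),
]
label = predict_1nn(train, (0.25, 0.85))  # nearest to dancer_A's examples
```

If individual movement signatures are more consistent than genre-driven ones — as the paper found — labels like `dancer_A` separate more cleanly in feature space than genre labels would.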
Citations: 23
Automatic melody harmonization with triad chords: A comparative study
IF 1.1 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2020-01-08 DOI: 10.1080/09298215.2021.1873392
Yin-Cheng Yeh, Wen-Yi Hsiao, Satoru Fukayama, Tetsuro Kitahara, Benjamin Genchel, Hao-Min Liu, Hao-Wen Dong, Yian Chen, T. Leong, Yi-Hsuan Yang
The task of automatic melody harmonization aims to build a model that generates a chord sequence as the harmonic accompaniment of a given multiple-bar melody sequence. In this paper, we present a comparative study evaluating the performance of canonical approaches to this task, including template matching, hidden Markov model, genetic algorithm and deep learning. The evaluation is conducted on a dataset of 9226 melody/chord pairs, considering 48 different triad chords. We report the result of an objective evaluation using six different metrics and a subjective study with 202 participants, showing that a deep learning method performs the best.
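Of the approaches compared, template matching is the simplest to sketch: for each bar, pick the triad whose pitch classes best cover the melody notes. The chord vocabulary and melody below are toy examples, not the paper's 48-triad vocabulary or dataset.

```python
# Sketch of a template-matching harmonizer (the simplest baseline compared):
# for each bar, choose the triad covering the most melody pitch classes.

TRIADS = {
    'C': {0, 4, 7}, 'F': {5, 9, 0}, 'G': {7, 11, 2}, 'Am': {9, 0, 4},
}

def harmonize_bar(melody_pitch_classes):
    """Return the triad name whose pitch classes cover most melody notes."""
    return max(TRIADS, key=lambda name: len(TRIADS[name] & set(melody_pitch_classes)))

# Toy melody: three bars given as pitch classes (C = 0, ..., B = 11).
bars = [[0, 4, 7, 4], [5, 9, 5, 0], [7, 2, 11, 7]]
progression = [harmonize_bar(bar) for bar in bars]  # ['C', 'F', 'G']
```

The stronger models in the comparison (HMMs, genetic algorithms, deep networks) additionally condition each chord on its neighbours, which this bar-by-bar template matcher cannot do.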
Citations: 37
Audio-first VR: New perspectives on musical experiences in virtual environments
IF 1.1 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2020-01-01 DOI: 10.1080/09298215.2019.1707234
Anil Çamci, R. Hamilton
ABSTRACT This special issue of the Journal of New Music Research explores VR (Virtual Reality) through the lenses of music, art and technology, each focusing on foregrounded sonic expression – an audio-first VR, wherein sound is treated not only as an integral part of immersive virtual experiences but also as a critical point of departure for creative and technological work in this domain. In this article, we identify emerging challenges and opportunities in audio-first VR, and pose questions pertaining to both theoretical and practical aspects of this concept. We then discuss how each contribution to our special issue addresses these questions through research and artistic projects, giving us a glimpse into the future of audio in VR.
Citations: 12
Concordia: A musical XR instrument for playing the solar system
IF 1.1 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2020-01-01 DOI: 10.1080/09298215.2020.1714666
K. Snook, T. Barri, Monica Bolles, Petter Ericson, Carl Fravel, J. Goßmann, Susan E. Green-Mateu, Andrew Luck, M. Schedel, Robert Thomas
ABSTRACT Kepler Concordia, a new scientific and musical instrument enabling players to explore the solar system and other data within immersive extended-reality (XR) platforms, is being designed by a diverse team of musicians, artists, scientists and engineers using audio-first principles. The core instrument modules will be launched in 2019 for the 400th anniversary of Johannes Kepler's Harmonies of the World, in which he laid out a framework for the harmony of geometric form as well as the three laws of planetary motion. Kepler's own experimental process can be understood as audio-first because he employed his understanding of Western Classical music theory to investigate and discover the heliocentric, elliptical behaviour of planetary orbits. Indeed, principles of harmonic motion govern much of our physical world and show up at all scales in mathematics and physics. Few physical systems, however, offer such rich harmonic complexity and beauty as our own solar system. Concordia is a musical instrument that is modular, extensible and designed to allow players to generate and explore transparent sonifications of planetary movements rooted in the musical and mathematical concepts of Johannes Kepler as well as researchers who have extended Kepler's work, such as Hartmut Warm. Its primary function is to emphasise the auditory experience by encouraging musical explorations using sonification of geometric and relational information of scientifically accurate planetary ephemeris and astrodynamics. Concordia highlights harmonic relationships of the solar system through interactive sonic immersion. This article explains how we prioritise data sonification and then add visualisations and gamification to create a new type of experience and creative distributed-ledger powered ecosystem.
Kepler Concordia facilitates the perception of music while presenting the celestial harmonies through multiple senses, with an emphasis on hearing, so that, as Kepler wrote, ‘the mind can seize upon the patterns’.
Citations: 2
3D interaction techniques for musical expression
IF 1.1 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2020-01-01 DOI: 10.1080/09298215.2019.1706584
Florent Berthaut
As Virtual Reality headsets become accessible, more and more artistic applications are developed, including immersive musical instruments. 3D interaction techniques designed in the 3D User Interfaces research community, such as navigation, selection and manipulation techniques, open numerous opportunities for musical control. For example, navigation techniques such as teleportation, free walking/flying and path-planning enable different ways of accessing musical scores, scenes of spatialised sound sources or even parameter spaces. Manipulation techniques provide novel gestures and metaphors, e.g. for drawing or sculpting sound entities. Finally, 3D selection techniques facilitate the interaction with complex visual structures which can represent hierarchical temporal structures, audio graphs, scores or parameter spaces. However, existing devices and techniques were developed mainly with a focus on efficiency, i.e. minimising error rate and task completion times. They were therefore not designed with the specifics of musical interaction in mind. In this paper, we review existing 3D interaction techniques and examine how they can be used for musical control, including the possibilities they open for instrument designers. We then propose a number of research directions to adapt and extend 3DUIs for musical expression.
Citations: 7