Pub Date: 2020-01-30. DOI: 10.1080/09298215.2020.1715447
M. Irrgang, J. Steffens, Hauke Egermann
ABSTRACT Querying music is still a disembodied process in Music Information Retrieval. Thus, the goal of the presented study was to explore how free and spontaneous movement captured by smartphone accelerometer data can be related to musical properties. Motion features related to tempo, smoothness, size, and regularity were extracted and shown to predict the musical qualities ‘rhythmicity’ (R² = .45), ‘pitch level + range’ (R² = .06) and ‘complexity’ (R² = .15). We conclude that (rhythmic) music properties can be predicted from movement, and that an embodied approach to MIR is feasible.
{"title":"From acceleration to rhythmicity: Smartphone-assessed movement predicts properties of music","authors":"M. Irrgang, J. Steffens, Hauke Egermann","doi":"10.1080/09298215.2020.1715447","DOIUrl":"https://doi.org/10.1080/09298215.2020.1715447","url":null,"abstract":"ABSTRACT Querying music is still a disembodied process in Music Information Retrieval. Thus, the goal of the presented study was to explore how free and spontaneous movement captured by smartphone accelerometer data can be related to musical properties. Motion features related to tempo, smoothness, size, and regularity were extracted and shown to predict the musical qualities ‘rhythmicity’ (R² = .45), ‘pitch level + range’ (R² = .06) and ‘complexity (R² = .15). We conclude that (rhythmic) music properties can be predicted from movement, and that an embodied approach to MIR is feasible.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"178 - 191"},"PeriodicalIF":1.1,"publicationDate":"2020-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1715447","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46160146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-29. DOI: 10.1080/09298215.2020.1716811
Anna Selway, Hendrik Vincent Koops, A. Volk, D. Bretherton, Nicholas Gibbins, R. Polfreman
ABSTRACT Harmonic transcriptions by ear rely heavily on subjective perceptions, which can lead to disagreement between annotators. The computational metrics currently employed to measure annotator disagreement are useful for determining similarity on a pitch-class level, but are agnostic to the functional properties of chords. In contrast, music theories like Hugo Riemann's theory of ‘harmonic function’ acknowledge similarities between chords that these metrics do not capture. This paper utilises Riemann's theory to explain the harmonic annotator disagreements in the Chordify Annotator Subjectivity Dataset. The theory allows us to explain 82% of the dataset, compared to the 66% explained using pitch-class based methods alone. This new interdisciplinary application of Riemann's theory increases our understanding of harmonic disagreement and introduces a method for improving harmonic evaluation metrics that takes into account the function of a chord in relation to a tonal centre.
{"title":"Explaining harmonic inter-annotator disagreement using Hugo Riemann's theory of ‘harmonic function’","authors":"Anna Selway, Hendrik Vincent Koops, A. Volk, D. Bretherton, Nicholas Gibbins, R. Polfreman","doi":"10.1080/09298215.2020.1716811","DOIUrl":"https://doi.org/10.1080/09298215.2020.1716811","url":null,"abstract":"ABSTRACT Harmonic transcriptions by ear rely heavily on subjective perceptions, which can lead to disagreement between annotators. The current computational metrics employed to measure annotator disagreement are useful for determining similarity on a pitch-class level, but are agnostic to the functional properties of chords. In contrast, music theories like Hugo Riemann's theory of ‘harmonic function’ acknowledge the similarity between chords currently unrecognised by computational metrics. This paper, utilises Riemann's theory to explain the harmonic annotator disagreements in the Chordify Annotator Subjectivity Dataset. This theory allows us to explain 82% of the dataset, compared to the 66% explained using pitch-class based methods alone. This new interdisiplinary application of Riemann's theory increases our understanding of harmonic disagreement and introduces a method for improving harmonic evaluation metrics that takes into account the function of a chord in relation to a tonal centre.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"136 - 150"},"PeriodicalIF":1.1,"publicationDate":"2020-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1716811","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46547520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-27. DOI: 10.1080/09298215.2020.1717544
H. Park, S. Lee, H. Chong
ABSTRACT This study aimed to investigate differences in verbal descriptions of emotions induced by music between adults who are visually impaired (VI) and adults who have normal vision (NV). Thirty participants (15 VI, 15 NV) listened to music excerpts and were interviewed. A content analysis and a syntactic analysis were performed. In the VI group, contextual verbalism was observed more often than media or educational verbalism, and a high ratio of affective words, expressions, and descriptions via senses other than vision was found. The VI more frequently employed situational descriptions, while the NV more often described episodic memories.
{"title":"A comparative study of verbal descriptions of emotions induced by music between adults with and without visual impairments","authors":"H. Park, S. Lee, H. Chong","doi":"10.1080/09298215.2020.1717544","DOIUrl":"https://doi.org/10.1080/09298215.2020.1717544","url":null,"abstract":"ABSTRACT This study aimed to investigate the differences in verbal descriptions of emotions induced by music between adults who are visually impaired (VI) and adults who have normal vision (NV). Thirty participants (15 VI, 15 NV) listened to music excerpts and were interviewed. A content analysis and a syntactic analysis were performed. Among the VI group, contextual verbalism was more highly observed compared to media or educational verbalism and a high ratio of affective words, expressions and descriptions via senses other than vision was found. The VI more frequently employed situational descriptions while the NV more often described episodic memories.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"151 - 161"},"PeriodicalIF":1.1,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1717544","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43434575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-21. DOI: 10.1080/09298215.2019.1709510
Fabio Paolizzo, Colin G. Johnson
ABSTRACT Can autonomous systems be musically creative without musical knowledge? Assumptions from interdisciplinary studies on self-reflection are evaluated using Video Interactive VST Orchestra, a system that generates music from audio and video inputs through an analysis of video motion and simultaneous sound processing. The system is able to generate material that is primary, novel and contextual. A case study provides evidence that these three simple features allow the system to identify musical salience in the material that it is generating, and for the system to act as an autonomous musical agent.
{"title":"Creative autonomy in a simple interactive music system","authors":"Fabio Paolizzo, Colin G. Johnson","doi":"10.1080/09298215.2019.1709510","DOIUrl":"https://doi.org/10.1080/09298215.2019.1709510","url":null,"abstract":"ABSTRACT Can autonomous systems be musically creative without musical knowledge? Assumptions from interdisciplinary studies on self-reflection are evaluated using Video Interactive VST Orchestra, a system that generates music from audio and video inputs through an analysis of video motion and simultaneous sound processing. The system is able to generate material that is primary, novel and contextual. A case study provides evidence that these three simple features allow the system to identify musical salience in the material that it is generating, and for the system to act as an autonomous musical agent.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"115 - 125"},"PeriodicalIF":1.1,"publicationDate":"2020-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2019.1709510","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43165199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-20. eCollection Date: 2020-01-01. DOI: 10.1080/09298215.2019.1708412
Montserrat Pàmies-Vilà, Alex Hofmann, Vasileios Chatziioannou
When playing single-reed woodwind instruments, players can modulate the spectral content of the airflow in their vocal tract, upstream of the vibrating reed. In an empirical study with professional clarinettists (N_p = 11), blowing pressure and mouthpiece pressure were measured during the performance of Clarinet Concerto excerpts. By comparing mouth pressure and mouthpiece pressure signals in the time domain, a method to detect instances of vocal tract adjustments was established. Results showed that players tuned their vocal tract in both the clarion and altissimo registers. Furthermore, the analysis revealed that vocal tract adjustments support shorter attack transients and help to avoid lower bore resonances.
{"title":"The influence of the vocal tract on the attack transients in clarinet playing.","authors":"Montserrat Pàmies-Vilà, Alex Hofmann, Vasileios Chatziioannou","doi":"10.1080/09298215.2019.1708412","DOIUrl":"https://doi.org/10.1080/09298215.2019.1708412","url":null,"abstract":"<p><p>When playing single-reed woodwind instruments, players can modulate the spectral content of the airflow in their vocal tract, upstream of the vibrating reed. In an empirical study with professional clarinettists ( <math> <msub><mrow><mi>N</mi></mrow> <mrow><mrow><mi>p</mi></mrow> </mrow> </msub> <mo>=</mo> <mn>11</mn></math> ), blowing pressure and mouthpiece pressure were measured during the performance of Clarinet Concerto excerpts. By comparing mouth pressure and mouthpiece pressure signals in the time domain, a method to detect instances of vocal tract adjustments was established. Results showed that players tuned their vocal tract in both clarion and altissimo registers. Furthermore, the analysis revealed that vocal tract adjustments support shorter attack transients and help to avoid lower bore resonances.</p>","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 2","pages":"126-135"},"PeriodicalIF":1.1,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2019.1708412","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37807891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-13. DOI: 10.1080/09298215.2020.1711778
Emily Carlson, Pasi Saari, Birgitta Burger, P. Toiviainen
ABSTRACT Machine learning has been used to accurately classify musical genre using features derived from audio signals. Musical genre, as well as lower-level audio features of music, has also been shown to influence music-induced movement; however, the degree to which such movements are genre-specific has not been explored. The current paper addresses this using motion capture data from participants dancing freely to eight genres. Using a Support Vector Machine model, the data were classified by genre and by individual dancer. Against expectations, individual classification was notably more accurate than genre classification. Results are discussed in terms of embodied cognition and culture.
{"title":"Dance to your own drum: Identification of musical genre and individual dancer from motion capture using machine learning","authors":"Emily Carlson, Pasi Saari, Birgitta Burger, P. Toiviainen","doi":"10.1080/09298215.2020.1711778","DOIUrl":"https://doi.org/10.1080/09298215.2020.1711778","url":null,"abstract":"ABSTRACT Machine learning has been used to accurately classify musical genre using features derived from audio signals. Musical genre, as well as lower-level audio features of music, have also been shown to influence music-induced movement, however, the degree to which such movements are genre-specific has not been explored. The current paper addresses this using motion capture data from participants dancing freely to eight genres. Using a Support Vector Machine model, data were classified by genre and by individual dancer. Against expectations, individual classification was notably more accurate than genre classification. Results are discussed in terms of embodied cognition and culture.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"162 - 177"},"PeriodicalIF":1.1,"publicationDate":"2020-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1711778","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42187939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-08. DOI: 10.1080/09298215.2021.1873392
Yin-Cheng Yeh, Wen-Yi Hsiao, Satoru Fukayama, Tetsuro Kitahara, Benjamin Genchel, Hao-Min Liu, Hao-Wen Dong, Yian Chen, T. Leong, Yi-Hsuan Yang
The task of automatic melody harmonization aims to build a model that generates a chord sequence as the harmonic accompaniment of a given multiple-bar melody sequence. In this paper, we present a comparative study evaluating the performance of canonical approaches to this task, including template matching, hidden Markov model, genetic algorithm and deep learning. The evaluation is conducted on a dataset of 9226 melody/chord pairs, considering 48 different triad chords. We report the result of an objective evaluation using six different metrics and a subjective study with 202 participants, showing that a deep learning method performs the best.
{"title":"Automatic melody harmonization with triad chords: A comparative study","authors":"Yin-Cheng Yeh, Wen-Yi Hsiao, Satoru Fukayama, Tetsuro Kitahara, Benjamin Genchel, Hao-Min Liu, Hao-Wen Dong, Yian Chen, T. Leong, Yi-Hsuan Yang","doi":"10.1080/09298215.2021.1873392","DOIUrl":"https://doi.org/10.1080/09298215.2021.1873392","url":null,"abstract":"The task of automatic melody harmonization aims to build a model that generates a chord sequence as the harmonic accompaniment of a given multiple-bar melody sequence. In this paper, we present a comparative study evaluating the performance of canonical approaches to this task, including template matching, hidden Markov model, genetic algorithm and deep learning. The evaluation is conducted on a dataset of 9226 melody/chord pairs, considering 48 different triad chords. We report the result of an objective evaluation using six different metrics and a subjective study with 202 participants, showing that a deep learning method performs the best.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"37 - 51"},"PeriodicalIF":1.1,"publicationDate":"2020-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2021.1873392","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49388382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-01. DOI: 10.1080/09298215.2019.1707234
Anil Çamci, R. Hamilton
ABSTRACT This special issue of the Journal of New Music Research explores VR (Virtual Reality) through the lenses of music, art and technology, each focusing on foregrounded sonic expression – an audio-first VR, wherein sound is treated not only as an integral part of immersive virtual experiences but also as a critical point of departure for creative and technological work in this domain. In this article, we identify emerging challenges and opportunities in audio-first VR, and pose questions pertaining to both theoretical and practical aspects of this concept. We then discuss how each contribution to our special issue addresses these questions through research and artistic projects, giving us a glimpse into the future of audio in VR.
{"title":"Audio-first VR: New perspectives on musical experiences in virtual environments","authors":"Anil Çamci, R. Hamilton","doi":"10.1080/09298215.2019.1707234","DOIUrl":"https://doi.org/10.1080/09298215.2019.1707234","url":null,"abstract":"ABSTRACT This special issue of the Journal of New Music Research explores VR (Virtual Reality) through the lenses of music, art and technology, each focusing on foregrounded sonic expression – an audio-first VR, wherein sound is treated not only as an integral part of immersive virtual experiences but also as a critical point of departure for creative and technological work in this domain. In this article, we identify emerging challenges and opportunities in audio-first VR, and pose questions pertaining to both theoretical and practical aspects of this concept. We then discuss how each contribution to our special issue addresses these questions through research and artistic projects, giving us a glimpse into the future of audio in VR.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"1 - 7"},"PeriodicalIF":1.1,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2019.1707234","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42975208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-01. DOI: 10.1080/09298215.2020.1714666
K. Snook, T. Barri, Monica Bolles, Petter Ericson, Carl Fravel, J. Goßmann, Susan E. Green-Mateu, Andrew Luck, M. Schedel, Robert Thomas
ABSTRACT Kepler Concordia, a new scientific and musical instrument enabling players to explore the solar system and other data within immersive extended-reality (XR) platforms, is being designed by a diverse team of musicians, artists, scientists and engineers using audio-first principles. The core instrument modules will be launched in 2019 for the 400th anniversary of Johannes Kepler's Harmonies of the World, in which he laid out a framework for the harmony of geometric form as well as the three laws of planetary motion. Kepler's own experimental process can be understood as audio-first because he employed his understanding of Western Classical music theory to investigate and discover the heliocentric, elliptical behaviour of planetary orbits. Indeed, principles of harmonic motion govern much of our physical world and show up at all scales in mathematics and physics. Few physical systems, however, offer such rich harmonic complexity and beauty as our own solar system. Concordia is a musical instrument that is modular, extensible and designed to allow players to generate and explore transparent sonifications of planetary movements rooted in the musical and mathematical concepts of Johannes Kepler as well as researchers who have extended Kepler's work, such as Hartmut Warm. Its primary function is to emphasise the auditory experience by encouraging musical explorations using sonification of geometric and relational information of scientifically accurate planetary ephemeris and astrodynamics. Concordia highlights harmonic relationships of the solar system through interactive sonic immersion. This article explains how we prioritise data sonification and then add visualisations and gamification to create a new type of experience and creative distributed-ledger powered ecosystem. Kepler Concordia facilitates the perception of music while presenting the celestial harmonies through multiple senses, with an emphasis on hearing, so that, as Kepler wrote, ‘the mind can seize upon the patterns’.
{"title":"Concordia: A musical XR instrument for playing the solar system","authors":"K. Snook, T. Barri, Monica Bolles, Petter Ericson, Carl Fravel, J. Goßmann, Susan E. Green-Mateu, Andrew Luck, M. Schedel, Robert Thomas","doi":"10.1080/09298215.2020.1714666","DOIUrl":"https://doi.org/10.1080/09298215.2020.1714666","url":null,"abstract":"ABSTRACT Kepler Concordia, a new scientific and musical instrument enabling players to explore the solar system and other data within immersive extended-reality (XR) platforms, is being designed by a diverse team of musicians, artists, scientists and engineers using audio-first principles. The core instrument modules will be launched in 2019 for the 400th anniversary of Johannes Kepler's Harmonies of the World, in which he laid out a framework for the harmony of geometric form as well as the three laws of planetary motion. Kepler's own experimental process can be understood as audio-first because he employed his understanding of Western Classical music theory to investigate and discover the heliocentric, elliptical behaviour of planetary orbits. Indeed, principles of harmonic motion govern much of our physical world and show up at all scales in mathematics and physics. Few physical systems, however, offer such rich harmonic complexity and beauty as our own solar system. Concordia is a musical instrument that is modular, extensible and designed to allow players to generate and explore transparent sonifications of planetary movements rooted in the musical and mathematical concepts of Johannes Kepler as well as researchers who have extended Kepler's work, such as Hartmut Warm. Its primary function is to emphasise the auditory experience by encouraging musical explorations using sonification of geometric and relational information of scientifically accurate planetary ephemeris and astrodynamics. Concordia highlights harmonic relationships of the solar system through interactive sonic immersion. This article explains how we prioritise data sonification and then add visualisations and gamification to create a new type of experience and creative distributed-ledger powered ecosystem. Kepler Concordia facilitates the perception of music while presenting the celestial harmonies through multiple senses, with an emphasis on hearing, so that, as Kepler wrote, ‘the mind can seize upon the patterns’.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"103 - 88"},"PeriodicalIF":1.1,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2020.1714666","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47869105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-01. DOI: 10.1080/09298215.2019.1706584
Florent Berthaut
As Virtual Reality headsets become accessible, more and more artistic applications are being developed, including immersive musical instruments. 3D interaction techniques designed in the 3D User Interfaces research community, such as navigation, selection and manipulation techniques, open numerous opportunities for musical control. For example, navigation techniques such as teleportation, free walking/flying and path-planning enable different ways of accessing musical scores, scenes of spatialised sound sources or even parameter spaces. Manipulation techniques provide novel gestures and metaphors, e.g. for drawing or sculpting sound entities. Finally, 3D selection techniques facilitate interaction with complex visual structures which can represent hierarchical temporal structures, audio graphs, scores or parameter spaces. However, existing devices and techniques were developed mainly with a focus on efficiency, i.e. minimising error rate and task completion times. They were therefore not designed with the specifics of musical interaction in mind. In this paper, we review existing 3D interaction techniques and examine how they can be used for musical control, including the possibilities they open for instrument designers. We then propose a number of research directions to adapt and extend 3DUIs for musical expression.
{"title":"3D interaction techniques for musical expression","authors":"Florent Berthaut","doi":"10.1080/09298215.2019.1706584","DOIUrl":"https://doi.org/10.1080/09298215.2019.1706584","url":null,"abstract":"As Virtual Reality headsets become accessible, more and more artistic applications are developed, including immersive musical instruments. 3D interaction techniques designed in the 3D User Interfaces research community, such as navigation, selection and manipulation techniques, open numerous opportunities for musical control. For example, navigation techniques such as teleportation, free walking/flying and path-planning enable different ways of accessing musical scores, scenes of spatialised sound sources or even parameter spaces. Manipulation techniques provide novel gestures and metaphors, e.g. for drawing or sculpting sound entities. Finally, 3D selection techniques facilitate the interaction with complex visual structures which can represent hierarchical temporal structures, audio graphs, scores or parameter spaces. However, existing devices and techniques were developed mainly with a focus on efficiency, i.e. minimising error rate and task completion times. They were therefore not designed with the specifics of musical interaction in mind. In this paper, we review existing 3D interaction techniques and examine how they can be used for musical control, including the possibilities they open for instrument designers. We then propose a number of research directions to adapt and extend 3DUIs for musical expression","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"49 1","pages":"60 - 72"},"PeriodicalIF":1.1,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2019.1706584","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45221830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}