
Journal of New Music Research: Latest Publications

Testing a hybrid hardware quantum multi-agent system architecture that utilizes the quantum speed advantage for interactive computer music
IF 1.1 · CAS Tier 4 (Computer Science) · Q1 Arts and Humanities · Pub Date: 2020-04-13 · DOI: 10.1080/09298215.2020.1749672
Alexis Kirke
This paper introduces MIq (Multi-Agent Interactive qgMuse), which builds on the single agent quantum system qgMuse using teleportation. MIq is the first attempt at a real-time interactive quantum computer music algorithm that utilises the quantum advantage. Previous interactive or real-time quantum music algorithms running on quantum computers have been mappings of classical computing algorithms, with no quantum advantage obtained. MIq provides a quadratic speed-up over classical methods. It is a Quantum Hybrid Multi-agent System architecture implemented on 5 and 14 qubit quantum hardware. Classical agents and quantum agents connect via a classical/quantum hybrid agent that enables communication using quantum teleportation.
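The abstract does not give implementation details, but the quadratic speed-up it refers to is characteristic of Grover-style amplitude amplification. The sketch below simulates that speed-up over a hypothetical list of 16 candidate notes; the qubit count, note list and the index of the "rule-satisfying" note are illustrative assumptions, not part of MIq.

```python
# Minimal sketch (not the paper's MIq implementation): Grover-style amplitude
# amplification finding one note that satisfies a hypothetical musical rule.
import numpy as np

N_QUBITS = 4                          # 16 candidate notes -> ~sqrt(16) = 4 oracle calls
N = 2 ** N_QUBITS
MARKED = 10                           # assumed index of the rule-satisfying note

state = np.full(N, 1 / np.sqrt(N))    # uniform superposition over all candidates

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[MARKED] *= -1               # oracle: phase-flip the marked note
    state = 2 * state.mean() - state  # diffusion: inversion about the mean

print(f"after {iterations} iterations, P(marked note) = {state[MARKED] ** 2:.3f}")
# A classical scan needs ~N/2 = 8 rule checks on average; this needs ~sqrt(N) iterations.
```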
Citations: 3
Hearing tetrachords in an atonal context
IF 1.1 · CAS Tier 4 (Computer Science) · Q1 Arts and Humanities · Pub Date: 2020-04-13 · DOI: 10.1080/09298215.2020.1749285
J. Brown, Nathan Cornelius
This study examines the perception of tetrachords. Musicians were divided into four groups. One group heard a Bartók composition predominated by [0167]; other groups heard it recomposed with any instance of [0167] replaced with [0148], [0268], or [0257]. Analysis of ratings before and after familiarisation suggests that participants recognized the tetrachord from familiarisation, no matter which set-class was prominent in familiarisation and despite confounds of hearing real music. Tetrachords with similar intervals to the motive were also rated higher after familiarisation. Notably, participants demonstrated ability to generalise from a melodic presentation in familiarisation to a harmonic presentation in ratings phases.
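The set classes named in the abstract can be compared directly by their interval content. The sketch below uses standard pitch-class set theory (not the authors' analysis code) to compute interval-class vectors for the four tetrachords, which makes the "similar intervals to the motive" comparison concrete.

```python
# Interval-class vectors for the four tetrachords used in the study.
from itertools import combinations

def interval_class_vector(pcs):
    """Count interval classes 1..6 among all pairs of pitch classes."""
    vector = [0] * 6
    for a, b in combinations(pcs, 2):
        ic = min((a - b) % 12, (b - a) % 12)   # interval class = shorter of the two directions
        vector[ic - 1] += 1
    return vector

for name, pcs in {"[0167]": (0, 1, 6, 7),
                  "[0148]": (0, 1, 4, 8),
                  "[0268]": (0, 2, 6, 8),
                  "[0257]": (0, 2, 5, 7)}.items():
    print(name, interval_class_vector(pcs))
# [0167] -> [2, 0, 0, 0, 2, 2]; the abstract reports that tetrachords whose interval
# content resembles the motive's were also rated higher after familiarisation.
```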
Citations: 0
Sound mass, auditory perception, and ‘post-tone’ music
IF 1.1 · CAS Tier 4 (Computer Science) · Q1 Arts and Humanities · Pub Date: 2020-04-13 · DOI: 10.1080/09298215.2020.1749673
Jason Noble, S. McAdams
The term ‘post-tonal’ embodies a broad distinction between musical explorations of new combinations of tones (‘post-tonality’) and explorations of sonic resources other than tones (‘post-tone’). A significant turning-point in post-tone thinking occurred when some composers replaced notes with masses of notes, or sound masses, as musical units. Existing definitions of sound mass are reviewed and a new definition drawing on empirical evidence is offered. The perceptual principles that are involved in the perception of polyphonic music are demonstrated to also ground sound mass perception, with opposite aesthetic goals achieved through radically different musical organisation.
Citations: 6
Large-scale audience participation in live music using smartphones.
IF 1.1 · CAS Tier 4 (Computer Science) · Q1 Arts and Humanities · Pub Date: 2020-02-09 · eCollection Date: 2020-01-01 · DOI: 10.1080/09298215.2020.1722181
Oliver Hödl, Christoph Bartmann, Fares Kayali, Christian Löw, Peter Purgathofer

We present a study and reflection about the role and use of smartphone technology for a large-scale musical performance involving audience participation. We evaluated a full design and development process from initial ideation to a final performance concept. We found that the smartphone became the design tool, the technical device and the musical instrument at the same time. As a technical device that uses ultrasound communication as interaction technique, the smartphone became inspirational for the artist's creative work. In aiming to support the artist, we observed pervasive importance of retaining artistic control to realise artistic intent. This concerns the co-design process and the resulting concept of audience participation and supports recommendations for such participatory work.
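The abstract names ultrasound communication as the interaction technique but does not specify a protocol. The sketch below shows one hypothetical way such signalling could work, encoding a short bit string as near-ultrasonic FSK tones for playback over the venue loudspeakers; the carrier frequencies, symbol length and payload are assumptions, not the authors' design.

```python
# Hypothetical near-ultrasonic FSK encoder (illustrative only).
import numpy as np

FS = 44100                       # assumed audio sample rate (Hz)
F0, F1 = 18500, 19500            # assumed carriers for bit 0 / bit 1 (near-ultrasonic)
SYMBOL = int(0.05 * FS)          # 50 ms per bit

def encode(bits):
    t = np.arange(SYMBOL) / FS
    return np.concatenate([np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

signal = encode([1, 0, 1, 1, 0, 0, 1, 0])   # e.g. a hypothetical cue or seat identifier
print(f"{signal.size} samples = {signal.size / FS:.2f} s of near-ultrasonic audio")
```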

Citations: 13
From acceleration to rhythmicity: Smartphone-assessed movement predicts properties of music
IF 1.1 · CAS Tier 4 (Computer Science) · Q1 Arts and Humanities · Pub Date: 2020-01-30 · DOI: 10.1080/09298215.2020.1715447
M. Irrgang, J. Steffens, Hauke Egermann
ABSTRACT Querying music is still a disembodied process in Music Information Retrieval. Thus, the goal of the presented study was to explore how free and spontaneous movement captured by smartphone accelerometer data can be related to musical properties. Motion features related to tempo, smoothness, size, and regularity were extracted and shown to predict the musical qualities ‘rhythmicity’ (R² = .45), ‘pitch level + range’ (R² = .06) and ‘complexity’ (R² = .15). We conclude that (rhythmic) music properties can be predicted from movement, and that an embodied approach to MIR is feasible.
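As a rough illustration of the pipeline the abstract describes (motion features regressed against musical ratings), here is a sketch with synthetic data; the feature definitions, sampling rate and toy dataset are placeholders and do not reproduce the authors' method.

```python
# Toy accelerometer-to-rating regression (assumed features, synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
FS = 50                                        # assumed accelerometer rate (Hz)

def motion_features(acc):
    """acc: (n_samples, 3) smartphone accelerometer trace."""
    magnitude = np.linalg.norm(acc, axis=1)
    spectrum = np.abs(np.fft.rfft(magnitude - magnitude.mean()))
    freqs = np.fft.rfftfreq(magnitude.size, d=1 / FS)
    tempo = freqs[spectrum.argmax()] * 60      # dominant periodicity, in beats per minute
    jerk = np.diff(magnitude) * FS
    smoothness = -np.log(np.mean(jerk ** 2) + 1e-9)   # higher = smoother movement
    size = magnitude.std()                     # overall movement amplitude
    return [tempo, smoothness, size]

# Toy dataset: 40 ten-second excerpts with stand-in 'rhythmicity' ratings.
X = np.array([motion_features(rng.normal(size=(FS * 10, 3))) for _ in range(40)])
y = rng.normal(size=40)
model = LinearRegression().fit(X, y)
print("R^2 on the toy data:", round(model.score(X, y), 3))
```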
Citations: 1
Explaining harmonic inter-annotator disagreement using Hugo Riemann's theory of ‘harmonic function’
IF 1.1 · CAS Tier 4 (Computer Science) · Q1 Arts and Humanities · Pub Date: 2020-01-29 · DOI: 10.1080/09298215.2020.1716811
Anna Selway, Hendrik Vincent Koops, A. Volk, D. Bretherton, Nicholas Gibbins, R. Polfreman
ABSTRACT Harmonic transcriptions by ear rely heavily on subjective perceptions, which can lead to disagreement between annotators. The current computational metrics employed to measure annotator disagreement are useful for determining similarity on a pitch-class level, but are agnostic to the functional properties of chords. In contrast, music theories like Hugo Riemann's theory of ‘harmonic function’ acknowledge the similarity between chords currently unrecognised by computational metrics. This paper utilises Riemann's theory to explain the harmonic annotator disagreements in the Chordify Annotator Subjectivity Dataset. This theory allows us to explain 82% of the dataset, compared to the 66% explained using pitch-class based methods alone. This new interdisciplinary application of Riemann's theory increases our understanding of harmonic disagreement and introduces a method for improving harmonic evaluation metrics that takes into account the function of a chord in relation to a tonal centre.
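A toy example of the idea: when two annotators' chord labels are collapsed to Riemann's tonic/subdominant/dominant functions, some apparent disagreements vanish. The chord-to-function mapping and the annotations below are hypothetical and are not taken from the Chordify Annotator Subjectivity Dataset or the paper's evaluation code.

```python
# Literal vs. function-level agreement between two hypothetical annotators (C major).
FUNCTION = {                          # assumed mapping for diatonic chords in C major
    "C": "T", "Em": "T", "Am": "T",   # tonic function
    "F": "S", "Dm": "S",              # subdominant function
    "G": "D", "G7": "D", "Bdim": "D", # dominant function
}

annotator_a = ["C", "Am", "F", "G7", "C"]
annotator_b = ["C", "C", "Dm", "G", "Am"]

literal = sum(a == b for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)
functional = sum(FUNCTION[a] == FUNCTION[b]
                 for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)
print(f"literal agreement:    {literal:.0%}")    # 20% on this toy excerpt
print(f"functional agreement: {functional:.0%}") # 100%: every 'disagreement' stays in-function
```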
Citations: 5
A comparative study of verbal descriptions of emotions induced by music between adults with and without visual impairments
IF 1.1 · CAS Tier 4 (Computer Science) · Q1 Arts and Humanities · Pub Date: 2020-01-27 · DOI: 10.1080/09298215.2020.1717544
H. Park, S. Lee, H. Chong
ABSTRACT This study aimed to investigate the differences in verbal descriptions of emotions induced by music between adults who are visually impaired (VI) and adults who have normal vision (NV). Thirty participants (15 VI, 15 NV) listened to music excerpts and were interviewed. A content analysis and a syntactic analysis were performed. Among the VI group, contextual verbalism was more highly observed compared to media or educational verbalism and a high ratio of affective words, expressions and descriptions via senses other than vision was found. The VI more frequently employed situational descriptions while the NV more often described episodic memories.
Citations: 1
Creative autonomy in a simple interactive music system
IF 1.1 · CAS Tier 4 (Computer Science) · Q1 Arts and Humanities · Pub Date: 2020-01-21 · DOI: 10.1080/09298215.2019.1709510
Fabio Paolizzo, Colin G. Johnson
ABSTRACT Can autonomous systems be musically creative without musical knowledge? Assumptions from interdisciplinary studies on self-reflection are evaluated using Video Interactive VST Orchestra, a system that generates music from audio and video inputs through an analysis of video motion and simultaneous sound processing. The system is able to generate material that is primary, novel and contextual. A case study provides evidence that these three simple features allow the system to identify musical salience in the material that it is generating, and for the system to act as an autonomous musical agent.
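As a sketch of the kind of video-to-sound coupling the abstract describes (not the Video Interactive VST Orchestra system itself), the snippet below derives a motion-energy value from frame differences and maps it to a MIDI pitch; the synthetic frames and the pitch mapping are illustrative assumptions.

```python
# Hypothetical video-motion-to-pitch mapping (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
frames = rng.random((30, 120, 160))              # 30 toy greyscale frames

def motion_energy(prev, curr):
    return float(np.mean(np.abs(curr - prev)))   # mean absolute frame difference

pitches = []
for prev, curr in zip(frames, frames[1:]):
    energy = motion_energy(prev, curr)           # roughly in [0, 1] for these frames
    pitches.append(int(48 + round(energy * 72))) # assumed mapping onto a MIDI pitch range

print("MIDI pitches driven by motion:", pitches[:8])
# A real system would send each pitch to a synthesiser / VST host in real time.
```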
Citations: 3
The influence of the vocal tract on the attack transients in clarinet playing.
IF 1.1 · CAS Tier 4 (Computer Science) · Q1 Arts and Humanities · Pub Date: 2020-01-20 · eCollection Date: 2020-01-01 · DOI: 10.1080/09298215.2019.1708412
Montserrat Pàmies-Vilà, Alex Hofmann, Vasileios Chatziioannou

When playing single-reed woodwind instruments, players can modulate the spectral content of the airflow in their vocal tract, upstream of the vibrating reed. In an empirical study with professional clarinettists (N_p = 11), blowing pressure and mouthpiece pressure were measured during the performance of Clarinet Concerto excerpts. By comparing mouth pressure and mouthpiece pressure signals in the time domain, a method to detect instances of vocal tract adjustments was established. Results showed that players tuned their vocal tract in both clarion and altissimo registers. Furthermore, the analysis revealed that vocal tract adjustments support shorter attack transients and help to avoid lower bore resonances.
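One simple way to compare the two pressure signals in the time domain, sketched below with synthetic signals, is to take windowed RMS ratios of the upstream (mouth) and downstream (mouthpiece) oscillations and flag windows where the upstream component grows; the signals, window size and threshold are assumptions, not the authors' detection method.

```python
# Toy windowed comparison of mouth vs. mouthpiece pressure oscillations.
import numpy as np

FS = 44100
t = np.arange(0, 1.0, 1 / FS)
mouthpiece = 2.0 * np.sin(2 * np.pi * 440 * t)   # strong downstream (reed) oscillation
mouth = 0.1 * np.sin(2 * np.pi * 440 * t)        # normally small upstream ripple
mouth[FS // 2:] *= 8.0                           # simulated vocal-tract tuning in the second half

def windowed_rms(x, win):
    n = x.size // win
    return np.sqrt(np.mean(x[: n * win].reshape(n, win) ** 2, axis=1))

ratio = windowed_rms(mouth, 2048) / windowed_rms(mouthpiece, 2048)
flagged = ratio > 0.3                            # assumed detection threshold
print(f"{flagged.sum()} of {flagged.size} windows flagged as vocal-tract adjustment")
```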

Citations: 4
Dance to your own drum: Identification of musical genre and individual dancer from motion capture using machine learning
IF 1.1 · CAS Tier 4 (Computer Science) · Q1 Arts and Humanities · Pub Date: 2020-01-13 · DOI: 10.1080/09298215.2020.1711778
Emily Carlson, Pasi Saari, Birgitta Burger, P. Toiviainen
ABSTRACT Machine learning has been used to accurately classify musical genre using features derived from audio signals. Musical genre, as well as lower-level audio features of music, have also been shown to influence music-induced movement, however, the degree to which such movements are genre-specific has not been explored. The current paper addresses this using motion capture data from participants dancing freely to eight genres. Using a Support Vector Machine model, data were classified by genre and by individual dancer. Against expectations, individual classification was notably more accurate than genre classification. Results are discussed in terms of embodied cognition and culture.
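A minimal sketch of the classification setup follows: the same feature matrix is classified once by dancer and once by genre with a Support Vector Machine. The features and label structure are synthetic (deliberately giving each dancer a stronger "signature" than each genre) and do not reproduce the authors' motion-capture feature extraction.

```python
# Toy dancer-vs-genre SVM classification on synthetic motion features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_dancers, n_genres, n_features = 10, 8, 20

# One toy feature vector per dancer per genre, with a stronger dancer 'signature'.
dancer_style = rng.normal(scale=2.0, size=(n_dancers, n_features))
genre_style = rng.normal(scale=0.5, size=(n_genres, n_features))
X = np.array([dancer_style[d] + genre_style[g] + rng.normal(scale=0.5, size=n_features)
              for d in range(n_dancers) for g in range(n_genres)])
dancer_labels = np.repeat(np.arange(n_dancers), n_genres)
genre_labels = np.tile(np.arange(n_genres), n_dancers)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("dancer CV accuracy:", cross_val_score(clf, X, dancer_labels, cv=4).mean())
print("genre CV accuracy: ", cross_val_score(clf, X, genre_labels, cv=4).mean())
```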
Citations: 23