
Computer Music Journal: Latest Publications

Finite State Machines with Data Paths in Visual Languages for Music
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-09-04 DOI: 10.1162/comj_a_00688
Tiago Fernandes Tavares, José Eduardo Fornari
Some music-domain visual programming languages (VPLs) have been shown to be Turing complete. However, the common lack of built-in flow-control structures can obstruct the use of VPLs to implement general-purpose algorithms, which hampers the direct use of algorithms and algorithm theory in art-creation processes based on VPLs. In this article, we show how to systematically implement general-purpose algorithms in music-domain visual languages by using the computation model known as a finite state machine with data path. The result exposes a finite state machine and a set of internal state variables that traverse paths whose speed can be controlled using metronome ticks and whose course depends on the algorithm's initial conditions. These elements can be further mapped to musical elements according to the musician's intentions. We demonstrate this technique by implementing Euclid's greatest-common-divisor algorithm and using it to control high-level musical elements in an implementation of Terry Riley's In C, and to control audio synthesis parameters in a frequency-modulation synthesizer.
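The computation model named here is easy to prototype in ordinary code. Below is a minimal Python sketch of the idea as the abstract describes it, not the authors' implementation: Euclid's greatest-common-divisor algorithm expressed as a finite state machine with a data path, advanced one transition per metronome tick. The class name, state labels, and `tick()` interface are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation):
# Euclid's GCD as a finite state machine with a data path. Each call
# to tick() -- e.g., driven by a metronome -- advances the machine by
# one transition, so traversal speed follows the tick rate and the
# path taken depends on the initial values of (a, b).

class EuclidFSMD:
    def __init__(self, a, b):
        self.a, self.b = a, b     # data path: internal state variables
        self.state = "COMPARE"    # control path: current FSM state

    def tick(self):
        """Advance one step; returns (state, a, b) for mapping to music."""
        if self.state == "COMPARE":
            self.state = "DONE" if self.b == 0 else "REDUCE"
        elif self.state == "REDUCE":
            self.a, self.b = self.b, self.a % self.b
            self.state = "COMPARE"
        return self.state, self.a, self.b

machine = EuclidFSMD(18, 12)
while machine.state != "DONE":
    print(machine.tick())  # map states/variables to pitches, dynamics, etc.
print("gcd =", machine.a)
```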
Citations: 0
Generating Sonic Phantoms with Quadratic Difference Tone Spectrum Synthesis
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-09-03 DOI: 10.1162/comj_a_00687
Esteban Gutiérrez, Christopher Haworth, Rodrigo F. Cádiz
Quadratic difference tones belong to a family of perceptual phenomena that arise from the neuromechanics of the auditory system in response to particular physical properties of sound. Although such tones have long been deployed as “ghost” or “phantom” tones by sound artists, improvisers, and computer musicians, in this article we address an entirely new topic: how to create a quadratic difference tone spectrum (QDTS) in which a target fundamental and harmonic overtone series are specified and the complex tone necessary to evoke them is synthesized. We propose a numerical algorithm that solves the problem of how to synthesize a QDTS for a target distribution of amplitudes. The algorithm aims to find a solution that matches the desired spectrum as closely as possible for an arbitrary number of target harmonics. Results from experiments using different parameter settings and target distributions show that the algorithm is effective in the majority of cases, with at least 99% of cases solvable in real time. An external object for the visual programming language Max is described. We discuss musical and perceptual considerations for using the external, and we describe a range of audio examples that demonstrate the synthesis of QDTSs across different cases. As we show, the method makes it possible to match QDTSs to particular instrumental timbres with surprising efficiency. Also included is a discussion of a musical work by composer Marcin Pietruszewski that makes use of QDTS synthesis.
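A quadratic nonlinearity applied to two partials at f1 < f2 produces a difference tone at f2 - f1, so partials placed at fc + k*f0 yield difference tones at multiples of f0, with the component at k*f0 weighted by the sum of g[i]*g[i+k] over the partial amplitudes g. The following Python sketch illustrates that problem setup under these assumptions; it is a toy least-squares fit, not the authors' algorithm, and all parameter values are invented.

```python
# Hedged sketch (not the article's algorithm): fit partial amplitudes g
# for partials at fc + k*f0 so that the quadratic difference tones,
# whose component at k*f0 is sum_i g[i]*g[i+k], approximate a target
# harmonic amplitude distribution A.
import numpy as np
from scipy.optimize import least_squares

f0, fc = 110.0, 3000.0                  # target fundamental; high carrier (assumed)
A = np.array([1.0, 0.5, 0.25, 0.125])   # target amplitudes for harmonics k = 1..4
n = len(A) + 1                          # partials at fc + k*f0, k = 0..4

def residual(g):
    pred = np.array([np.dot(g[:-k], g[k:]) for k in range(1, n)])
    return np.concatenate([pred - A, 0.01 * g])  # small ridge term tames the fit

g = least_squares(residual, x0=np.full(n, 0.5), bounds=(0.0, np.inf)).x
partials = fc + f0 * np.arange(n)
for f, amp in zip(partials, g):
    print(f"{f:7.1f} Hz  amplitude {amp:.3f}")
```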
Citations: 0
Embodying Spatial Sound Synthesis with AI in Two Compositions for Instruments and 3-D Electronics
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-01-17 DOI: 10.1162/comj_a_00664
Aaron Einbond, Thibaut Carpentier, Diemo Schwarz, Jean Bresson
The situated spatial presence of musical instruments has been well studied in the fields of acoustics and music perception research, but so far it has not been the focus of human-AI interaction. We respond critically to this trend by seeking to reembody interactive electronics using data derived from natural acoustic phenomena. Two musical works, composed for human soloist and computer-generated live electronics, are intended to situate the listener in an immersive sonic environment in which real and virtual sources blend seamlessly. To do so, we experimented with two contrasting reproduction setups: a surrounding Ambisonic loudspeaker dome and a compact spherical loudspeaker array for radiation synthesis. A large database of measured radiation patterns of orchestral instruments served as a training set for machine learning models to control spatially rich 3-D patterns for electronic sounds. These are exploited during performance in response to live sounds captured with a spherical microphone array and used to train computer models of improvisation and to trigger corpus-based spatial synthesis. We show how AI techniques are useful to utilize complex, multidimensional, spatial data in the context of computer-assisted composition and human-computer interactive improvisation.
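The abstract leaves the model unspecified, so the following Python fragment is only an illustrative stand-in for the general idea: learn a mapping from audio descriptors to measured radiation patterns, then query it with live analysis frames to drive a loudspeaker array. The k-NN regressor, feature choices, and data shapes are all assumptions, not the authors' setup.

```python
# Illustrative stand-in only; the article does not specify this model.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
descriptors = rng.random((500, 2))   # stand-in training features, e.g., [centroid, loudness]
patterns = rng.random((500, 32))     # stand-in measured radiation gains (32 directions)

model = KNeighborsRegressor(n_neighbors=5).fit(descriptors, patterns)

live_frame = np.array([[0.4, 0.7]])  # descriptors from one live analysis frame
gains = model.predict(live_frame)[0] # 32 spatial gains for the speaker array
print(gains.shape)
```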
Citations: 0
Cocreative Interaction: Somax2 and the REACH Project
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-01-16 DOI: 10.1162/comj_a_00662
Gérard Assayag, Laurent Bonnasse-Gahot, Joakim Borg
Somax2 is an AI-based multiagent system for human-machine coimprovisation that generates stylistically coherent streams while continuously listening and adapting to musicians or other agents. The model on which it is based can be used with little configuration to interact with humans in full autonomy, but it also allows fine real-time control of its generative processes and interaction strategies, closer in this case to a “smart” digital instrument. An offspring of the Omax system, conceived at the Institut de Recherche et Coordination Acoustique/Musique, the Somax2 environment is part of the European Research Council Raising Cocreativity in Cyber-Human Musicianship (REACH) project, which studies distributed creativity as a general template for symbiotic interaction between humans and digital systems. It fosters mixed musical reality involving cocreative AI agents. The REACH project puts forward the idea that cocreativity in cyber-human systems results from the emergence of complex joint behavior, produced by interaction and featuring cross-learning mechanisms. Somax2 is a first step toward this ideal, and already shows life-size achievements. This article describes Somax2 extensively, from its theoretical model to its system architecture, through its listening and learning strategies, representation spaces, and interaction policies.
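As a toy illustration of the listen-and-generate loop sketched above, and emphatically not Somax2's actual model, an agent can match the most recent pitch context heard from a musician against an n-gram memory of a corpus and propose stylistically consistent continuations:

```python
# Toy illustration only -- not Somax2's model. The agent indexes a
# corpus as overlapping pitch n-grams; while "listening," it matches
# the latest context and proposes a corpus-consistent continuation.
from collections import defaultdict
import random

corpus = [60, 62, 64, 62, 60, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]
N = 3  # context length (assumed)

continuations = defaultdict(list)
for i in range(len(corpus) - N):
    continuations[tuple(corpus[i:i + N])].append(corpus[i + N])

def respond(heard, fallback=60):
    """Propose the next pitch given the last N pitches heard."""
    options = continuations.get(tuple(heard[-N:]))
    return random.choice(options) if options else fallback

heard = [60, 62, 64]          # pitches captured from the live musician
for _ in range(8):
    heard.append(respond(heard))
print(heard)
```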
Citations: 0
Live Coding Machine Learning: Finding the Moments of Intervention in Autonomous Processes
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-01-16 DOI: 10.1162/comj_a_00663
Iván Paz, Shelly Knotts
Machine learning (ML) deals with algorithms able to learn from data, with the primary aim of finding optimal solutions to perform tasks autonomously. In recent years there have been developments in integrating ML algorithms with live coding practices, raising questions about what to optimize or automate, the agency of the algorithms, and in which parts of the ML process one might intervene mid-performance. Live coding performance practices typically involve conversational interaction with algorithmic processes in real time. In analyzing systems integrating live coding and ML, we consider the musical and performative implications of the “moment of intervention” in the ML model and workflow, and the channels for real-time intervention. We propose a framework for analysis, through which we reflect on the domain-specific algorithms and practices being developed that combine these two practices.
Citations: 0
Tool or Actor? Expert Improvisers' Evaluation of a Musical AI “Toddler”
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-11-10 DOI: 10.1162/comj_a_00657
Çağrı Erdem, Benedikte Wallace, Kyrre Glette, Alexander Refsum Jensenius
In this article we introduce the coadaptive audiovisual instrument CAVI. This instrument uses deep learning to generate control signals based on muscle and motion data of a performer's actions. The generated signals control time-based live sound-processing modules. How does a performer perceive such an instrument? Does it feel like a machine learning-based musical tool? Or is it an actor with the potential to become a musical partner? We report on an evaluation of CAVI after it had been used in two public performances. The evaluation is based on interviews with the performers, audience questionnaires, and the creator's self-analysis. Our findings suggest that the perception of CAVI as a tool or actor correlates with the performer's sense of agency. The perceived agency changes throughout a performance based on several factors, including perceived musical coordination, the balance between surprise and familiarity, a “common sense,” and the physical characteristics of the performance setting.
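As a schematic sketch of the pipeline the abstract describes (frames of muscle and motion features mapped by a learned model to control signals for live sound processing), the PyTorch fragment below shows the shape of the problem; the recurrent architecture, feature count, and control dimensions are invented for illustration and are not CAVI's.

```python
# Schematic sketch, not CAVI's architecture: a recurrent model maps
# frames of muscle (EMG) and motion features to control trajectories
# for live sound-processing modules. Dimensions are invented.
import torch
import torch.nn as nn

class ControlSignalModel(nn.Module):
    def __init__(self, n_features=8, n_controls=4, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_controls)

    def forward(self, x):                  # x: (batch, time, n_features)
        h, _ = self.rnn(x)
        return torch.sigmoid(self.out(h))  # controls in [0, 1], e.g., delay mix

model = ControlSignalModel()
frames = torch.randn(1, 100, 8)    # 100 frames of EMG/motion features
controls = model(frames)           # (1, 100, 4) control trajectories
print(controls.shape)
```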
Citations: 0
Composing the Assemblage: Probing Aesthetic and Technical Dimensions of Artistic Creation with Machine Learning
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-11-10 DOI: 10.1162/comj_a_00658
Artemi-Maria Gioti, Aaron Einbond, Georgina Born
In this article we address the role of machine learning (ML) in the composition of two new musical works for acoustic instruments and electronics through autoethnographic reflection on the experience. Our study poses the key question of how ML shapes, and is in turn shaped by, the aesthetic commitments characterizing distinctive compositional practices. Further, we ask how artistic research in these practices can be informed by critical themes from humanities scholarship on material engagement and critical data studies. Through these frameworks, we consider in what ways the interaction with ML algorithms as part of the compositional process differs from that with other music technology tools. Rather than focus on narrowly conceived ML algorithms, we take into account the heterogeneous assemblage brought into play: from composers, performers, and listeners to loudspeakers, microphones, and audio descriptors. Our analysis focuses on a deconstructive critique of data as being contingent on the decisions and material conditions involved in the data creation process. It also explores how interaction among the human and nonhuman collaborators in the ML assemblage has significant similarities to—as well as differences from—existing models of material engagement. Tracking the creative process of composing these works, we uncover the aesthetic implications of the many nonlinear collaborative decisions involved in composing the assemblage.
Citations: 0
Complementary Roles of Note-Oriented and Mixing-Oriented Software in Student Learning of Computer Science plus Music
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-08-08 DOI: 10.1162/comj_a_00651
L. McCall, Jason Freeman, Tom McKlin, Taneisha Lee, Michael Horn, Brian Magerko
Many introductory computer science educational platforms foster student interest and facilitate student learning through the authentic incorporation of music. Although many such platforms have demonstrated promising outcomes in student engagement across diverse student populations and learning contexts, little is known about the specific ways in which music and computer science learning are uniquely combined to support student knowledge in both domains. This study looks at two different learning platforms for computer science and music (CS-plus-music), TunePad and EarSketch, which were used by middle school students during a week-long virtual summer camp. Using both platforms, students created computational music projects, which we analyzed for characteristics of music and code complexity across multiple dimensions. Students also completed surveys before and after the workshop about their perceptions of the platforms and their own backgrounds, and we interviewed some students. The results suggest that different connections between music and computing concepts emerge, as well as different progressions through the concepts themselves, depending in part on the design affordances of the application programming interface for computer music in each platform. Coupled with prior findings about the different roles each platform may play in developing situational interest for students, these findings suggest that different CS-plus-music learning platforms can provide complementary roles that benefit and support learning and development of student interest.
Citations: 0
Automatic Detection of Cue Points for the Emulation of DJ Mixing
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-08-08 DOI: 10.1162/comj_a_00652
Mickaël Zehren, Marco Alunno, P. Bientinesi
The automatic identification of cue points is a central task in applications as diverse as music thumbnailing, generation of mash-ups, and DJ mixing. Our focus lies in electronic dance music and in a specific kind of cue point, the “switch point,” that makes it possible to automatically construct transitions between tracks, mimicking what professional DJs do. We present two approaches for the detection of switch points. One embodies a few general rules we established from interviews with professional DJs; the other models a manually annotated dataset that we curated. Both approaches are based on feature extraction and novelty analysis. From an evaluation conducted on previously unknown tracks, we found that about 90% of the points generated can be reliably used in the context of a DJ mix.
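A minimal sketch of the general recipe named here, feature extraction plus novelty analysis, might look as follows; the librosa calls are standard, but the thresholds, beat-alignment tolerance, and input file are illustrative assumptions rather than the authors' system.

```python
# Hedged sketch, not the authors' system: keep novelty-curve peaks that
# land near detected beats as candidate switch points.
import librosa
import numpy as np
from scipy.signal import find_peaks

y, sr = librosa.load("track.mp3")                   # assumed input file
novelty = librosa.onset.onset_strength(y=y, sr=sr)
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)  # beat positions in frames

# Peaks in the novelty curve, roughly 1 s apart (hop length 512 assumed).
peaks, _ = find_peaks(novelty, height=np.percentile(novelty, 95),
                      distance=sr // 512)
# Keep peaks within two frames of a beat: plausible switch points.
candidates = [p for p in peaks if np.min(np.abs(beats - p)) <= 2]

times = librosa.frames_to_time(np.array(candidates), sr=sr)
print("candidate switch points (s):", np.round(times, 2))
```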
Citations: 0
Recordings
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2023-06-30 DOI: 10.4135/9781412972024.n2140
R. Feller
James Dashow’s second volume of Soundings in Pure Duration features works for electronic sounds, several of which are composed for instrumental or vocal soloists. The composer is well known in the electronic and computer music worlds and has produced a large amount of work over many decades. This release contains the last four works in the Soundings series, composed between 2014 and 2020, as well as the rerelease of “. . . At Other Times, the Distances,” an older, quadraphonic composition. This DVD contains stereo mixdowns and full 5.0-surround mixes for each of the five compositions. The stereo versions were all spatially enhanced to suggest a wider-than-normal audio field. Dashow is perhaps best known for his work with spatialization. According to the liner notes,
Citations: 0