
Computer Music Journal: Latest Publications

Embodying Spatial Sound Synthesis with AI in Two Compositions for Instruments and 3-D Electronics
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-01-17 DOI: 10.1162/comj_a_00664
Aaron Einbond, Thibaut Carpentier, Diemo Schwarz, Jean Bresson
The situated spatial presence of musical instruments has been well studied in the fields of acoustics and music perception research, but so far it has not been the focus of human-AI interaction. We respond critically to this trend by seeking to reembody interactive electronics using data derived from natural acoustic phenomena. Two musical works, composed for human soloist and computer-generated live electronics, are intended to situate the listener in an immersive sonic environment in which real and virtual sources blend seamlessly. To do so, we experimented with two contrasting reproduction setups: a surrounding Ambisonic loudspeaker dome and a compact spherical loudspeaker array for radiation synthesis. A large database of measured radiation patterns of orchestral instruments served as a training set for machine learning models to control spatially rich 3-D patterns for electronic sounds. These are exploited during performance in response to live sounds captured with a spherical microphone array and used to train computer models of improvisation and to trigger corpus-based spatial synthesis. We show how AI techniques are useful to utilize complex, multidimensional, spatial data in the context of computer-assisted composition and human-computer interactive improvisation.
Citations: 0
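The workflow this abstract describes — measured radiation patterns as training data, with live audio descriptors driving corpus-based spatial synthesis — can be caricatured as a corpus lookup. The sketch below matches invented descriptors (spectral centroid, RMS level) to invented first-order Ambisonic radiation gains by nearest neighbor; none of these numbers come from the paper, and the authors' actual machine learning models are far richer.

```python
import math

# Hypothetical corpus: (spectral centroid in Hz, RMS level) mapped to
# first-order spherical-harmonic radiation gains (W, X, Y, Z).
# All values are invented for illustration, not measured data.
corpus = [
    ((500.0, 0.2), (1.0, 0.3, 0.1, 0.0)),   # dark, quiet: mostly omni
    ((2000.0, 0.6), (1.0, 0.8, 0.2, 0.1)),  # bright, loud: frontal lobe
    ((4000.0, 0.9), (1.0, 0.9, 0.6, 0.3)),  # very bright: complex lobe
]

def nearest_pattern(descriptor, corpus):
    """Return the radiation gains of the corpus entry whose descriptors
    are closest (Euclidean, after crude normalization) to the live input."""
    def dist(a, b):
        # Divide the centroid axis by 1 kHz so both dimensions are comparable.
        return math.hypot((a[0] - b[0]) / 1000.0, a[1] - b[1])
    return min(corpus, key=lambda entry: dist(entry[0], descriptor))[1]

live = (1800.0, 0.55)  # descriptors analyzed from the live microphone feed
print(nearest_pattern(live, corpus))
```

In performance such a lookup would run per analysis frame, with the selected gain vector sent to the spatialization engine.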
Cocreative Interaction: Somax2 and the REACH Project
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-01-16 DOI: 10.1162/comj_a_00662
Gérard Assayag, Laurent Bonnasse-Gahot, Joakim Borg
Somax2 is an AI-based multiagent system for human-machine coimprovisation that generates stylistically coherent streams while continuously listening and adapting to musicians or other agents. The model on which it is based can be used with little configuration to interact with humans in full autonomy, but it also allows fine real-time control of its generative processes and interaction strategies, closer in this case to a “smart” digital instrument. An offspring of the Omax system, conceived at the Institut de Recherche et Coordination Acoustique/Musique, the Somax2 environment is part of the European Research Council Raising Cocreativity in Cyber-Human Musicianship (REACH) project, which studies distributed creativity as a general template for symbiotic interaction between humans and digital systems. It fosters mixed musical reality involving cocreative AI agents. The REACH project puts forward the idea that cocreativity in cyber-human systems results from the emergence of complex joint behavior, produced by interaction and featuring cross-learning mechanisms. Somax2 is a first step toward this ideal, and already shows life-size achievements. This article describes Somax2 extensively, from its theoretical model to its system architecture, through its listening and learning strategies, representation spaces, and interaction policies.
Citations: 0
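Somax2's actual engine is much richer than any toy (corpus navigation with continuous listening and multiagent adaptation), but the listen-learn-generate loop the abstract describes can be caricatured in a few lines. The note names and the first-order transition model below are illustrative only.

```python
import random
from collections import defaultdict

def learn(stream):
    """Build a first-order transition table from a symbol stream --
    a crude stand-in for the corpus-learning stage."""
    table = defaultdict(list)
    for a, b in zip(stream, stream[1:]):
        table[a].append(b)
    return table

def generate(table, seed_symbol, length, seed=0):
    """Walk the table, preferring learned continuations and falling back
    to the seed symbol when the current symbol has none."""
    rng = random.Random(seed)
    out = [seed_symbol]
    for _ in range(length - 1):
        choices = table.get(out[-1]) or [seed_symbol]
        out.append(rng.choice(choices))
    return out

heard = ["C4", "E4", "G4", "E4", "C4", "E4", "G4", "C5"]  # "listened" input
model = learn(heard)
print(generate(model, "C4", 8))
```

Because every continuation is drawn from observed transitions, the output stays inside the heard vocabulary — a minimal notion of the "stylistic coherence" the abstract refers to.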
Live Coding Machine Learning: Finding the Moments of Intervention in Autonomous Processes
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-01-16 DOI: 10.1162/comj_a_00663
Iván Paz, Shelly Knotts
Machine learning (ML) deals with algorithms able to learn from data, with the primary aim of finding optimum solutions to perform tasks autonomously. In recent years there has been development in integrating ML algorithms with live coding practices, raising questions about what to optimize or automate, the agency of the algorithms, and in which parts of the ML processes one might intervene midperformance. Live coding performance practices typically involve conversational interaction with algorithmic processes in real time. In analyzing systems integrating live coding and ML, we consider the musical and performative implications of the “moment of intervention” in the ML model and workflow, and the channels for real-time intervention. We propose a framework for analysis, through which we reflect on the domain-specific algorithms and practices being developed that combine these two practices.
Citations: 0
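The "moments of intervention" question — which parts of an ML process remain open to midperformance change — can be made concrete with a toy generative process that exposes two intervention channels: the learned model itself and the sampling strategy. All names and weights here are invented for illustration.

```python
import random

class LiveModel:
    """A toy generative process exposing two midperformance intervention
    channels: replacing the learned weights (the model) and steering the
    sampler's repeat bias (the generation strategy)."""
    def __init__(self, weights, seed=1):
        self.weights = dict(weights)  # intervention channel 1: the model
        self.repeat_bias = 1.0        # intervention channel 2: the sampler
        self.rng = random.Random(seed)
        self.last = None

    def step(self):
        syms = list(self.weights)
        # Scale the previous symbol's weight by the repeat bias.
        w = [self.weights[s] * (self.repeat_bias if s == self.last else 1.0)
             for s in syms]
        self.last = self.rng.choices(syms, weights=w, k=1)[0]
        return self.last

m = LiveModel({"kick": 3.0, "snare": 1.0})
pattern = [m.step() for _ in range(4)]
m.repeat_bias = 0.0  # live-coded intervention: forbid immediate repeats
pattern += [m.step() for _ in range(4)]
print(pattern)
```

A live coder typing the `repeat_bias = 0.0` line mid-run intervenes in the sampler without retraining; reassigning `m.weights` would intervene in the model itself.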
Tool or Actor? Expert Improvisers' Evaluation of a Musical AI “Toddler”
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-11-10 DOI: 10.1162/comj_a_00657
Çağrı Erdem, Benedikte Wallace, Kyrre Glette, Alexander Refsum Jensenius
In this article we introduce the coadaptive audiovisual instrument CAVI. This instrument uses deep learning to generate control signals based on muscle and motion data of a performer's actions. The generated signals control time-based live sound-processing modules. How does a performer perceive such an instrument? Does it feel like a machine learning-based musical tool? Or is it an actor with the potential to become a musical partner? We report on an evaluation of CAVI after it had been used in two public performances. The evaluation is based on interviews with the performers, audience questionnaires, and the creator's self-analysis. Our findings suggest that the perception of CAVI as a tool or actor correlates with the performer's sense of agency. The perceived agency changes throughout a performance based on several factors, including perceived musical coordination, the balance between surprise and familiarity, a “common sense,” and the physical characteristics of the performance setting.
Citations: 0
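CAVI itself uses deep learning to turn muscle and motion data into control signals; as a much simpler stand-in, the sketch below shows the general shape of that mapping chain — raw sensor stream, smoothed control signal, parameter of a time-based processing module. All constants and the example data are illustrative, not drawn from the article.

```python
def envelope_follower(samples, attack=0.5, release=0.05):
    """One-pole follower turning a raw (absolute) muscle-sensor stream
    into a smooth 0-1 control signal -- a crude stand-in for CAVI's
    learned signal generation."""
    env, out = 0.0, []
    for x in samples:
        coeff = attack if abs(x) > env else release  # fast rise, slow fall
        env += coeff * (abs(x) - env)
        out.append(env)
    return out

def to_delay_time(env, min_ms=10.0, max_ms=500.0):
    """Map a 0-1 control value onto a delay-line time, the kind of
    time-based processing parameter such signals might drive."""
    return min_ms + (max_ms - min_ms) * max(0.0, min(1.0, env))

emg = [0.0, 0.9, 0.8, 0.1, 0.0, 0.0]  # hypothetical muscle-sensor frames
ctl = envelope_follower(emg)
print([round(to_delay_time(e), 1) for e in ctl])
```

The asymmetric attack/release coefficients give the control signal the fast-onset, slow-decay behavior that keeps parameter changes musical rather than jittery.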
Composing the Assemblage: Probing Aesthetic and Technical Dimensions of Artistic Creation with Machine Learning
Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-11-10 DOI: 10.1162/comj_a_00658
Artemi-Maria Gioti, Aaron Einbond, Georgina Born
In this article we address the role of machine learning (ML) in the composition of two new musical works for acoustic instruments and electronics through autoethnographic reflection on the experience. Our study poses the key question of how ML shapes, and is in turn shaped by, the aesthetic commitments characterizing distinctive compositional practices. Further, we ask how artistic research in these practices can be informed by critical themes from humanities scholarship on material engagement and critical data studies. Through these frameworks, we consider in what ways the interaction with ML algorithms as part of the compositional process differs from that with other music technology tools. Rather than focus on narrowly conceived ML algorithms, we take into account the heterogeneous assemblage brought into play: from composers, performers, and listeners to loudspeakers, microphones, and audio descriptors. Our analysis focuses on a deconstructive critique of data as being contingent on the decisions and material conditions involved in the data creation process. It also explores how interaction among the human and nonhuman collaborators in the ML assemblage has significant similarities to—as well as differences from—existing models of material engagement.
Citations: 0
Finite State Machines with Data Paths in Visual Languages for Music 音乐视觉语言中带有数据路径的有限状态机
IF 0.4 Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-01 DOI: 10.1162/COMJ_a_00688
Tiago Fernandes Tavares;José Eduardo Fornari
Citations: 0
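The title concept — a finite state machine with a data path (FSMD) — pairs a small control automaton with data registers that are updated on each transition. A minimal musical illustration (invented here, not the paper's visual-language implementation) is an up-down arpeggiator: the control state chooses the direction, while the note-index register does the arithmetic.

```python
def arpeggiator(n_notes, ticks):
    """FSMD sketch: control states 'up'/'down' decide direction; the
    data register idx (a note index) is updated on every clock tick."""
    state, idx, out = "up", 0, []
    for _ in range(ticks):
        out.append(idx)
        if state == "up":
            idx += 1                # data-path operation
            if idx == n_notes - 1:  # control-transition condition
                state = "down"
        else:
            idx -= 1
            if idx == 0:
                state = "up"
    return out

print(arpeggiator(4, 10))  # indices sweep up then back down
```

Separating the control graph from the register operations is exactly what makes such machines convenient to draw in a visual language: the boxes are states, the edge annotations are data-path updates.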
Products of Interest
IF 0.4 Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-01 DOI: 10.1162/COMJ_r_00690
Citations: 0
Generating Sonic Phantoms with Quadratic Difference Tone Spectrum Synthesis
IF 0.4 Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-01 DOI: 10.1162/COMJ_a_00687
Esteban Gutiérrez;Christopher Haworth;Rodrigo F. Cádiz
Citations: 0
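The technique named in the title builds "phantom" percepts from quadratic difference tones: a quadratic nonlinearity (in the ear, or in an explicit squaring stage) applied to two primaries at f1 and f2 produces a component at f2 − f1. A naive sketch of the carrier-selection step is below; it ignores cross terms between pairs, and the base and spacing values are arbitrary choices, not taken from the paper.

```python
def carrier_pairs(phantom_freqs, base=2000.0, spacing=600.0):
    """For each target phantom frequency d, return a primary pair
    (f1, f1 + d): a quadratic nonlinearity applied to the pair yields
    a component at f2 - f1 = d. Successive pairs are offset by
    `spacing` so the primaries stay well above the phantom spectrum."""
    pairs = []
    for i, d in enumerate(phantom_freqs):
        f1 = base + i * spacing
        pairs.append((f1, f1 + d))
    return pairs

melody = [220.0, 247.5, 277.2]  # intended phantom (difference) tones
print(carrier_pairs(melody))
```

In a real synthesis setting the pairs would be rendered as sine tones; cross terms between simultaneous pairs also generate difference tones, which is one reason designing a full phantom *spectrum* is harder than this one-pair-per-partial picture suggests.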
About This Issue
IF 0.4 Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-01 DOI: 10.1162/COMJ_e_00689
Citations: 0
Using Music Features for Managing Revisions and Variants of Musical Scores
IF 0.4 Q4 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-01 DOI: 10.1162/COMJ_a_00691
Paul Grünbacher;Rudolf Hanl;Lukas Linsbauer
Citations: 0