
Latest publications: Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)

Comparing Meta-Classifiers for Automatic Music Genre Classification
Pub Date: 2019-09-25 DOI: 10.5753/sbcm.2019.10434
V. Y. Shinohara, J. Foleiss, T. Tavares
Automatic music genre classification is the problem of associating mutually exclusive labels with audio tracks. This process fosters the organization of collections and facilitates searching and marketing music. One approach for automatic music genre classification is to use diverse vector representations for each track and classify each representation individually. After that, a majority voting system can be used to infer a single label for the whole track. In this work, we evaluated the impact of replacing the majority voting system with a meta-classifier. The classification results with the meta-classifier showed statistically significant improvements relative to the majority-voting classifier. This indicates that the higher-level information used by the meta-classifier might be relevant for automatic music genre classification.
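The two aggregation schemes can be sketched as follows. This is an illustrative toy, not the paper's actual models: the genre names, the prototype vote distributions, and the nearest-centroid meta-rule are all invented for the example. The key idea it shows is that the meta-classifier sees the whole distribution of segment votes, whereas the baseline keeps only the most frequent label.

```python
from collections import Counter

def majority_vote(segment_labels):
    """Baseline: label the whole track with the most common segment label."""
    return Counter(segment_labels).most_common(1)[0][0]

def vote_histogram(segment_labels, genres):
    """Higher-level feature for a meta-classifier: the distribution
    of per-segment predictions over all genres."""
    counts = Counter(segment_labels)
    return [counts[g] / len(segment_labels) for g in genres]

def nearest_centroid_meta(hist, centroids):
    """Toy meta-classifier: pick the genre whose prototype vote
    distribution (here assumed learned from training tracks) is closest."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda g: sqdist(hist, centroids[g]))

# Hypothetical per-segment predictions for one track:
segments = ["jazz", "rock", "jazz", "jazz", "rock"]
genres = ["jazz", "rock"]
centroids = {"jazz": [0.7, 0.3], "rock": [0.2, 0.8]}  # invented prototypes

print(majority_vote(segments))                                            # jazz
print(nearest_centroid_meta(vote_histogram(segments, genres), centroids)) # jazz
```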
Citations: 5
Computer Music research at FEEC/Unicamp: a snapshot of 2019
Pub Date: 2019-09-25 DOI: 10.5753/sbcm.2019.10438
T. Tavares, B. Masiero
This is a lab report paper about the state of affairs of the computer music research group at the School of Electrical and Computer Engineering of the University of Campinas (FEEC/Unicamp). The report discusses the people involved in the group, its teaching efforts, and the research work currently performed. Lastly, it offers some discussion of the lessons learned over the past few years and some pointers for future work.
Citations: 1
Ha Dou Ken Music: Mapping a joysticks as a musical controller
Pub Date: 2019-09-25 DOI: 10.5753/sbcm.2019.10425
Gabriel Lopes Rocha, J. Araújo, F. Schiavoni
The structure of a digital musical instrument (DMI) can be split into three parts: interface, mapping, and synthesizer. For DMIs whose sound synthesis is done via software, the interaction interface serves to capture the performer's gestures, which can be mapped with various techniques to different sounds. In this work, we bring video game controllers as an interface for musical interaction. Due to their strong presence in popular culture and their ease of access, even people who are not in the habit of playing electronic games have likely interacted with this kind of interface at some point. Thus, gestures like pressing a sequence of buttons, pressing several buttons simultaneously, or sliding fingers across the controller can be mapped for musical creation. This work aims at elaborating a strategy in which several gestures captured by the interface can influence one or several parameters of the sound synthesis, yielding a mapping termed many-to-many. Button combinations used to perform actions that are common in fighting games, like Street Fighter, were mapped to the synthesizer to create music. Experiments show that this mapping is capable of influencing the musical expression of a DMI, bringing it closer to an acoustic instrument.
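One possible shape for combo detection and a many-to-many mapping is sketched below. The combo sequence, the gesture fields, and the synthesis parameters are invented for illustration and are not the authors' actual mapping; the point is that one gesture (the stick) drives several parameters, and one parameter (the filter cutoff) is driven by several gestures.

```python
HADOUKEN = ("down", "down-forward", "forward", "punch")

def detect_combo(press_history, combo=HADOUKEN):
    """True when the most recent button presses spell out the combo."""
    return tuple(press_history[-len(combo):]) == combo

def map_gestures(state):
    """Many-to-many mapping: each gesture influences several synthesis
    parameters, and each parameter is influenced by several gestures."""
    params = {"pitch": 0.0, "cutoff": 0.0, "amp": 0.0}
    x = state.get("x_axis", 0.0)          # analog stick, -1.0 .. 1.0
    params["pitch"] += x * 12.0           # stick bends pitch (semitones)...
    params["cutoff"] += abs(x) * 0.5      # ...and also opens the filter
    n = len(state.get("buttons_down", ()))
    params["amp"] += min(1.0, 0.3 * n)    # chords of buttons raise amplitude
    params["cutoff"] += 0.1 * n           # ...and brighten the sound too
    return params

print(detect_combo(["punch", "down", "down-forward", "forward", "punch"]))  # True
print(map_gestures({"x_axis": 0.5, "buttons_down": ["A", "B"]}))
```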
Citations: 0
Per(sino)ficação
Pub Date: 2019-09-25 DOI: 10.5753/sbcm.2019.10456
Fábio Dos Passos Carvalho, F. Schiavoni, João Teixeira
Bell culture is a centuries-old tradition strongly linked to the religious and social activities of old Brazilian villages. In São João del-Rei, where a singular bell tradition composes the soundscape of the city, bell ringings created from different rhythmic and timbral patterns establish a language capable of transmitting varied types of messages to the local population. The social function of these ringings, added to real or legendary facts related to bell culture, produced affective bonds and a strong relation with the identity of the community. The link of this community with the bells therefore transcends the person-object relationship, tending in practice toward an interpersonal relationship. Thus, to emphasize this connection in an artistic way, an installation called PER(SINO)FICAÇÃO is proposed. It consists of an environment where users have their physical attributes collected through computer vision. By coupling these data with timbral attributes of the bells, visitors are able to sound like them, through mapped bodily attributes capable of driving syntheses based on original samples of the bells. Thus the inverse sense of the personification of the bell is realized, producing a human “bellification”.
Citations: 0
A cluster analysis of benchmark acoustic features on Brazilian music
Pub Date: 2019-09-25 DOI: 10.5753/sbcm.2019.10444
Leonardo Antunes Ferreira, Estela Ribeiro, C. Thomaz
In this work, we extend a standard and successful acoustic feature extraction approach based on trigger selection to examples of Brazilian Bossa Nova and Heitor Villa-Lobos music pieces. Additionally, we propose and implement a computational framework to disclose whether all the acoustic features extracted are statistically relevant, that is, non-redundant. Our experimental results show that not all these well-known features may be necessary for trigger selection, given the multivariate statistical redundancy found, which grouped all these acoustic features into 3 clusters, each with different factor loadings and, consequently, different representatives.
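The idea of grouping redundant features and keeping one representative per group can be illustrated with a minimal sketch. The paper's framework uses multivariate factor analysis; the greedy correlation-threshold grouping and the toy feature vectors below are invented stand-ins for that process (features are assumed non-constant so the correlation is defined).

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def redundancy_clusters(features, threshold=0.9):
    """Greedy grouping: a feature joins the first cluster whose
    representative it correlates with above |threshold|; otherwise
    it founds a new cluster and becomes its representative."""
    clusters = []  # each entry: (representative_name, [member names])
    for name, values in features.items():
        for rep_name, members in clusters:
            if abs(pearson(features[rep_name], values)) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((name, [name]))
    return clusters

# Toy data: "energy" and "loudness" are perfectly correlated (redundant),
# "zcr" is not, so two clusters with two representatives emerge.
feats = {"energy": [1, 2, 3, 4], "loudness": [2, 4, 6, 8], "zcr": [1, -1, 1, -1]}
print(redundancy_clusters(feats))
```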
Citations: 1
Prototyping Web instruments with Mosaicode
Pub Date: 2019-09-25 DOI: 10.5753/sbcm.2019.10431
A. Gomes, F. Resende, L. Goncalves, F. Schiavoni
Many HTML5 features make it possible to build audio applications for web browsers, simplifying the distribution of these applications and turning any computer, mobile, or portable device into a digital musical instrument. Developing such applications is not an easy task for lay programmers or non-programmers and may require considerable effort by musicians and artists to encode audio applications based on HTML5 technologies and APIs. In order to simplify this task, this paper presents Mosaicode, a visual programming environment that enables the development of Digital Musical Instruments using the visual programming paradigm. Applications can be developed in Mosaicode from diagrams composed of blocks, which encapsulate basic programming functions, and connections, which exchange information among the blocks. Because Mosaicode can generate, compile, and execute code, it can be used to quickly prototype musical instruments, and it is easy to use both for beginners learning to program and for expert developers who need to speed up the construction of musical applications.
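The block-and-connection paradigm can be illustrated with a toy dataflow evaluator. This sketch is not Mosaicode's actual code-generation engine; the block names and the example patch are invented. Each block is a function, each connection feeds one block's output into another block's inputs, and evaluating an output pulls values through the graph.

```python
def run_patch(blocks, connections, outputs):
    """Evaluate a dataflow patch: `blocks` maps names to functions,
    `connections` is a list of (source_block, destination_block) pairs,
    and each requested output is computed by recursively pulling inputs."""
    cache = {}
    def evaluate(name):
        if name not in cache:
            inputs = [evaluate(src) for src, dst in connections if dst == name]
            cache[name] = blocks[name](*inputs)
        return cache[name]
    return {name: evaluate(name) for name in outputs}

# A tiny invented "patch": an oscillator frequency, transposed up an
# octave, then rendered as a display label.
blocks = {
    "freq":   lambda: 440.0,
    "octave": lambda f: f * 2.0,
    "label":  lambda f: f"{f:.0f} Hz",
}
connections = [("freq", "octave"), ("octave", "label")]
print(run_patch(blocks, connections, ["label"]))  # {'label': '880 Hz'}
```

The cache makes the patch a directed acyclic graph evaluation: a block connected to several destinations is computed once and its result fanned out, which mirrors how one outlet in a visual patch can feed many inlets.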
Citations: 2
J-Analyzer: A Software for Computer-Assisted Analysis of Antônio Carlos Jobims Songs
Pub Date: 2019-09-25 DOI: 10.5753/sbcm.2019.10416
C. Almada, João Penchel, Igor Chagas, Max Kühn, Claudia Usai, Eduardo Cabral, Vinicius Braga, Ana Miccolis
The present paper describes the structure and functioning of J-Analyzer, a computational tool for assisted analysis. It is part of a research project intended to investigate the complete song collection of Brazilian composer Antônio Carlos Jobim, focusing on the aspect of harmonic transformation. The program is used to determine the nature of the transformational relations between any pair of chords present in a song, as well as the structure of the chords themselves.
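As a generic illustration of detecting one kind of transformational relation between two chords, the sketch below tests for pitch-class transposition. The abstract does not specify J-Analyzer's internal representation or its transformation taxonomy, so both the pitch-class-set encoding and the transposition test here are assumptions for the example only.

```python
def transposition(chord_a, chord_b):
    """If chord_b equals chord_a transposed by n semitones (compared as
    pitch-class sets, octave-insensitive), return n; otherwise None."""
    a = {p % 12 for p in chord_a}
    b = {p % 12 for p in chord_b}
    for n in range(12):
        if {(p + n) % 12 for p in a} == b:
            return n
    return None

print(transposition([0, 4, 7], [2, 6, 9]))   # C major -> D major: 2
print(transposition([0, 4, 7], [0, 3, 7]))   # C major -> C minor: None
```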
Citations: 2
A retrospective of the research on musical expression conducted at CEGeME
Pub Date: 2019-09-25 DOI: 10.5753/sbcm.2019.10440
M. Loureiro, T. Magalhaes, Davi Mota, T. Campolina, Aluizio Oliveira
CEGeME, the Center for Research on Musical Gesture and Expression, has been affiliated with the Graduate Program in Music of the Universidade Federal de Minas Gerais (UFMG), hosted by the School of Music in Belo Horizonte, Brazil, since 2008. Focused on the empirical investigation of music performance, research at CEGeME starts from musical content information extracted from audio signals and from the three-dimensional spatial position of musicians, recorded during a music performance. Our laboratories are properly equipped for the acquisition of such data. Aiming to establish a musicological approach to different aspects of musical expressiveness, we investigate causal relations between the expressive intention of musicians, the way they manipulate the acoustic material, and how they move while playing a piece of music. The methodology draws on knowledge such as computational modeling, statistical analysis, and digital signal processing, which adds to traditional musicology skills. The group has attracted students from different specialties, such as Computer Science, Engineering, Physics, Phonoaudiology, and Music Therapy, as well as collaborations with professional musicians motivated by specific inquiries about performance on their instruments. This paper presents a brief retrospective of the different research projects conducted at CEGeME.
Citations: 3
Automatic onset detection using convolutional neural networks
Pub Date: 2019-09-25 DOI: 10.5753/sbcm.2019.10446
W. Cornelissen, M. Loureiro
A very significant task for music research is to estimate the instants when meaningful events begin (onset) and when they end (offset). Onset detection is widely applied in many fields: electrocardiograms, seismographic data, stock market results, and many Music Information Research (MIR) tasks, such as Automatic Music Transcription, Rhythm Detection, Speech Recognition, etc. Automatic Onset Detection (AOD) has recently received huge contributions from Artificial Intelligence (AI) methods, mainly Machine Learning and Deep Learning. In this work, the use of Convolutional Neural Networks (CNN) is explored by adapting their original architecture in order to apply the approach to automatic onset detection on musical audio signals. We used a CNN for onset detection on a very general dataset, well acknowledged by the MIR community, and examined the accuracy of the method by comparison with the ground-truth data published with the dataset. The results are promising and outperform other methods of musical onset detection.
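The CNN itself is beyond a short sketch, but the task can be illustrated with a classical energy-based novelty baseline: compute frame energies, take the half-wave-rectified increase between consecutive frames, and pick peaks above a threshold. This generic baseline is one of the kinds of methods such networks are compared against, not the paper's network; the frame size and threshold are arbitrary choices for the example.

```python
def onset_strength(signal, frame=256):
    """Energy-based novelty: half-wave-rectified increase in frame energy."""
    energies = [sum(s * s for s in signal[i:i + frame])
                for i in range(0, len(signal) - frame + 1, frame)]
    return [max(0.0, e1 - e0) for e0, e1 in zip(energies, energies[1:])]

def pick_onsets(novelty, threshold):
    """An onset frame is a local maximum of the novelty above the threshold."""
    onsets = []
    for i in range(1, len(novelty) - 1):
        if novelty[i] > threshold and novelty[i] >= novelty[i - 1] and novelty[i] > novelty[i + 1]:
            onsets.append(i)
    return onsets

# Toy signal: silence, then a loud sustained burst -> one detected onset.
signal = [0.0] * 512 + [1.0] * 512
print(pick_onsets(onset_strength(signal), threshold=10.0))  # [1]
```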
Citations: 0
PSYCHO library for Pure Data
Pub Date: 2019-09-25 DOI: 10.5753/sbcm.2019.10432
Alexandre Torres Porres
This paper describes the PSYCHO library for the Pure Data programming language. The library provides novel functions for Pure Data and is a collection of compiled objects, abstractions, and patches that include psychoacoustic models and conversions. Most notably, it provides models related to Sensory Dissonance, such as Sharpness, Roughness, Tonalness, and Pitch Commonality. The library is an evolution and revision of earlier research work developed during a masters and PhD program. The previous developments had not been made easily available as a single, well-documented library. Moreover, the work went through a major overhaul, got rid of the dependence on Pd Extended (now abandoned and unsupported software), and provides new features. This paper describes the evolution of the early work into the PSYCHO library and presents its main objects, functions, and contributions.
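As background for the Roughness model mentioned above: sensory roughness of a spectrum is commonly computed by summing a Plomp–Levelt dissonance curve over all pairs of partials. The sketch below uses Sethares' well-known parameterization of that curve; it is a generic textbook formulation, not necessarily PSYCHO's exact implementation, and the example partials are invented.

```python
from math import exp

def pair_roughness(f1, a1, f2, a2):
    """Plomp-Levelt roughness of two partials (Sethares parameterization):
    zero at unison, peaking at a small frequency separation that depends
    on the lower frequency, then decaying for wide intervals."""
    lo, hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * lo + 19.0)          # scales the critical bandwidth
    x = s * (hi - lo)
    return a1 * a2 * (exp(-3.5 * x) - exp(-5.75 * x))

def spectrum_roughness(partials):
    """Total sensory roughness: sum over all pairs of (freq, amp) partials."""
    total = 0.0
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            f1, a1 = partials[i]
            f2, a2 = partials[j]
            total += pair_roughness(f1, a1, f2, a2)
    return total

# Two close partials beat roughly; an octave apart is far smoother.
print(spectrum_roughness([(440.0, 1.0), (460.0, 1.0)]))
print(spectrum_roughness([(440.0, 1.0), (880.0, 1.0)]))
```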
Citations: 0