Automatic music genre classification is the problem of assigning mutually exclusive labels to audio tracks. This process fosters the organization of collections and facilitates searching and marketing music. One approach to automatic music genre classification is to compute diverse vector representations for each track and classify each one individually. After that, a majority voting system can be used to infer a single label for the whole track. In this work, we evaluated the impact of replacing the majority voting system with a meta-classifier. The classification results with the meta-classifier showed statistically significant improvements over majority voting. This indicates that the higher-level information used by the meta-classifier might be relevant for automatic music genre classification.
{"title":"Comparing Meta-Classifiers for Automatic Music Genre Classification","authors":"V. Y. Shinohara, J. Foleiss, T. Tavares","doi":"10.5753/sbcm.2019.10434","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10434","url":null,"abstract":"Automatic music genre classification is the problem of associating mutually-exclusive labels to audio tracks. This process fosters the organization of collections and facilitates searching and marketing music. One approach for automatic music genre classification is to use diverse vector representations for each track, and then classify them individually. After that, a majority voting system can be used to infer a single label to the whole track. In this work, we evaluated the impact of changing the majority voting system to a meta-classifier. The classification results with the meta-classifier showed statistically significant improvements when related to the majority-voting classifier. This indicates that the higher-level information used by the meta-classifier might be relevant for automatic music genre classification.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115129886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This is a lab report paper about the state of affairs in the computer music research group at the School of Electrical and Computer Engineering of the University of Campinas (FEEC/Unicamp). The report discusses the people involved in the group, its teaching efforts, and its current research work. Finally, it provides some discussion of the lessons learned over the past few years and some pointers for future work.
{"title":"Computer Music research at FEEC/Unicamp: a snapshot of 2019","authors":"T. Tavares, B. Masiero","doi":"10.5753/sbcm.2019.10438","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10438","url":null,"abstract":"This is a lab report paper about the state of affairs in the computer music research group at the School of Electrical and Computer Engineering of the University of Campinas (FEEC/Unicamp). This report discusses the people involved in the group, the efforts in teaching and the current research work performed. Last, it provides some discussions on the lessons learned from the past few years and some pointers for future work.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"469 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125840723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The structure of a digital musical instrument (DMI) can be split into three parts: interface, mapping, and synthesizer. For DMIs in which sound synthesis is done via software, the interaction interface serves to capture the performer's gestures, which can be mapped through various techniques to different sounds. In this work, we bring video game controllers in as an interface for musical interaction. Due to their strong presence in popular culture and their ease of access, even people who are not in the habit of playing electronic games have likely interacted with this kind of interface at some point. Thus, gestures like pressing a sequence of buttons, pressing buttons simultaneously, or sliding fingers across the controller can be mapped for musical creation. This work aims to elaborate a strategy in which several gestures captured by the interface can influence one or several parameters of the sound synthesis, a mapping denominated many-to-many. Button combinations used to perform actions common in fighting games, like Street Fighter, were mapped to the synthesizer to create music. Experiments show that this mapping is capable of influencing the musical expression of a DMI, bringing it closer to an acoustic instrument.
{"title":"Ha Dou Ken Music: Mapping a joysticks as a musical controller","authors":"Gabriel Lopes Rocha, J. Araújo, F. Schiavoni","doi":"10.5753/sbcm.2019.10425","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10425","url":null,"abstract":"The structure of a digital musical instrument (DMI) can be splitted up in three parts: interface, mapping and synthesizer. For DMI’s, in which sound synthesis is done via software, the interaction interface serves to capture the performer’s gestures, which can be mapped under various techniques to different sounds. In this work, we bring videogame controls as an interface for musical interaction. Due to its great presence in popular culture and its ease of access, even people who are not in the habit of playing electronic games possibly interacted with this kind of interface once in a lifetime. Thus, gestures like pressing a sequence of buttons, pressing them simultaneously or sliding your fingers through the control can be mapped for musical creation. This work aims the elaboration of a strategy in which several gestures captured by the interface can influence one or several parameters of the sound synthesis, making a mapping denominated many to many. Buttons combinations used to perform game actions that are common in fighting games, like Street Fighter, were mapped to the synthesizer to create a music. Experiments show that this mapping is capable of influencing the musical expression of a DMI making it closer to an acoustic instrument.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126643360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bell culture is a centuries-old tradition strongly linked to the religious and social activities of old Brazilian villages. In São João del-Rei, where a singular bell tradition composes the soundscape of the city, bell ringings built from different rhythmic and timbral patterns establish a language capable of transmitting various types of messages to the local population. The social function of these ringings, added to real or legendary facts related to bell culture, has produced affection and constitutes a strong relation with the identity of the community. The link between this community and the bells therefore transcends the person-object relationship, tending in practice toward an interpersonal relationship. Thus, to emphasize this connection in an artistic way, we propose the installation PER(SINO)FICAÇÃO. It consists of an environment in which users have their physical attributes collected through computer vision. By interlocking these data with timbral attributes of the bells, visitors are able to sound like them, through mapped bodily attributes that drive syntheses based on original samples of the bells. Thus the inverse of the personification of the bell is realized, producing a human "bellification".
{"title":"Per(sino)ficação","authors":"Fábio Dos Passos Carvalho, F. Schiavoni, João Teixeira","doi":"10.5753/sbcm.2019.10456","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10456","url":null,"abstract":"The bell’s culture is a secular tradition strongly linked to the religious and social activities of the old Brazilian’s villages. In São João del-Rei, where the singular bell tradition composes the soundscape of the city, the bell’s ringing created from different rhythmic and timbral patterns, establish a language capable of transmitting varied types of messages to the local population. In this way, the social function of these ringing, added to real or legendary facts related to the bell’s culture, were able to produce affections and to constitute a strong relation with the identity of the community. The link of this community with the bells, therefore transcends the man-object relationship, tending to an interpersonal relationship practically. Thus, to emphasize this connection in an artistic way, it is proposed the installation called: PER (SINO) FICAÇÂO. This consists of an environment where users would have their physical attributes collected through the use of computer vision. From the interlocking of these data with timbral attributes of the bells, visitors would be able to sound like these, through mapped bodily attributes capable of performing syntheses based on original samples of the bells. Thus the inverse sense of the personification of the bell is realized, producing the human “bellification”.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122622506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, we extend a standard and successful acoustic feature extraction approach based on trigger selection to examples of Brazilian Bossa Nova and Heitor Villa-Lobos music pieces. Additionally, we propose and implement a computational framework to disclose whether all the acoustic features extracted are statistically relevant, that is, non-redundant. Our experimental results show that not all of these well-known features might be necessary for trigger selection, given the multivariate statistical redundancy found, which grouped the acoustic features into 3 clusters with different factor loadings and, consequently, different representatives.
{"title":"A cluster analysis of benchmark acoustic features on Brazilian music","authors":"Leonardo Antunes Ferreira, Estela Ribeiro, C. Thomaz","doi":"10.5753/sbcm.2019.10444","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10444","url":null,"abstract":"In this work, we extend a standard and successful acoustic feature extraction approach based on trigger selection to examples of Brazilian Bossa-Nova and Heitor Villa Lobos music pieces. Additionally, we propose and implement a computational framework to disclose whether all the acoustic features extracted are statistically relevant, that is, non-redundant. Our experimental results show that not all these well-known features might be necessary for trigger selection, given the multivariate statistical redundancy found, which associated all these acoustic features into 3 clusters with different factor loadings and, consequently, representatives.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124747080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many HTML5 features enable building audio applications for web browsers, simplifying the distribution of these applications and turning any computer, mobile, or portable device into a digital musical instrument. Developing such applications is not an easy task for lay programmers or non-programmers, and it may require considerable effort from musicians and artists to code audio applications based on HTML5 technologies and APIs. In order to simplify this task, this paper presents Mosaicode, a visual programming environment that enables the development of digital musical instruments using the visual programming paradigm. Applications are developed in Mosaicode as diagrams of blocks, which encapsulate basic programming functions, and connections, which exchange information among the blocks. Because Mosaicode can generate, compile, and execute code, it can be used to quickly prototype musical instruments, and it suits both beginners looking to learn programming and expert developers who need to speed up the construction of musical applications.
{"title":"Prototyping Web instruments with Mosaicode","authors":"A. Gomes, F. Resende, L. Goncalves, F. Schiavoni","doi":"10.5753/sbcm.2019.10431","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10431","url":null,"abstract":"Many HTML 5 features enable you to build audio applications for web browsers, simplifying the distribution of these applications, and turning any computer, mobile, and portable device into a digital musical instrument. Developing such applications is not an easy task for layprogrammers or non-programmers and may require some effort by musicians and artists to encode audio applications based on HTML5 technologies and APIs. In order to simplify this task, this paper presents the Mosaicode, a Visual programming environment that enables the development of Digital Musical Instruments using the visual programming paradigm. Applications can be developed in the Mosaicode from diagrams – blocks, which encapsulate basic programming functions, and connections, to exchange information among the blocks. The Mosaicode, by having the functionality of generating, compiling and executing codes, can be used to quickly prototype musical instruments, and make it easy to use for beginners looking for learn programming and expert developers who need to optimize the construction of musical applications.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124441119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The present paper describes the structure and functioning of J-Analyzer, a computational tool for assisted analysis. It is part of a research project that investigates the complete song collection of Brazilian composer Antônio Carlos Jobim, focusing on the aspect of harmonic transformation. The program is used to determine the nature of the transformational relations between any pair of chords present in a song, as well as the structure of the chords themselves.
{"title":"J-Analyzer: A Software for Computer-Assisted Analysis of Antônio Carlos Jobims Songs","authors":"C. Almada, João Penchel, Igor Chagas, Max Kühn, Claudia Usai, Eduardo Cabral, Vinicius Braga, Ana Miccolis","doi":"10.5753/sbcm.2019.10416","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10416","url":null,"abstract":"The present paper describes structure and functioning of J-Analyzer, a computational tool for assistedanalysis. It integrates a research project intended to investigate the complete song collection by Brazilian composer Antônio Carlos Jobim, focusing on the aspect of harmonic transformation. The program is used to determine the nature of transformational relations between any chordal pair of chords present in a song, as well as the structure of the chords themselves.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126434613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CEGeME - Center for Research on Musical Gesture and Expression has been affiliated with the Graduate Program in Music of the Universidade Federal de Minas Gerais (UFMG), hosted by the School of Music, Belo Horizonte, Brazil, since 2008. Focused on the empirical investigation of music performance, research at CEGeME starts from musical content information extracted from audio signals and from the three-dimensional spatial position of musicians, recorded during a music performance. Our laboratories are properly equipped for the acquisition of such data. Aiming to establish a musicological approach to different aspects of musical expressiveness, we investigate causal relations between the expressive intention of musicians and the way they manipulate the acoustic material and move while playing a piece of music. The methodology draws on computational modeling, statistical analysis, and digital signal processing, which complement traditional musicology skills. The group has attracted applicants from different specialties, such as Computer Science, Engineering, Physics, Phonoaudiology, and Music Therapy, as well as collaborations with professional musicians prompted by specific inquiries into performance on their instruments. This paper presents a brief retrospective of the different research projects conducted at CEGeME.
{"title":"A retrospective of the research on musical expression conducted at CEGeME","authors":"M. Loureiro, T. Magalhaes, Davi Mota, T. Campolina, Aluizio Oliveira","doi":"10.5753/sbcm.2019.10440","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10440","url":null,"abstract":"CEGeME - Center for Research on Musical Gesture and Expression is affiliated to the Graduate Program in Music of the Universidade Federal de Minas Gerais (UFMG), hosted by the School of Music, Belo Horizonte, Brazil, since 2008. Focused on the empirical investigation of music performance, research at CEGeME departs from musical content information extracted from audio signals and three-dimensional spatial position of musicians, recorded during a music performance. Our laboratories are properly equipped for the acquisition of such data. Aiming at establishing a musicological approach to different aspects of musical expressiveness, we investigate causal relations between the expressive intention of musicians and the way they manipulate the acoustic material and how they move while playing a piece of music. The methodology seeks support on knowledge such as computational modeling, statistical analysis, and digital signal processing, which adds to traditional musicology skills. The group has attracted study postulants from different specialties, such as Computer Science, Engineering, Physics, Phonoaudiology and Music Therapy, as well as collaborations from professional musicians instigated by specific inquiries on the performance on their instruments. This paper presents a brief retrospective of the different research projects conducted at CEGeME.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134045285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A very significant task for music research is to estimate the instants when meaningful events begin (onset) and when they end (offset). Onset detection is widely applied in many fields: electrocardiograms, seismographic data, stock market analysis, and many Music Information Retrieval (MIR) tasks, such as automatic music transcription, rhythm detection, and speech recognition. Automatic onset detection (AOD) has recently benefited greatly from artificial intelligence (AI) methods, mainly machine learning and deep learning. In this work, the use of convolutional neural networks (CNNs) is explored by adapting their original architecture in order to apply the approach to automatic onset detection on musical audio signals. We used a CNN for onset detection on a very general dataset, well acknowledged by the MIR community, and examined the accuracy of the method by comparison to the ground truth published with the dataset. The results are promising and outperform other methods of musical onset detection.
{"title":"Automatic onset detection using convolutional neural networks","authors":"W. Cornelissen, M. Loureiro","doi":"10.5753/sbcm.2019.10446","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10446","url":null,"abstract":"A very significant task for music research is to estimate instants when meaningful events begin (onset) and when they end (offset). Onset detection is widely applied in many fields: electrocardiograms, seismographic data, stock market results and many Music Information Research(MIR) tasks, such as Automatic Music Transcription, Rhythm Detection, Speech Recognition, etc. Automatic Onset Detection(AOD) received, recently, a huge contribution coming from Artificial Intelligence (AI) methods, mainly Machine Learning and Deep Learning. In this work, the use of Convolutional Neural Networks (CNN) is explored by adapting its original architecture in order to apply the approach to automatic onset detection on audio musical signals. We used a CNN network for onset detection on a very general dataset, well acknowledged by the MIR community, and examined the accuracy of the method by comparison to ground truth data published by the dataset. The results are promising and outperform another methods of musical onset detection.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"190 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125843574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes the PSYCHO library for the Pure Data programming language. The library provides novel functions for Pure Data and is a collection of compiled objects, abstractions, and patches that include psychoacoustic models and conversions. Most notably, it provides models related to sensory dissonance, such as sharpness, roughness, tonalness, and pitch commonality. The library is an evolution and revision of earlier research work developed during a master's and a PhD program. The previous developments had not been made easily available as a single, well-documented library. Moreover, the work went through a major overhaul, got rid of the dependence on Pd-extended (now abandoned and unsupported software), and provides new features. This paper describes the evolution of the early work into the PSYCHO library and presents its main objects, functions, and contributions.
{"title":"PSYCHO library for Pure Data","authors":"Alexandre Torres Porres","doi":"10.5753/sbcm.2019.10432","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10432","url":null,"abstract":"This paper describes the PSYCHO library for the Pure Data programming language. This library provides novel functions for Pure Data and is a collection of compiled objects, abstractions and patches that include psychoacoustic models and conversions. Most notably, it provides models related to Sensory Dissonance, such as Sharpness, Roughness, Tonalness and Pitch Commonality. This library is an evolution and revision of earlier research work developed during a masters and PhD program. The previous developments had not been made easily available as a single and well documented library. Moreover, the work went through a major overhaul, got rid of the dependance of Pd Extended (now an abandoned and unsupported software) and provides new features. This paper describes the evolution of the early work into the PSYCHO library and presents its main objects, functions and contributions.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122655196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}