Creating a digital musical instrument is an activity that has attracted musicians and researchers for decades, and it still poses open problems that this research keeps addressing. Admittedly, building an instrument can be a simple task of arranging an interface so that it produces sound. However, such a simple interface may fall short of what we consider a musical instrument once we recognize that producing sound may not be the only thing we seek when playing an instrument.
{"title":"Expressividade de instrumentos musicais digitais - Just push play","authors":"Gabriel Lopes Rocha, F. Schiavoni","doi":"10.5753/sbcm.2021.19458","DOIUrl":"https://doi.org/10.5753/sbcm.2021.19458","url":null,"abstract":"A criação de um instrumento musical digital é uma atividade que atrai músicos e pesquisadores há décadas e certamente possui problemas em abertos sobre os quais estas pesquisas tem se debruçado. Certamente, criar um instrumento pode ser uma tarefa simples que consiste em organizar uma interface de forma que ela sirva para fazer som. No entanto, esta interface simples pode estar distante do que consideramos um instrumento musical quando pensamos que gerar som pode não ser a única coisa que buscamos ao tocar um instrumento.","PeriodicalId":292360,"journal":{"name":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","volume":"24 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132606276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lick the Toad is an ongoing project developed as a web-based interface that runs in modern browsers. It provides a custom-made platform to collect user data accessed from mobile devices such as smartphones and tablets. The system offers a tool for interactive collective sonification supporting the idea of networked music performance. It can be used in various contexts, such as on-site installations, as an interactive compositional tool, or for the distribution of raw data for live coding performances. The system embeds neural network capabilities for prediction purposes, using user input and outputs/targets alike. The inputs and targets of the training process can be adapted according to the needs of the user, making it a versatile component for creative practice. It is developed as an open-source project and currently works as a NodeJS application, with plans for future deployment on a remote server to support communication and interaction among distant users.
{"title":"Lick the Toad: a web-based interface for collective sonification","authors":"K. Vasilakos","doi":"10.5753/sbcm.2021.19444","DOIUrl":"https://doi.org/10.5753/sbcm.2021.19444","url":null,"abstract":"Lick the Toad is an ongoing project developed as a web based interface that runs in modern browsers. It provides a custom made platform to collect user data accessed from mobile devices, such as smartphones, tablets etc. The system offers a tool for interactive collective sonification aiding the idea of networked music performance. It can be used in various contexts, such as onsite installation, interactive compositional tool, or for the distribution of raw data for live coding performances. The system embeds neural network capabilities for prediction purposes by using user input and outputs/targets alike. The inputs and the targets of the training processes can be adapted according to the needs of the use making it a versatile component for creative practice. It is developed as open-source project and it works currently as a NodeJS application with plans for future deployment on remote server to support remote communication and interaction amongst distant users.","PeriodicalId":292360,"journal":{"name":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129932760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. S. Silva, G. Cabral, Rodrigo Mendes de Carvalho Pereira
The emergence of Digital Musical Instruments (DMIs) in the computer music field has been providing new means of interaction with music performances. In particular, with the advancements in the Internet of Things (IoT) area, there has been an increase in augmented musical instruments containing LEDs within their own bodies to support music learning. These instruments can help musicians by displaying musical notes, chords and scales. This work uses research through design to present a preliminary study of some of the challenges related to the design of these instruments and attempts to provide some insights associated with their usability and development, particularly regarding an augmented acoustic guitar, the VioLED, developed alongside the company Daccord Music. The system provides three modes of operation: song mode, solo/improvising mode and animation mode. The preliminary results present challenges and obstacles pertinent to the usability and development of these instruments, identified across different design iterations. These challenges are associated with areas such as hardware-software co-design, usability, latency/jitter and energy consumption.
{"title":"A preliminary study of Augmented Musical Instruments for Study (AMIS) using research through design","authors":"E. S. Silva, G. Cabral, Rodrigo Mendes de Carvalho Pereira","doi":"10.5753/sbcm.2021.19438","DOIUrl":"https://doi.org/10.5753/sbcm.2021.19438","url":null,"abstract":"The emergence of Digital Musical Instruments (DMIs) in the computer music field has been providing new means of interaction with music performances. Particularly, with the advancements in the Internet of Things (IoT) area, there has been an increase in augmented musical instruments containing LEDs within their own bodies to support music learning. These instruments can help musicians by having the capabilities to display musical notes, chords and scales. This research uses research through design to present a preliminary study of some of the challenges related to the design of these instruments and attempts to provide some insights associated with their usability and development, particularly related to the development of an augmented acoustic guitar, the VioLED, developed alongside the company Daccord Music. The system provides three modes of operation: song mode, solo/improvising mode and animation mode. The preliminary results present challenges and obstacles pertinent to usability and development of these instruments identified in different iterations of design. These challenges are associated with areas such hardware-software co-design, usability, latency/jitter, energy consumption, etc.","PeriodicalId":292360,"journal":{"name":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133595641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recommendation systems are a constantly expanding field of study, with applications in domains such as e-commerce, film and music to provide suggestions to users. In music, more than 20 years of research have addressed the problem of generating good playlists that maximize the satisfaction of the largest possible number of listeners. Among automatic playlist generation methods aimed at a group of users, collaborative filtering is the most assertive way to capture what a user is unlikely to enjoy; to improve the performance of group recommendation algorithms, we store users' preferences, especially what they did not like, and make this data available as an input parameter to the algorithms. The platform described in this paper is intended to facilitate testing across these recommendation systems, standardizing data entry and simplifying requests. Using GraphQL as a framework together with the Apollo library greatly eases the integration of these APIs: the separation of data sources makes it possible to associate Spotify data with Deezer or Apple Music data, and this data is stored in the platform's database at connection time, so that future requests no longer need to query the Spotify API. This facilitates the consumption of data by the artificial intelligence algorithms, as well as the possible sharing of songs between services, since every service identifies songs by an ISRC code.
{"title":"An open source platform to assist the creation of group playlists through artificial intelligence algorithms","authors":"Flaviano Dias Fontes, G. Cabral, Geber Ramalho","doi":"10.5753/sbcm.2021.19442","DOIUrl":"https://doi.org/10.5753/sbcm.2021.19442","url":null,"abstract":"Recommendation systems are a constantly expanding study area, with applications in various fields such as e-commerce, films, music to promote the user’s suggestions. When we talk about music, we have more than 20 years of studies trying to solve the problem of a good generation of playlists that maximizes the satisfaction of a larger number of listeners. For automated automatic playlist generation methods focusing on a user group, we have the collaborative filter as a more assertive method to get the user’s not likely, to improve the performance of group recommendation algorithms we store the preferences of users Especially I did not like it by placing the availability of using this data as an algorithm input parameter. The platform described in This paper is intended to facilitate testing between these recommendation systems, standardizing data entry, and facilitating requests. The use of GraphQL as a framework associated with Apollo as a library, greatly facilitates the integration of these APIs, as the separation of data sources makes it possible to associate Spotify data with Deezer or Apple Music data, these data are stored in the database of the connection, so that in future requests it will no longer be necessary to consult the Spotify API, thus facilitating the consumption of data from the artificial intelligence algorithms, as well as a possible sharing of songs between services, since all services have an ISRC code to identify the songs.","PeriodicalId":292360,"journal":{"name":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","volume":"215 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133755432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rute Moura, G. Cabral, Jader Abreu, Mychelline Cunha, Horhanna Almeida
The technological acceleration of recent years has allowed for significant advances in data processing and in the study of music, an art rich in expressive and communicative information. In this article, we report experiences and contributions in the treatment of musical data extracted from digital MIDI and sound files for the development of systems that generate musical visualizations.
{"title":"Challenges to generate musical visualizations","authors":"Rute Moura, G. Cabral, Jader Abreu, Mychelline Cunha, Horhanna Almeida","doi":"10.5753/sbcm.2021.19437","DOIUrl":"https://doi.org/10.5753/sbcm.2021.19437","url":null,"abstract":"The technological acceleration of recent years has allowed for significant advances in data processing and the study of Music, an art with richly expressive and communicative information. In this article, we bring reports of experiences and contributions in the treatment of musical data extracted from digital MIDI and sound files, for the development of systems that generate musical visualizations.","PeriodicalId":292360,"journal":{"name":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","volume":"94 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114050667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The article presents the grainBIRD concept, inspired by the metaphor of a distributed fog of grain sounds. In short, granular synthesizers are distributed over a network of mobile devices. Applying an Interactive Genetic Algorithm (IGA) and exchanging Open Sound Control (OSC) messages over a Virtual Private Network, grainBIRDs generate sound grains in standalone and networked configurations. The grainBIRD concept dialogues with the Internet of Things (IoT) paradigm, since it was inspired by Cloud and Fog computing architectures. In this sense, this article presents the grainBIRD and grainBIRD Orchestra concepts, followed by a review of computer music and network technologies related to the project. The article also discusses the implementation of the grainBIRD application in the Pure Data programming environment and concludes with performance tests of network communication feasibility and of the system's sound generation capacity.
{"title":"grainBirds: Evolutionary granular synthesisers distributed in Fog Computing","authors":"J. Manzolli, Edelson Henrique Constantino","doi":"10.5753/sbcm.2021.19432","DOIUrl":"https://doi.org/10.5753/sbcm.2021.19432","url":null,"abstract":"The article presents the grainBIRD concept inspired by a distributed fog of grain sounds metaphor. Shortly, granular synthesizers are distributed in a mobile devices network. Applying Interactive Genetic Algorithm (IGA) and exchanging Open Sound Control (OSC) messages in a Virtual Private Network, grainBIRDs generate sound grains in standalone and network configurations. The grainBIRDs concept dialogues with the Internet of Things (IoT) paradigm since it was inspired by Cloud and Fog computing architectures. In this sense, this article presents the grainBIRD and grainBIRD Orchestra concepts, followed by a review on Computer Music and Network technologies related to the project. The article also discusses the computer implementation of the grainBIRD application using the Pure Data programming environment and concludes with performance tests of the network communication feasibility and the system sound generation capacities.","PeriodicalId":292360,"journal":{"name":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127883108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Loureiro, A. Silva, A. B. O. Neto, Davi Mota, F. B. Barros, F. Schiavoni, Gustavo Machado Oliveira, Luis de Souza da Silva, Ravi Shankar Viana Domingues, Renato Rodrigues Lisboa, T. Magalhaes, T. Campolina, Tiago Lima Bicalho Cruz
CEGeME - Centro de Pesquisa em Gesto e Expressão Musical (Center for Research on Musical Gesture and Expression) is affiliated with the Graduate Program in Music of the Universidade Federal de Minas Gerais (UFMG) and hosted at the School of Music in Belo Horizonte, Brazil. Since 2008 it has been dedicated to empirical research on musical performance, starting from musical content extracted from audio signals and from the three-dimensional spatial position of the musicians, recorded during a performance. Aiming to establish a musicological approach to different aspects of musical expressiveness, we investigate causal relations between the musicians' expressive intentions and the way they manipulate the acoustic material and move while playing a piece. The group has attracted researchers from several fields of knowledge as well as professional musicians driven by specific questions of musical expressiveness, such as: the use of the second valve of the modern bass trombone to achieve a better legato; source-filter acoustic parameters of the trumpet; facial emotions in the performance of opera singers; acoustic and psychoacoustic parameters associated with different oboe reed preparations; spectral modeling of note attacks on the clarinet; machine-learning-based analysis of musical timbre; acoustic and kinematic parameters associated with legato note transitions on the clarinet; analysis of articulation in the performance of a clarinet piece by Debussy; sonological applications in music education; the influence of interpretive consistency on the cohesion of chamber music performance; and the development of Python software to support the research carried out at CEGeME.
{"title":"CEGeME 2021 - Gestos, Movimentos, Performance musical e Pandemia","authors":"M. Loureiro, A. Silva, A. B. O. Neto, Davi Mota, F. B. Barros, F. Schiavoni, Gustavo Machado Oliveira, Luis de Souza da Silva, Ravi Shankar Viana Domingues, Renato Rodrigues Lisboa, T. Magalhaes, T. Campolina, Tiago Lima Bicalho Cruz","doi":"10.5753/sbcm.2021.19463","DOIUrl":"https://doi.org/10.5753/sbcm.2021.19463","url":null,"abstract":"O CEGeME - Centro de Pesquisa em Gesto e Expressão Musical é filiado ao Programa de Pós-Graduação em Música da Universidade Federal de Minas Gerais (UFMG), sediado na Escola de Música de Belo Horizonte, Brasil, desde 2008 é dedicado à pesquisa empírica da performance musical, que partem de informação de conteúdo musical extraída de sinais de áudio e da posição espacial tridimensional dos músicos, registradas durante uma performance musical. Com o objetivo de estabelecer uma abordagem musicológica a diferentes aspectos da expressividade musical, investigamos relações causais entre a intenção expressiva dos músicos e a forma como manipulam o material acústico e como se movem durante a execução de uma peça musical. O grupo atraiu postulantes de estudos de diversas áreas de conhecimento e músicos profissionais instigados por questões específicas envolvidas na expressividade musical, tais como, estudos sobre o uso da segunda válvula do trombone baixo moderno para obter um melhor legato, os parâmetros acústicos do filtro-fonte no trompete, as emoções faciais presentes na performance de cantores líricos, os parâmetros acústicos e psicoacústicos associados a diferentes preparações de palheta do oboé, a modelagem espectral do ataque de notas na clarineta, análise do timbre musical baseada em aprendizado de máquina, estudos de parâmetros acústicos e cinemáticos associados à execução da transição de notas em legato na clarineta, análise da articulação na performance de uma peça para clarineta de Debussy, aplicações sonológicas na educação musical, influência da consistência na interpretação na coesão da performance da música de câmara e o desenvolvimento de software na plataforma python criado para dar suporte às demandas da pesquisa conduzida no CEGeME.","PeriodicalId":292360,"journal":{"name":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121431797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alexandre Thomé da Silva de Almeida, R. Vieira, R. S. Oliveira, F. Schiavoni
The Internet of Musical Things (IoMusT) is a research area that seeks to bring the connectivity of the Internet of Things to the field of music and the arts. Along with this technology comes the possibility of connecting different musical things in a concert or artistic-creation environment, which would allow, for example, audience participation in these processes, both on site, through a local network, and remotely, over the Internet. In this work we discuss the IoMusT as well as its possibilities and challenges.
{"title":"Desafios da Internet das Coisas Musicais","authors":"Alexandre Thomé da Silva de Almeida, R. Vieira, R. S. Oliveira, F. Schiavoni","doi":"10.5753/sbcm.2021.19459","DOIUrl":"https://doi.org/10.5753/sbcm.2021.19459","url":null,"abstract":"A internet das Coisas Musicais é uma área de pesquisa que pretende levar a conectividade da Internet das Coisas para o campo da música e das artes. Junto com esta tecnologia surge a possibilidade de conexão de diferentes coisas musicais em um ambiente de concerto ou de criação artística que permitiria, por exemplo, a participação do público nestes processos, tanto de maneira presencial, por meio de uma rede local, quanto remoto, por meio da Internet. Neste trabalho trazemos algumas discussões sobre a IoMusT e também suas possibilidades e desafios.","PeriodicalId":292360,"journal":{"name":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122768741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article presents practical and artistic contributions to the field of computational musical systems based on audio feedback networks, which have been used as instruments for music creation in the author's artistic practice. The article begins with an introduction to the research field of feedback and self-organized music systems. Two systems are then presented: the first is a network of sinusoidal oscillators cross-modulated by frequency modulation, and the second is a network of transforming processes applied to pre-recorded sound samples.
{"title":"Playing Time-Variant Audio Feedback Networks","authors":"A. Monteiro","doi":"10.5753/sbcm.2021.19443","DOIUrl":"https://doi.org/10.5753/sbcm.2021.19443","url":null,"abstract":"This article presents practical and artistic contributions to the field of computational musical systems based on audio feedback networks which have been used as instruments for music creation in the author's artistic practice. The article begins with an introduction to the research field of feedback and selforganized music systems. Later on two systems are presented: the first is a network of cross-modulated sinusoidal oscillators (by frequency modulation), and the second is a network of transforming processes of pre-recorded sound samples.","PeriodicalId":292360,"journal":{"name":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115456151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The text presents a process aimed at computer-aided composition for percussion instruments based on Concatenative Sound Synthesis (CSS). After the introduction, we address the concept of "technomorphism" and the influence of electroacoustic techniques on instrumental composition. The third section covers processes of instrumental sound synthesis and their development in the context of Computer-Aided Composition (CAC) and Computer-Aided Music Orchestration (CAMO). Then, we describe the general principles of Concatenative Sound Synthesis (CSS). The fifth section covers our adaptation of CSS as a technomorphic model for Computer-Aided Composition/Orchestration, employing a corpus of percussion sounds/instruments. In the final section, we discuss future developments and the main characteristics of our implementation and strategy.
{"title":"Concatenative Sound Synthesis as a Technomorphic Model in Computer-Aided Composition","authors":"Júlio Guatimosim, J. Padovani, Carlos Guatimosim","doi":"10.5753/sbcm.2021.19431","DOIUrl":"https://doi.org/10.5753/sbcm.2021.19431","url":null,"abstract":"The text presents a process aimed at computer-aided composition for percussion instruments based on Concatenative Sound Synthesis (CSS). After the introduction, we address the concept of ”technomorphism” and the influence of electroacoustic techniques in instrumental composition. The third section covers processes of instrumental sound synthesis and its development in the context of Computer-Aided Composition (CAC) and Computer-Aided Music Orchestration (CAMO). Then, we describe the general principles of Concatenative Sound Synthesis (CSS). The fifth section covers our adaptation of CSS as a technomorphic model for Computer-Aided Composition/Orchestration, employing a corpus of percussion sounds/instruments. In the final section, we discuss future developments and the mains characteristics of our implementation and strategy.","PeriodicalId":292360,"journal":{"name":"Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021)","volume":"182 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115588028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}