João Tragtenberg, Filipe Calegario, G. Cabral, Geber Ramalho
This paper presents the development process of “TumTá”, a wearable Digital Dance and Music Instrument that triggers sound samples from foot stomps, and of “Pisada”, a dance-enabled MIDI pedalboard. They were developed between 2012 and 2017 for the use of Helder Vasconcelos, a dancer and musician trained in the traditions of Cavalo Marinho and Maracatu Rural from Pernambuco. The design of these instruments was inspired by traditional instruments such as the Zabumba and by the gestural vocabulary of Cavalo Marinho, with the goal of making music and dance at the same time. The development process is described across three prototyping phases, each guided by a different approach: building blocks, artisanal, and digital fabrication. The process of designing digital technology inspired by Brazilian traditions is analyzed, and lessons learned and future work are presented.
{"title":"TumTá and Pisada: Two Foot-controlled Digital Dance and Music Instruments Inspired by Popular Brazillian Traditions","authors":"João Tragtenberg, Filipe Calegario, G. Cabral, Geber Ramalho","doi":"10.5753/sbcm.2019.10426","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10426","url":null,"abstract":"This paper presents the development process of “TumTá”, a wearable Digital Dance and Music Instrument that triggers sound samples from foot stomps and “Pisada,” a dance-enabled MIDI pedalboard. It was developed between 2012 and 2017 for the use of Helder Vasconcelos, a dancer and musician formed by the traditions of Cavalo Marinho and Maracatu Rural from Pernambuco. The design of this instrument was inspired by traditional instruments like the Zabumba and by the gestural vocabulary from Cavalo Marinho, to make music and dance at the same time. The development process of this instrument is described in the three prototyping phases conducted by three approaches: building blocks, artisanal, and digital fabrication. The process of designing digital technology inspired by Brazilian traditions is analyzed, lessons learned, and future works are presented.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130595160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes the state of the art of real-time singing voice synthesis and presents its concept, applications, and technical aspects. A technological mapping and a literature review are carried out in order to indicate the latest developments in this area. We provide a brief comparative analysis of the selected works. Finally, we discuss challenges and future research problems. Keywords: Real-time singing voice synthesis, Sound Synthesis, TTS, MIDI, Computer Music.
{"title":"State of art of real-time singing voice synthesis","authors":"L. A. Z. Brum, E. Moreno","doi":"10.5753/sbcm.2019.10422","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10422","url":null,"abstract":"This paper describes the state of art of realtime singing voice synthesis and presents its concept, applications and technical aspects. A technological mapping and a literature review are made in order to indicate the latest developments in this area. We made a brief comparative analysis among the selected works. Finally, we have discussed challenges and future research problems. Keywords: Real-time singing voice synthesis, Sound Synthesis, TTS, MIDI, Computer Music.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117024407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Gomes, Josue Da Silva, Marco Leal, Thiago Nascimento
At every moment, innumerable emotions can raise questions about our daily attitudes. These emotions can hinder or stimulate different goals. Whether at school, at home, or in social life, the environment shapes this ongoing process. The musician is also subject to these emotions and incorporates them into his compositions for various reasons. Thus, musical composition draws on innumerable sources, for example, academic training, experiences, influences, and perceptions of the musical scene. In this context, this work develops the mAchine learning Algorithm Applied to emotions in melodies (3A). 3A recognizes the musician's melodies in real time in order to generate an accompanying melody. As input, 3A takes MIDI data from a synthesizer and generates accompanying MIDI output or a sound file through the ChucK programming language. Initially, this work uses the Gregorian modes for each compositional intention. If the musician changes the mode or key, 3A adapts in order to continue the musical sequence. Currently, 3A uses artificial neural networks to predict and adapt melodies, starting from mathematical series for the formation of melodies, which present interesting results for both mathematicians and musicians.
{"title":"3A: mAchine learning Algorithm Applied to emotions in melodies","authors":"C. Gomes, Josue Da Silva, Marco Leal, Thiago Nascimento","doi":"10.5753/sbcm.2019.10450","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10450","url":null,"abstract":"At every moment, innumerable emotions can indicate and provide questions about daily attitudes. These emotions can interfere or stimulate different goals. Whether in school, home or social life, the environment increases the itinerant part of the process of attitudes. The musician is also passive of these emotions and incorporates them into his compositions for various reasons. Thus, the musical composition has innumerable sources, for example, academic formation, experiences, influences and perceptions of the musical scene. In this way, this work develops the mAchine learning Algorithm Applied to emotions in melodies (3A). The 3A recognizes the musician’s melodies in real time to generate accompaniment melody. As input, The 3A used MIDI data from a synthesizer to generate accompanying MIDI output or sound file by the programming language Chuck. Initially in this work, it is using the Gregorian modes for each intention of composition. In case, the musician changes the mode or tone, the 3A has an adaptation to continuing the musical sequence. Currently, The 3A uses artificial neural networks to predict and adapt melodies. It started from mathematical series for the formation of melodies that present interesting results for both mathematicians and musicians.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123047126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The creation of Digital Musical Instruments (DMIs) tries to keep abreast of technological progress and sometimes does not worry about possible side effects of its development. Obsolescence and residues, rampant consumption, the constant need to generate innovation, code ephemerality, culture shock, and social apartheid are some of the traps that misguided DMI development can bring to society. Faced with all these possibilities, we try to understand what a sustainable digital instrument can be, analyzing several dimensions of sustainability, from economic to cultural and from social to environmental. In this paper, we point out some possibilities for reaching more sustainable instrument development, bringing the human being and values such as cooperation and collaboration to the center of the DMI development discussion. Through some questions, we seek to instigate a paradigm shift in art-science and provide a fertile field for future research.
{"title":"Sustainable Interfaces for Music Expression","authors":"Igino Silva Junior, F. Schiavoni","doi":"10.5753/sbcm.2019.10424","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10424","url":null,"abstract":"The creation of Digital Musical Instruments (DMI) tries to keep abreast the technological progress and sometimes it does not worry about some possible side effects of its development. Obsolescence and residues, rampant consumption, constant need to generate innovation, code ephemerality, culture shock, social apartheid, are some possible traps that an equivocated DMI development can bring up to society. Faced all these possibilities, we are trying to understand what can be a sustainable Digital Instrument analyzing several dimensions of sustainability, from economical to cultural, from social to environmental. In this paper, we point out some possibilities to try to reach up more sustainable instruments development bringing up the human being and values like cooperation and collaboration to the center of the DMI development discussion. Through some questions, we seek to instigate a paradigm shift in art-science and provide a fertile field for future research.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"369 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123433723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This workshop will give the audience an introduction to the ChucK audio programming language and to the Unity game engine in a hands-on experience, showing how such technologies can be used to achieve a new level of immersion through procedurally generated sounds responding to game events and visual effects. The workshop is intended for a broad audience, ranging from programmers to those with little to no knowledge of the field.
{"title":"Procedural Music in Games","authors":"J. Ayres, Pedro Arthur, Vitor G. Rolla, L. Velho","doi":"10.5753/sbcm.2019.10462","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10462","url":null,"abstract":"This workshop will bring to the audience an introduction to the Chuck audio programming language, to the Unity game engine within a hands-on experience how one can use such technologies to achieve a new level of immersion through procedural generated sounds responding to game events and visual effects. The workshop is intended to a broad audience ranging from programmers to ones with little to no knowledge in the field.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125940115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MusTIC is a research and innovation group concerned with conceiving and developing products and experiences that have an impact on music, education, visual and performing arts, and entertainment. In particular, we have been working with tools, methods, and concepts from physical computing, interaction design, and signal processing to build new interfaces for artistic expression, to develop tools for rapid prototyping, and to improve education through robotics and gamification.
{"title":"MusTIC: Research and Innovation Group on Music, Technology, Interactivity and Creativity","authors":"Filipe Calegario, G. Cabral, Geber Ramalho","doi":"10.5753/sbcm.2019.10441","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10441","url":null,"abstract":"MusTIC is a research and innovation group concerned in conceiving and developing products and experiences that have an impact on music, education, visual and performing arts, and entertainment. In particular, we have been working with tools, methods, and concepts from physical computing, interaction design, and signal processing to build new interfaces for artistic expression, to develop tools for rapid prototyping, and to improve education through robotics and gamification.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123961742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online streaming platforms have become one of the most important forms of music consumption. Most streaming platforms provide tools to assess the popularity of a song in the form of scores and rankings. In this paper, we address two issues related to song popularity. First, we predict whether an already popular song may attract higher-than-average public interest and become “viral”. Second, we predict whether sudden spikes in public interest will translate into long-term popularity growth. We base our findings on data from the streaming platform Spotify and consider appearances in its “Most-Popular” list as indicative of popularity and appearances in its “Virals” list as indicative of interest growth. We approach the problem as a classification task and employ a Support Vector Machine model built on popularity information to predict interest, and vice versa. We also verify whether acoustic information can provide useful features for both tasks. Our results show that popularity information alone is sufficient to predict future interest growth, achieving an F1-score above 90% at predicting whether a song will be featured in the “Virals” list after being observed in the “Most-Popular” list.
{"title":"Predicting Music Popularity on Streaming Platforms","authors":"C. V. Araujo, Marco Cristo, Rafael Giusti","doi":"10.5753/sbcm.2019.10436","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10436","url":null,"abstract":"Online streaming platforms have become one of the most important forms of music consumption. Most streaming platforms provide tools to assess the popularity of a song in the forms of scores and rankings. In this paper, we address two issues related to song popularity. First, we predict whether an already popular song may attract higher-than-average public interest and become “viral”. Second, we predict whether sudden spikes in public interest will translate into long-term popularity growth. We base our findings in data from the streaming platform Spotify and consider appearances in its “Most-Popular” list as indicative of popularity, and appearances in its “Virals” list as indicative of interest growth. We approach the problem as a classification task and employ a Support Vector Machine model built on popularity information to predict interest, and vice versa. We also verify if acoustic information can provide useful features for both tasks. Our results show that the popularity information alone is sufficient to predict future interest growth, achieving a F1-score above 90% at predicting whether a song will be featured in the “Virals” list after being observed in the “Most-Popular”.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116864776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An interaction design that leans towards musical traits based on and constrained by our cognitive and biological system could not only provide a better user experience but also minimize the collateral effects of excessive use of such technology to make music. This paper presents and discusses innate abilities involved in musical activities that, in the authors' view, could be considered in design guidelines for computer music technologies, especially those related to ubimus.
{"title":"Cognitive Offloading: Can ubimus technologies affect our musicality?","authors":"L. Costalonga, M. Pimenta","doi":"10.5753/sbcm.2019.10427","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10427","url":null,"abstract":"An interaction design that lean towards musical traits based on and constrained by our cognitive and biological system could, not only provide a better user experience, but also minimize collateral effects of excessive use of such technology to make music. This paper presents and discuss innate abilities involved in musical activities that - in the authors´ viewpoint - could be considered in design guidelines to computer music technologies, especially those related to ubimus.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128998443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper discusses a computer-aided musical analysis methodology anchored in psychoacoustic audio descriptors. The musicological aim is to analyze compositions centered on timbre manipulations that explore sound masses and granular synthesis as their building blocks. Our approach uses two psychoacoustic models, 1) critical bandwidths and 2) loudness, and two spectral feature extractors, 1) spectral centroid and 2) spectral spread. A review of the literature, contextualizing the state of the art of audio descriptors, is followed by a definition of the musicological context guiding our analysis and discussions. Further, we present the results of a comparative analysis of two acousmatic pieces: Schall (1995) by Horacio Vaggione and Asperezas (2018) by Micael Antunes. As these are electroacoustic works, there are no scores; therefore, segmentation and the subsequent musical analysis are important issues to be solved. Consequently, the article ends by discussing the methodological implications of the computational musicology addressed here.
{"title":"A computer-based framework to analyze continuous and discontinuous textural works using psychoacoustics audio descriptors","authors":"Micael Antunes, Danilo Rossetti, J. Manzolli","doi":"10.5753/sbcm.2019.10415","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10415","url":null,"abstract":"This paper discusses a computer-aided musical analysis methodology anchored on psychoacoustics audio descriptors. The musicological aim is to analyze compositions centered on timbre manipulations that explore sound masses and granular synthesis as their builders. Our approach utilizes two psychoacoustics models: 1) Critical Bandwidths and 2) Loudness, and two spectral features extractors: 1) Centroid and 2) Spectral Spread. A review of the literature, contextualizing the state-of-art of audio descriptors, is followed by a definition of the musicological context guiding our analysis and discussions. Further, we present results on a comparative analysis of two acousmatic pieces: Schall (1995) of Horacio Vaggione and Asperezas (2018) of Micael Antunes. As electroacoustic works, there are no scores, therefore, segmentation and the subsequent musical analysis is an important issue to be solved. Consequently, the article ends discussing the methodological implication of the computational musicology addressed here.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123015047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This is a concert proposal of Brazilian digital art, which brings in its creative core the historical and cultural aspects of certain locations in Brazil. The term Tecnofagia alludes to the concept of the anthropophagic movement (an artistic movement started in the twentieth century, founded and theorized by the poet Oswald de Andrade and the painter Tarsila do Amaral). The anthropophagic movement was a metaphor for a goal of cultural swallowing, in which foreign culture would not be denied but should not be imitated either. In his notes, Oswald de Andrade proposes the "cultural devouring of imported techniques to re-elaborate them autonomously, turning them into an export product." The Tecnofagia project is a collaborative creative and collective performance group that seeks to broaden aspects of live electronic music, video art, improvisation, and performance, taking them into a multimodal narrative context with essentially Brazilian sound elements such as: accents and phonemes; instrumental tones; soundscapes; and historical, political, and cultural contexts. In this sense, Tecnofagia tries to go beyond techniques and technologies of interactive performance, as it invites glances towards a Brazilian art-technological miscegenation. That is, it seeks the emergent characteristics of the encounters between media, art, spaces, culture, temporalities, objects, people, and technologies at the moment of performance.
{"title":"Tecnofagia: A Multimodal Rite","authors":"Luzilei Aliel, Rafael Fajiolli, Ricardo Thomasi","doi":"10.5753/sbcm.2019.10454","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10454","url":null,"abstract":"This is a concert proposal of Brazilian digital art, which brings in its creative core the historical and cultural aspects of certain locations in Brazil. The term Tecnofagia derives from an allusion to the concept of anthropophagic movement (artistic movement started in the twentieth century founded and theorized by the poet Oswald de Andrade and the painter Tarsila do Amaral). The anthropophagic movement was a metaphor for a goal of cultural swallowing where foreign culture would not be denied but should not be imitated. In his notes, Oswald de Andrade proposes the \"cultural devouring of imported techniques to re-elaborate them autonomously, turning them into an export product.\" The Tecnofagia project is a collaborative creative and collective performance group that seeks to broaden aspects of live electronic music, video art, improvisation and performance, taking them into a multimodal narrative context with essentially Brazilian sound elements such as:accents and phonemes; instrumental tones; soundscapes; historical, political and cultural contexts. In this sense, Tecnofagia tries to go beyond techniques and technologies of interactive performance, as it provokes glances for a Brazilian art-technological miscegenation. That is, it seeks emergent characteristics of the encounters between media, art, spaces, culture, temporalities, objects, people and technologies, at the moment of performance.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125368164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}