HMusic is a domain-specific language based on music patterns that can be used to write music and for live coding. The main abstractions provided by the language are patterns and tracks. Code written in HMusic looks like the patterns and multi-tracks available in music sequencers, drum machines and DAWs. HMusic provides primitives to design and combine patterns, generating new patterns. The objective of this paper is to extend the original design of HMusic to allow effects on tracks. We describe new abstractions to add effects to individual tracks and to groups of tracks, and how they influence the combinators for track composition and multiplication. HMusic allows the live coding of music and, as it is embedded in the Haskell functional programming language, programmers can write functions to manipulate effects on the fly. The current implementation of the language is compiled into Sonic Pi [1], and we describe how the compiler's back-end was modified to support the new abstractions for effects. HMusic can be downloaded from [2].
{"title":"Combining Effects in a Music Programming Language based on Patterns","authors":"A. R. D. Bois, R. Ribeiro","doi":"10.5753/sbcm.2019.10430","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10430","url":null,"abstract":"HMusic is a domain specific language based on music patterns that can be used to write music and live coding. The main abstractions provided by the language are patterns and tracks. Code written in HMusic looks like patterns and multi-tracks available in music sequencers, drum machines and DAWs. HMusic provides primitives to design and combine patterns generating new patterns. The objective of this paper is to extend the original design of HMusic to allow effects on tracks. We describe new abstractions to add effects on individual tracks and in groups of tracks, and how they influence the combinators for track composition and multiplication. HMusic allows the live coding of music and, as it is embedded in the Haskell functional programming language, programmers can write functions to manipulate effects on the fly. The current implementation of the language is compiled into Sonic Pi [1], and we describe how the compiler’s back-end was modified to support the new abstractions for effects. HMusic can be and can be downloaded from [2].","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117290646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting voice in a mixture of sound sources remains a challenging task in MIR research. The musical content can be perceived in many different ways as instrumentation varies. We evaluate how instrumentation affects singing voice detection in musical pieces using a standard spectral feature (MFCC). We trained Random Forest models on song remixes containing specific subsets of sound sources and compared them to models trained on the original songs. We present a preliminary analysis of the classification accuracy results.
{"title":"Instrumental Sensibility of Vocal Detector Based on Spectral Features","authors":"Shayenne Moura, M. Queiroz","doi":"10.5753/sbcm.2019.10451","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10451","url":null,"abstract":"Detecting voice in a mixture of sound sources remains a challenging task in MIR research. The musical content can be perceived in many different ways as instrumentation varies. We evaluate how instrumentation affects singing voice detection in pieces using a standard spectral feature (MFCC). We trained Random Forest models with song remixes for specific subsets of sound sources, and compare it to models trained with the original songs. We thus present a preliminary analysis of the classification accuracy results.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117321347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Since HTML5 and Web Audio were released, we have seen several initiatives to construct web-based instruments and musical applications on top of this technology. Web-based instruments involve composers, musicians and the audience in musical performances, building on the fact that a web instrument embedded in a web page can be accessed by everyone. Nonetheless, despite these applications being accessible over the network, it is not easy to use the network and these technologies to synchronize the participants of a musical performance and to control the level of interaction in a collaborative musical creation scenario. Based on a multimedia performance created in our research group, O Chaos das 5, we present in this paper some scenarios of interaction and control between musicians and the audience that can be achieved using a server-side programming infrastructure along with HTML5. In this performance, the audience took part in the musical soundscape by using their cellphones to access a set of digital instruments. These scenarios and the proposed solutions brought up a set of possibilities for balancing control and interaction in audience participation during live performances using web instruments.
{"title":"A technical approach of the audience participation in the performance 'O Chaos das 5'","authors":"J. Araújo, Avner Paulo, Igino Silva Junior, F. Schiavoni, Mauro César Fachina Canito, R. A. Costa","doi":"10.5753/sbcm.2019.10419","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10419","url":null,"abstract":"Since HTML 5 and web audio were released, we have seen several initiatives to construct web based instruments and musical applications based on this technology. Web based instruments involved composers, musicians and the audience in musical performances based in the fact that a web instrument embedded in a web page can be accessed by everyone. Nonetheless, despite the fact that these applications are accessible by the network, it is not easy to use the network and these technologies to synchronize the participants of a musical performance and control the level of interaction in a collaborative musical creation scenario. Based on a multimedia performance created in our research group, O Chaos das 5, we present in this paper some scenarios of interaction and control between musicians and the audience that can be reached using a server side programming infrastructure along with the HTML5. In this performance, the audience took part of the musical soundscape using a cellphone to access a set of digital instruments. These scenarios and the proposed solutions brought up a set of possibilities to balance control and interaction of audience participation into live performance using web instruments.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123133698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Air drums, or imaginary drums, are commonly played as a form of participating in musical experiences. The gestures derived from playing air drums can be acquired using accelerometers and then mapped into sound control responses. Commonly, the mapping process relies on a peak-picking procedure that maps local maxima or minima to sound triggers. In this work, we analyzed accelerometer and audio data comprising the motion of subjects playing air drums while vocalizing their expected results. Our qualitative analysis revealed that each subject produced a different relationship between their motion and their vocalization. This suggests that a fixed peak-picking procedure can be unreliable when designing accelerometer-controlled drum instruments, and that user-specific personalization can be an important feature in this type of virtual instrument. This poses a new challenge for the field: quickly personalizing virtual drum interactions. We have made our dataset available to foster future work on this subject.
{"title":"Visualizing Air Drums: Analysis of Motion and Vocalization Data Related to Playing Imaginary Drums","authors":"A. Caetano, T. Tavares","doi":"10.5753/sbcm.2019.10423","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10423","url":null,"abstract":"Air drums, or imaginary drums, are commonly played as a form of participating in musical experiences. The gestures derived from playing air drums can be acquired using accelerometers and then mapped into sound control responses. Commonly, the mapping process relies on a peak-picking procedure that maps local maxima or minima to sound triggers. In this work, we analyzed accelerometer and audio data comprising the motion of subjects playing air drums while vocalizing their expected results. Our qualitative analysis revealed that each subject produced a different relationship between their motion and the vocalization. This suggests that using a fixed peak-picking procedure can be unreliable when designing accelerometer-controlled drum instruments. Moreover, user-specific personalization can be an important feature in this type of virtual instrument. This poses a new challenge for this field, which consists of quickly personalizing virtual drum interactions. We made our dataset available to foster future work in this subject.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129926652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Music lyrics can convey a great part of the meaning in popular songs. Such meaning is important for humans to understand songs as related to typical narratives, such as romantic interests or life stories. This understanding is part of the affective aspects that can be used to choose songs to play in particular situations. This paper analyzes the effectiveness of using text mining tools to classify lyrics according to their narrative contexts. For this purpose, we built a vote-based dataset containing Brazilian popular music lyrics, which raters voted on online according to their context and valence, and applied several machine learning algorithms. We also compared the classification results to those of a typical human, and we compare the problem of identifying narrative contexts with that of identifying lyric valence. Our results indicate that narrative contexts can be identified more consistently than valence. We also show that human-based classification typically does not reach high accuracy, which suggests an upper bound for automatic classification. We approached the problem using a machine learning pipeline in which lyrics are projected into a vector space and then classified using general-purpose algorithms. We experimented with document representations based on sparse topic models [11, 12, 13, 14], which aim to find groups of words that typically appear together in the dataset. We also extracted part-of-speech tags for each lyric and used their histogram as features in the classification process.
{"title":"Identifying Narrative Contexts in Brazilian Popular Music Lyrics Using Sparse Topic Models: A Comparison Between Human-Based and Machine-Based Classification","authors":"André Dalmora, T. Tavares","doi":"10.5753/sbcm.2019.10417","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10417","url":null,"abstract":"Music lyrics can convey a great part of the meaning in popular songs. Such meaning is important for humans to understand songs as related to typical narratives, such as romantic interests or life stories. This understanding is part of affective aspects that can be used to choose songs to play in particular situations. This paper analyzes the effectiveness of using text mining tools to classify lyrics according to their narrative contexts. For such, we used a vote-based dataset and several machine learning algorithms. Also, we compared the classification results to that of a typical human. Last, we compare the problems of identifying narrative contexts and of identifying lyric valence. Our results indicate that narrative contexts can be identified more consistently than valence. Also, we show that human-based classification typically do not reach a high accuracy, which suggests an upper bound for automatic classification. narrative contexts. For such, we built a dataset containing Brazilian popular music lyrics which were raters voted online according to its context and valence. We approached the problem using a machine learning pipeline in which lyrics are projected into a vector space and then classified using general-purpose algorithms. We experimented with document representations based on sparse topic models [11, 12, 13, 14], which aims to find groups of words that typically appear together in the dataset. Also, we extracted part-of-speech tags for each lyric and used their histogram as features in the classification process.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127509366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The main objective of this talk is to report on the First Brazilian Symposium on Computer Music, which occurred in August 1994 in the city of Caxambu, Minas Gerais, promoted by UFMG. The meeting occurred one year after the creation of NUCOM, a group of young academics dedicated to this emerging research field in Brazil, gathered as a discussion list. This quite exciting and fancy event at Hotel Gloria in Caxambu imposingly launched the group onto the national as well as the international academic scene. First, due to the excellence of the event's output and its daring program, which included 34 selected papers by researchers from various institutions in Argentina, Brazil, Canada, Denmark, France, Hong Kong, Mexico, the UK and the USA, plus five lectures and two discussion panels offered by researchers from the most advanced computer music research centers around the world. The program also included eight concerts, two of them featuring traditional music, such as Bach, Mozart, and Brazilian music. Six computer music concerts presented 48 selected compositions submitted to the symposium. Second, as the symposium happened as a part of the 14th Congress of the Brazilian Computer Science Society (SBC), the excellence of its output attracted the interest of SBC's board of directors. They invited NUCOM to join the society as a Special Committee, one of the sub-groups of SBC dedicated to specific computer science topics. Beyond this description, this report aims to raise questions, arguments, and debates about today's format of NUCOM meetings, taking more seriously the interdisciplinary character of the methodological approaches adopted by the field. Interdisciplinarity should be pursued by striving to contaminate a growing number of different topics in the musical sciences, as well as in other research fields.
{"title":"The First Brazilian Symposium on Computer Music presents Brazilian computer music potentials - Caxambu, MG, 1994","authors":"M. Loureiro","doi":"10.5753/sbcm.2019.10463","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10463","url":null,"abstract":"The main objective of this talk is to report on the First Brazilian Symposium on Computer Music, which occurred in August 1994, at the city of Caxambu, Minas Gerais, promoted by the UFMG. The meeting occurred one year after the creation of NUCOM, a group of young academics dedicated to this emerging research field in Brazil gathered as a discussion list. This quite exciting and fancy event at Hotel Gloria in Caxambu was able to imposingly launch the group to the national, as well as to the international academic community. First, due to the excellency of the event’s output and its daring program, that included 34 selected papers by researchers from various institutions from Argentina, Brazil, Canada, Denmark, France, Hong Kong, Mexico, UK, and USA, five lectures an two panels of discussion offered by researchers from the most advanced computer music research centers all over the world. The program also included eight concerts, two of them featuring traditional music, such as Bach, Mozart, and Brazilian music.Six computer music concerts presented 48 selected compositions submitted to the symposium. Second, as the symposium happened as apart of the 14th Congress of Brazilian Computer Science Society (SBC), the excellency of its output was able to attract the interest of SBC’s board of directors. They invited NUCOM to integrate the society as a Special Committee, which are sub-groups of SBC dedicated to specific computer science topics. At the end of the description, this report aims at raising questions, arguments, and debates about today’s format of NUCOM meetings, considering more seriously the interdisciplinary character of the methodologic approaches adopted by the field. Interdisciplinarity should be pursued by striving to contaminate a growing number of different topics of musical sciences, as well as of other research fields.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117234620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This hands-on workshop comprises essential techniques for digital signal processing and machine learning. Participants will use the Python libraries librosa and scikit-learn to build an automatic audio classification system. The workshop will approach theoretical aspects through explorations of toy problems. Later, it will discuss practical issues in building scientific applications in the field.
{"title":"Introduction to automatic audio classification","authors":"T. Tavares","doi":"10.5753/sbcm.2019.10461","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10461","url":null,"abstract":"This hands-on workshop comprises essential techniques for digital signal processing and machine learning. Participants will use the Python libraries librosa and scikit-learn as support to build an automatic audio classification system. The workshop will use explorations in toy problems to approach theoretical aspects. Later, it will discuss practical issues for building a scientific applications in the field.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132973323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a historical overview and a brief report of the main recent activities at LCM (Laboratório de Computação Musical) of UFRGS (Universidade Federal do Rio Grande do Sul).
{"title":"LCM-Ufrgs Research Group Report: What are we doing in Computer Music?","authors":"Marcelo Pimenta, M. Johann, Rodrigo Schramm","doi":"10.5753/sbcm.2019.10442","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10442","url":null,"abstract":"In this paper, we present a historical overview and a brief report of the main recent activities at LCM (Laboratório de Computação Musical) of UFRGS (Universidade Federal do Rio Grande do Sul).","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125594836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Music Information Retrieval (MIR) is a growing field of research concerned with recovering and generating useful information about music in general. One classic problem in MIR is key-finding, which can be described as the activity of finding the most stable tone and mode of a given musical piece or a fragment of it. This problem, however, is usually modeled with audio as input, sometimes MIDI, and little attention seems to be given to approaches based on musical notation and music theory. This paper presents a method of key-finding that takes chord annotations as its only input. A new metric is proposed for calculating distances between tonal pitch spaces and chords, which is then used to create a key-finding method for chord annotation sequences. We achieve a success rate from 77.85% up to 88.75% on the whole database, depending on whether and how some approximation parameters are configured. We argue that music-theoretical approaches independent of audio can still bring progress to the MIR area and could certainly be used as complementary techniques.
{"title":"A chord distance metric based on the Tonal Pitch Space and a key-finding method for chord annotation sequences","authors":"Lucas Marques","doi":"10.5753/sbcm.2019.10435","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10435","url":null,"abstract":"Music Information Retrieval (MIR) is a growing field of research concerned about recovering and generating useful information about music in general. One classic problem of MIR is key-finding, which could be described as the activity of finding the most stable tone and mode of a determined musical piece or a fragment of it. This problem, however, is usually modeled for audio as an input, sometimes MIDI, but little attention seems to be given to approaches considering musical notations and musictheory. This paper will present a method of key-finding that has chord annotations as its only input. A new metric is proposed for calculating distances between tonal pitch spaces and chords, which will be later used to create a key-finding method for chord annotations sequences. We achieve a success rate from 77.85% up to 88.75% for the whole database, depending on whether or not and how some parameters of approximation are configured. We argue that musical-theoretical approaches independent of audio could still bring progress to the MIR area and definitely could be used as complementary techniques.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125562483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chaos-based encryption uses a chaotic dynamical system to encrypt a file. The aim of this study was to investigate the use of the chaotic Cubic Map to encrypt data, in particular audio files. A simple algorithm was developed to encrypt and decrypt audio data. The effectiveness of the method was measured by means of correlation coefficient calculation and spectral entropy, and also by comparing waveforms. The measurements showed that satisfactory confusion levels of the original data were reached within a few seconds. This indicates that the Cubic Map can be used as a source of encryption keys, with security indicators as good as or better than those of other schemes.
{"title":"Audio Encryption Scheme based on Pseudo-orbit of Chaotic Map","authors":"E. P. Magalhães, Thiago A Santos, E. Nepomuceno","doi":"10.5753/sbcm.2019.10448","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10448","url":null,"abstract":"Chaos-based encryption uses a chaotic dynamic system to encrypt a file. The aim of this study was to investigate the use of the chaotic Cubic Map to encrypt data, in particular, audio files. A simple algorithm was developed to encrypt and decrypt an audio data. The effectiveness of the method was measured by means of the correlation coefficient calculation, spectral entropy and also by comparing waveforms. The measurements were shown to lead to satisfactory confusion levels of the original data, within a few seconds. This indicates that the Cubic Map can be used as a source for encryption keys, with as good or better security indicators when compared to other schemes.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123325894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}