Marcos Garcia, Gutenberg Lima Marques, Matheus Barros, Juciane Araldi Beltrame
This short paper presents ongoing research that intertwines the theme of educational digital content production for the internet, specifically the audio podcast format, with the pedagogical practices developed in the context of music teacher education and emergency remote teaching. We aim to analyze the experience of producing digital pedagogical-musical content in the podcast format by students of two Music Education Degree courses. The study uses a qualitative approach, and the methodological strategy is based on concepts of action-research. The research is being developed by the Technologies and Music Education Research Group (Tedum-UFPB) and by a team of professors from two federal higher education institutions. Data collection will be carried out through field diaries kept by the research team and through conversation roundtables with the participant students, in addition to the documentation, registration, and analysis of the phases that make up the action-research cycle. The research presented here can contribute to the processes of creating and conceiving audio-format content, seeking methodologies specific to the musical field, enhancing collective spaces for creation, valuing different authorships, and encouraging pedagogical and musical diversity.
"Production of digital content in music teacher education: a study about podcast’s possibilities". Marcos Garcia, Gutenberg Lima Marques, Matheus Barros, Juciane Araldi Beltrame. Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021), 2021-10-24. DOI: https://doi.org/10.5753/sbcm.2021.19448
The conceptualization of musical timbre, which allows its quantitative evaluation in an audio recording, is still an open issue. This paper presents a set of dimensionless descriptors to assess the musical timbre of woodwind instruments in recordings of the fourth octave of the tempered musical scale. These descriptors are calculated from Fast Fourier Transform (FFT) spectra using the Python programming language, specifically the SciPy library. The characteristic spectral signatures of the clarinet, bassoon, transverse flute, and oboe are obtained in the fourth musical octave, and the presence of degeneration is observed for some musical sounds, that is, two different aerophones may present the same harmonics. It is concluded that the proposed descriptors are sufficient to differentiate the aerophones studied, allowing their recognition even when they present the same set of harmonic frequencies.
"Applications of FFT for timbral characterization in woodwind instruments". Yubiry Gonzalez, R. Prati. Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021), 2021-10-24. DOI: https://doi.org/10.5753/sbcm.2021.19428
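The abstract does not define the paper's specific descriptors, but the core idea — dimensionless ratios extracted from an FFT spectrum with SciPy — can be sketched as follows. The tone, fundamental frequency, and number of harmonics here are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.fft import rfft, rfftfreq

def harmonic_amplitudes(signal, sr, f0, n_harmonics=8):
    """Return the amplitudes of the first n harmonics of f0, divided by
    the fundamental's amplitude. The ratios are dimensionless, so they
    are comparable across recordings made at different levels."""
    spectrum = np.abs(rfft(signal))
    freqs = rfftfreq(len(signal), d=1.0 / sr)
    amps = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f0))  # nearest FFT bin to k*f0
        amps.append(spectrum[idx])
    amps = np.array(amps)
    return amps / amps[0]

# Synthetic "clarinet-like" tone: strong odd harmonics of A4 (440 Hz).
sr = 44100
t = np.arange(sr) / sr
tone = sum(a * np.sin(2 * np.pi * k * 440 * t)
           for k, a in [(1, 1.0), (3, 0.6), (5, 0.3)])
ratios = harmonic_amplitudes(tone, sr, 440)
```

Because even harmonics are absent from the synthetic tone, `ratios` keeps only the odd entries, giving the kind of characteristic signature the paper uses to tell aerophones apart.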
NESCoM is a multidisciplinary research centre formed by musicians, engineers, and computer scientists. This paper reports the ongoing projects and developments of the last two years, and thus updates the research report published in 2019. As a Brazilian research group with solid international collaboration, we have opted to alternate the language in which the report is written; the 2021 version is therefore in Portuguese. The main projects developed in this two-year timeframe are related to interaction design based on (bio)musicality, robotic music performance, and ubiquitous music. A strong artistic production is also described. If you are interested in learning more about the projects, do not hesitate to contact us.
"Relatório de Pesquisa NESCoM 2021". L. Costalonga, Marcus Vinicius Marvila das Neves. Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021), 2021-10-24. DOI: https://doi.org/10.5753/sbcm.2021.19465
With the advance of electronics and of techniques and algorithms for digital signal processing, digital equipment has been gaining more and more space in the music scene. Micro-processed tools now generate effects such as modulation, echo, and distortion of sounds produced by musical instruments, previously obtained only with analog units. In this context, this study aimed to develop a prototype distortion effects unit using a Raspberry Pi (a low-cost, small single-board computer) and affordable electronic components. Five nonlinear functions were used: four from the literature and one originally developed by the authors. These functions model the behavior of an active element (such as a transistor, valve, or operational amplifier), which produces distortion in the audio signal when its amplification threshold is exceeded. This article presents all the steps in the development of the analog circuits for signal acquisition and output, as well as the simulation and implementation of the functions on the microcontroller. Finally, with the finished prototype, a frequency response analysis is performed, and the sound results achieved by the algorithms are compared with each other and with other distortion units.
"Electric guitar distortion effects unit using a Raspberry Pi". Renato Santos Pereira, R. V. Andreão. Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021), 2021-10-24. DOI: https://doi.org/10.5753/sbcm.2021.19436
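The abstract does not reproduce the five nonlinear functions, but two waveshaping functions common in the distortion literature illustrate the approach: a smooth saturating curve and an abrupt clipper. The drive and threshold values below are illustrative assumptions.

```python
import numpy as np

def soft_clip(x, drive=5.0):
    """Hyperbolic-tangent waveshaper: roughly linear for small inputs,
    saturating toward +/-1 as drive * |x| grows, which mimics an active
    element pushed past its linear amplification region."""
    return np.tanh(drive * x) / np.tanh(drive)

def hard_clip(x, threshold=0.5):
    """Abrupt clipping at a fixed threshold, a harsher distortion with
    stronger high-order harmonics."""
    return np.clip(x, -threshold, threshold) / threshold

# A loud input saturates; a quiet one stays inside the linear region.
sr = 44100
t = np.arange(sr // 100) / sr
quiet = 0.05 * np.sin(2 * np.pi * 220 * t)
loud = 0.9 * np.sin(2 * np.pi * 220 * t)
```

On a Raspberry Pi, functions like these would be applied sample-by-sample (or block-by-block) to the digitized guitar signal between the acquisition and output circuits.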
Leonardo Vilela de Abreu Silva Pereira, T. Tavares
Automatic classification problems are common in the music information retrieval domain. Among them, the automatic identification of music genre and music mood are frequently approached problems. The labels related to genre and mood are both generated by humans according to subjective experiences tied to each individual's growth and development; that is, each person attributes different meanings to genre and mood labels. However, because both genre and mood arise from a similar process related to an individual's social surroundings, we hypothesize that they are somehow related. In this study, we present experiments performed on the Emotify dataset, which comprises audio data and genre- and mood-related tags for several pieces. We show that we can predict genre from audio data with high accuracy; however, we consistently obtained low accuracy when predicting mood tags. Additionally, we tried to use mood tags to predict genre and also obtained low accuracy. An analysis of the feature space reveals that our features are more related to genre than to mood, which explains the results from a linear algebra viewpoint. However, we still cannot find a music-related explanation for this difference.
"An interplay between genre and emotion prediction in music: a study in the Emotify dataset". Leonardo Vilela de Abreu Silva Pereira, T. Tavares. Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021), 2021-10-24. DOI: https://doi.org/10.5753/sbcm.2021.19421
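The paper's linear-algebra explanation — the features encode genre but not mood — can be mirrored on synthetic data. This sketch is not the paper's method or features: it fabricates a feature space where one label shifts the feature means and the other is independent of them, then compares a crude nearest-centroid classifier on both.

```python
import numpy as np

def centroid_accuracy(X, y):
    """Train-on-first-half, test-on-second-half nearest-centroid
    accuracy: a minimal stand-in for a classifier, enough to compare
    how well a feature space separates two different label sets."""
    half = len(X) // 2
    Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

rng = np.random.default_rng(0)
n = 400
features = rng.normal(size=(n, 8))      # synthetic stand-in for audio features
genre = rng.integers(0, 2, size=n)
features[genre == 1] += 1.5             # genre is encoded in the features
mood = rng.integers(0, 2, size=n)       # mood is independent of the features

genre_acc = centroid_accuracy(features, genre)
mood_acc = centroid_accuracy(features, mood)
```

As in the paper's findings, `genre_acc` is high while `mood_acc` stays near chance: no classifier can recover a label that the feature space does not encode.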
Diego Furtado Silva, A. Silva, Luís Felipe Ortolan, R. Marcacini
Deep learning has become the standard procedure for dealing with Music Information Retrieval problems. This category of machine learning algorithms has achieved state-of-the-art results in several tasks, such as classification and auto-tagging. However, obtaining a well-performing model requires a significant amount of data, and most of the available music datasets lack cultural diversity. Therefore, the performance of the most widely used pre-trained models on underrepresented music genres is unknown. If music models follow the same trend as language models in Natural Language Processing, they should perform worse on music styles that are not present in their training data. To verify this assumption, we use a well-known music model designed for auto-tagging in the task of genre recognition. We trained this model from scratch using a large general-domain dataset and two subsets covering specific domains. We empirically show that models trained on domain-specific data perform better than generalist models at classifying music in the same domain, even when trained with a smaller dataset. This outcome is distinctly observed in the subset that mainly contains Brazilian music, including several usually underrepresented genres.
"On Generalist and Domain-Specific Music Classification Models and Their Impacts on Brazilian Music Genre Recognition". Diego Furtado Silva, A. Silva, Luís Felipe Ortolan, R. Marcacini. Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021), 2021-10-24. DOI: https://doi.org/10.5753/sbcm.2021.19427
A. Silva, Paulo Viviurka Do Carmo, R. Marcacini, D. F. Silva
In scenarios involving musical data, the data are usually high-dimensional and span different modalities, such as audio and text, which raises the cost of machine learning tasks. Instance selection is a promising pre-processing step to reduce these challenges. To explore the multimodality of music information, we introduce musical data instance selection based on heterogeneous network models. We propose and evaluate ten different heterogeneous networks to identify the most representative relationships among various related musical features, including songs, artists, genres, and mel-spectrograms. The results obtained allow us to define which network structure is most appropriate considering the volume of available data and the type of information the features carry. Finally, we analyze the relevance of the musical features and identify which relationships do not contribute to instance selection.
"Instance Selection for Music Genre Classification using Heterogeneous Networks". A. Silva, Paulo Viviurka Do Carmo, R. Marcacini, D. F. Silva. Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021), 2021-10-24. DOI: https://doi.org/10.5753/sbcm.2021.19419
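The paper's ten network structures are not described in the abstract; this toy sketch only illustrates the underlying idea of a heterogeneous network for instance selection. Song, artist, and genre node names, and the two-hop representativeness score, are invented for illustration.

```python
from collections import defaultdict

# A toy heterogeneous network: song nodes linked to artist and genre
# nodes, stored as an adjacency dict over typed node names.
edges = [
    ("song:s1", "artist:a1"), ("song:s1", "genre:samba"),
    ("song:s2", "artist:a1"), ("song:s2", "artist:a3"),
    ("song:s2", "genre:samba"),
    ("song:s3", "artist:a2"), ("song:s3", "genre:samba"),
    ("song:s4", "artist:a3"), ("song:s4", "genre:jazz"),
]

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
adj = dict(adj)  # freeze: unknown keys now raise instead of growing the dict

def representativeness(song):
    """Number of songs reachable in two hops (shared artist or genre):
    a crude proxy for how central an instance is in its neighborhood."""
    two_hop = set()
    for mid in adj[song]:
        two_hop |= adj[mid]
    two_hop.discard(song)
    return len(two_hop)

# Instance selection: keep the most representative song per genre.
selected = {}
for node in adj:
    if node.startswith("genre:"):
        songs = [s for s in adj[node] if s.startswith("song:")]
        selected[node] = max(songs, key=representativeness)
```

The point of the heterogeneous structure is that a song's importance is computed through nodes of other types (artists, genres), which is what the paper's ten network variants change.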
Automatic Chord Estimation is a Music Information Retrieval task that tries to extract the chords of a song in a usable manner. In recent years, many researchers have tried to outperform the quantitative metrics, but the results lack reproducibility for those who need them: musicians. In this article, we review the state of the art in some of these areas and run a code challenge that was evaluated both with some of the MIREX metrics and by musicians. With these results, we assess the need for evolution in the Estimation and Alignment tasks of the MIR area.
"Evaluating the Automatic Chord Estimation and Alignments tasks needs using metrics from a code challenge". Valter Jorge da Silva, G. Cabral. Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021), 2021-10-24. DOI: https://doi.org/10.5753/sbcm.2021.19425
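The abstract does not specify which MIREX metrics were used; a central one in chord-estimation evaluation is weighted chord symbol recall: the fraction of total time during which the estimated label matches the reference. The sketch below is a simplified form (exact string match, no enharmonic or chord-vocabulary mapping), with an invented chord sequence as input.

```python
def chord_symbol_recall(reference, estimated):
    """Weighted chord symbol recall over segment lists. Each list holds
    (start, end, label) triples covering the same time span; the score is
    time-with-matching-label divided by total reference time."""
    matched = 0.0
    total = 0.0
    for r_start, r_end, r_label in reference:
        total += r_end - r_start
        for e_start, e_end, e_label in estimated:
            overlap = min(r_end, e_end) - max(r_start, e_start)
            if overlap > 0 and e_label == r_label:
                matched += overlap
    return matched / total

# Illustrative annotations: the estimate switches to A:min one second early.
reference = [(0.0, 2.0, "C:maj"), (2.0, 4.0, "G:maj"), (4.0, 6.0, "A:min")]
estimated = [(0.0, 2.0, "C:maj"), (2.0, 3.0, "G:maj"), (3.0, 6.0, "A:min")]
score = chord_symbol_recall(reference, estimated)  # 5 of 6 seconds match
```

A duration-weighted score like this is what the paper contrasts with musicians' judgments: a high recall can still hide errors (e.g., misaligned boundaries) that matter to a player reading the chords.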
The production of animations for Musical Information Visualization is still scarce and faces challenges in visually communicating information, owing to the need to master editing software and the technical skills and specific knowledge demanded by each area involved: animation, music, design, and computing. In this article, we present a systematic review of the animated visualization area and, based on its conception processes, elaborate an experimental model for the creation, prototyping, and construction of musical animations. Through rapid prototyping sessions, we obtained qualitative results with feedback collection. We conclude that animation is an exceptional ally in the process of designing and creating a musical visualization, as it facilitates using the representation of time to communicate structural elements of music as they are dynamically arranged in a graphic area.
"Design process and rapid prototyping of animated music visualizations". Horhanna Almeida, G. Cabral, Rute Moura. Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021), 2021-10-24. DOI: https://doi.org/10.5753/sbcm.2021.19435
Gabriel R. G. Barbosa, Bruna C. Melo, Gabriel P. Oliveira, Mariana O. Silva, Danilo B. Seufitelli, M. Moro
Consuming music through streaming has made huge volumes of data available. We collect a part of such data and perform cross-era comparative analyses between physical and digital media for successful artists within the Brazilian music market. Given an artist’s career, we focus on hot streak periods, defined as high-impact bursts occurring in sequence. Specifically, we construct artists’ success time series to detect and characterize hot streak periods for both the physical and digital eras. Then, we assess their features, analyze them at the genre scale, and perform a cluster analysis to identify groups of artists with distinct success levels. For both eras, we find the same clusters: Spike Hit Artists, Big Hit Artists, and Top Hit Artists. By identifying the core of each era, our insights shed light on significant changes in the dynamics of the music industry over the years.
"Hot Streaks in the Brazilian Music Market: A Comparison Between Physical and Digital Eras". Gabriel R. G. Barbosa, Bruna C. Melo, Gabriel P. Oliveira, Mariana O. Silva, Danilo B. Seufitelli, M. Moro. Anais do XVIII Simpósio Brasileiro de Computação Musical (SBCM 2021), 2021-10-24. DOI: https://doi.org/10.5753/sbcm.2021.19440
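The paper's detection method is not given in the abstract; one simple reading of "high-impact bursts occurring in sequence" is a contiguous run of periods above the career's mean impact. The threshold choice and the toy career series below are illustrative assumptions.

```python
import numpy as np

def hot_streaks(impact, min_len=2):
    """Return (start, end) index pairs (end exclusive) of contiguous runs
    of at least min_len periods whose impact exceeds the series mean."""
    above = impact > impact.mean()
    streaks, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                streaks.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_len:
        streaks.append((start, len(above)))
    return streaks

# Toy yearly success series for one artist (e.g., chart points per year).
career = np.array([1, 2, 9, 8, 10, 2, 1, 7, 9, 1])
streaks = hot_streaks(career)
```

Features of the detected streaks (count, length, peak height) are the kind of per-artist measurements that feed the paper's cluster analysis across the physical and digital eras.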