A multi-genre model for music emotion recognition using linear regressors
D. Griffiths, Stuart Cunningham, Jonathan Weinel, R. Picking
Pub Date: 2021-08-08 | DOI: 10.1080/09298215.2021.1977336
ABSTRACT Making the link between human emotion and music is challenging. Our aim was to produce an efficient system that emotionally rates songs from multiple genres. To achieve this, we employed a series of online self-report studies, utilising Russell's circumplex model. The first study (n = 44) identified audio features that map to arousal and valence for 20 songs. From this, we constructed a set of linear regressors. The second study (n = 158) measured the efficacy of our system, utilising 40 new songs to create a ground truth. Results show our approach may be effective at emotionally rating music, particularly in the prediction of valence.
{"title":"A multi-genre model for music emotion recognition using linear regressors","authors":"D. Griffiths, Stuart Cunningham, Jonathan Weinel, R. Picking","doi":"10.1080/09298215.2021.1977336","DOIUrl":"https://doi.org/10.1080/09298215.2021.1977336","url":null,"abstract":"ABSTRACT Making the link between human emotion and music is challenging. Our aim was to produce an efficient system that emotionally rates songs from multiple genres. To achieve this, we employed a series of online self-report studies, utilising Russell's circumplex model. The first study (n = 44) identified audio features that map to arousal and valence for 20 songs. From this, we constructed a set of linear regressors. The second study (n = 158) measured the efficacy of our system, utilising 40 new songs to create a ground truth. Results show our approach may be effective at emotionally rating music, particularly in the prediction of valence.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"355 - 372"},"PeriodicalIF":1.1,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47452290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition of emotions in music through the Adaptive-Network-Based Fuzzy (ANFIS)
Paulo Sergio da Conceição Moreira, D. Tsunoda
Pub Date: 2021-08-08 | DOI: 10.1080/09298215.2021.1977339
This study aims to recognise emotions in music using the Adaptive Network-Based Fuzzy Inference System (ANFIS). To this end, we applied the technique to 877 thirty-second MP3 files collected directly from the YouTube platform, representing the emotions anger, fear, happiness, sadness, and surprise. We developed four classification strategies, consisting of sets of five, four, three, and two emotions. The results were considered promising, especially for three and two emotions, for which the highest hit rates were 65.83% (anger, happiness and sadness) and 88.75% (anger and sadness). A reduction in the hit rate was observed when fear and happiness appeared in the same set, raising the hypothesis that audio content alone is not enough to distinguish between these emotions. Based on the results, we identified potential in applying the ANFIS framework to problems marked by uncertainty and subjectivity.
{"title":"Recognition of emotions in music through the Adaptive-Network-Based Fuzzy (ANFIS)","authors":"Paulo Sergio da Conceição Moreira, D. Tsunoda","doi":"10.1080/09298215.2021.1977339","DOIUrl":"https://doi.org/10.1080/09298215.2021.1977339","url":null,"abstract":"This study aims to recognise emotions in music through the Adaptive-Network-Based Fuzzy (ANFIS). For this, we applied such structure in 877 MP3 files with thirty seconds duration each, collected directly on the YouTube platform, which represent the emotions anger, fear, happiness, sadness, and surprise. We developed four classification strategies, consisting of sets of five, four, three, and two emotions. The results were considered promising, especially for three and two emotions, whose highest hit rates were 65.83% for anger, happiness and sadness, and 88.75% for anger and sadness. A reduction in the hit rate was observed when the emotions fear and happiness were in the same set, raising the hypothesis that only the audio content is not enough to distinguish between these emotions. Based on the results, we identified potential in the application of the ANFIS framework for problems with uncertainty and subjectivity.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"342 - 354"},"PeriodicalIF":1.1,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44382446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Space, sonic trajectories and the perception of cadence in electroacoustic music
Luca Danieli, Maria A. G. Witek, Christopher Haworth
Pub Date: 2021-05-27 | DOI: 10.1080/09298215.2021.1927116
This paper reports on an exploratory study in the field of electroacoustic music aimed at understanding whether a sensation similar to that associated with the concept of “cadence” in tonal music can be identified when listening to sounds diffused in space. Using a variety of patterned stimuli in a perceptual experiment, we asked listeners to evaluate the completeness of multiple trajectories on the horizontal plane. The results show differences across multiple categories of listeners and suggest that listeners acquainted with spatial music consider trajectories more complete when the last two impulses are presented in opposite directions from the centre.
{"title":"Space, sonic trajectories and the perception of cadence in electroacoustic music","authors":"Luca Danieli, Maria A. G. Witek, Christopher Haworth","doi":"10.1080/09298215.2021.1927116","DOIUrl":"https://doi.org/10.1080/09298215.2021.1927116","url":null,"abstract":"This paper reports on an exploratory study in the field of electroacoustic music aimed at understanding whether a sensation similar to that associated with the concept of “cadence” in relation to tonal music can be identified when listening to sounds diffused in space. Using a variety of patterned stimuli in a perceptual experiment, we asked listeners to evaluate the completeness of multiple trajectories on the horizontal plane. The results show differences across multiple categories of listeners, and suggest that listeners acquainted with spatial music consider trajectories more complete when presenting the last two impulses at opposite directions from the centre.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"266 - 278"},"PeriodicalIF":1.1,"publicationDate":"2021-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2021.1927116","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45274813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A model of large-scale thematic structure
Edward T. R. Hall, M. Pearce
Pub Date: 2021-05-27 | DOI: 10.1080/09298215.2021.1930062
The coherent organisation of thematic material into large-scale structures within a composition is an important concept in both traditional and cognitive theories of music. However, empirical evidence that listeners perceive such structures is scarce. Providing a more nuanced approach, this paper introduces a computational model of the hypothesised cognitive mechanisms underlying the perception of large-scale thematic structure. Repetition detection based on statistical learning forms the model's foundation, on the hypothesis that the predictability arising from repetition creates perceived thematic coherence. The model produces measures that characterise structural properties of a corpus of 623 monophonic compositions, and exploratory analysis reveals the extent to which these measures vary systematically and independently.
{"title":"A model of large-scale thematic structure","authors":"Edward T. R. Hall, M. Pearce","doi":"10.1080/09298215.2021.1930062","DOIUrl":"https://doi.org/10.1080/09298215.2021.1930062","url":null,"abstract":"The coherent organisation of thematic material into large-scale structures within a composition is an important concept in both traditional and cognitive theories of music. However, empirical evidence supporting their perception is scarce. Providing a more nuanced approach, this paper introduces a computational model of hypothesised cognitive mechanisms underlying perception of large-scale thematic structure. Repetition detection based on statistical learning forms the model's foundation, hypothesising that predictability arising from repetition creates perceived thematic coherence. Measures are produced that characterise structural properties of a corpus of 623 monophonic compositions. Exploratory analysis reveals the extent to which these measures vary systematically and independently.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"220 - 241"},"PeriodicalIF":1.1,"publicationDate":"2021-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2021.1930062","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47217708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bright vowels are favoured on weak beats in popular music lyrics
Paolo Ammirante, J. Rovetti
Pub Date: 2021-05-27 | DOI: 10.1080/09298215.2021.1936076
A previous study showed that ‘bright’ vowels (i.e. front vowels, which have higher second formants) are favoured for on-beat words in hip-hop music. Here we partially replicated these findings in a more diverse sample of pop songs from the Rolling Stone Corpus. Stressed monosyllables were classified by their vowel’s place of articulation and by their metric position. Bright vowels were 9–13% more likely on weak (but not strong) beats, and on the metric positions immediately surrounding them, than on other metric positions. Favouring bright vowels on and around weak beats may mitigate masking by the snare drum, which typically plays on weak beats.
{"title":"Bright vowels are favoured on weak beats in popular music lyrics","authors":"Paolo Ammirante, J. Rovetti","doi":"10.1080/09298215.2021.1936076","DOIUrl":"https://doi.org/10.1080/09298215.2021.1936076","url":null,"abstract":"A previous study showed that ‘bright’ vowels (i.e. front vowels, which have higher second formants) are favoured for on-beat words in hip-hop music. Here we partially replicated these findings in a more diverse sample of pop songs from the Rolling Stone Corpus. Stressed monosyllables were classified by their vowel’s place of articulation and their metric position. Bright vowels were 9–13% more likely on weak (but not strong) beats and metric positions immediately surrounding than other metric positions. Favouring bright vowels on and surrounding weak beats may mitigate masking by the snare drum, which typically plays on weak beats.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"259 - 265"},"PeriodicalIF":1.1,"publicationDate":"2021-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2021.1936076","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45870878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Musical robot swarms, timing, and equilibria
M. Krzyżaniak
Pub Date: 2021-04-08 | DOI: 10.1080/09298215.2021.1910313
This paper studies swarms of autonomous musical robots, and its contributions are twofold. First, I introduce Dr. Squiggles, a simple rhythmic musical robot that serves as a general platform for studying human-robot and robot-robot musical interaction. Second, I use three Dr. Squiggles robots to study what happens when musical robots listen to, learn from, and respond to one another while improvising music together. A supplementary video at https://www.youtube.com/watch?v=yN711HXPfuY shows the three robots playing some of the equilibrium rhythms.
{"title":"Musical robot swarms, timing, and equilibria","authors":"M. Krzyżaniak","doi":"10.1080/09298215.2021.1910313","DOIUrl":"https://doi.org/10.1080/09298215.2021.1910313","url":null,"abstract":"This paper studies swarms of autonomous musical robots and its contributions are twofold. First, I introduce Dr. Squiggles, a simple rhythmic musical robot, which serves as a general platform for studying human-robot and robot-robot musical interaction. Secondly, I use three Dr. Squiggles robots to study what happens when musical robots listen to, learn from, and respond to one another while improvising music together. This paper has a supplementary video at https://www.youtube.com/watch?v=yN711HXPfuY which shows the three robots playing some of the equilibrium rhythms.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"279 - 297"},"PeriodicalIF":1.1,"publicationDate":"2021-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2021.1910313","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43388498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Finnish turn: Digital and synthesiser musical instruments
D. Ihde
Pub Date: 2021-03-15 | DOI: 10.1080/09298215.2021.1906709
This paper follows a musically experimental trajectory from non-mediated musical sound through many centuries of musical innovation, from the simplest forms of resonation to today's synthesised musics – electronic, digital and synthesiser musics – with side looks at how changes in musical technologies shape player-instrument and listener-music relations. I then look briefly at the modern electric amplification of ‘electric’ instruments and the much ‘louder’ musics it enables, with their equally radical changes in audience-performance situations. Finally, I turn to electronic variants, which yet again drastically change the musical gestalt of player-instrument and listener-music relations.
{"title":"A Finnish turn: Digital and synthesiser musical instruments","authors":"D. Ihde","doi":"10.1080/09298215.2021.1906709","DOIUrl":"https://doi.org/10.1080/09298215.2021.1906709","url":null,"abstract":"This paper will follow a musically experimental trajectory from non-mediated musical sound through many centuries of musical innovation from the simplest forms of resonation to today’s synthesised musics in electronic – digital and synthesiser musics – with side looks at how changes in musical technologies play roles in the player-instrument and listener-music relations. I shall then look briefly at the modern, electric amplification of ‘electric’ instruments and much ‘louder’ musics with their equally radical changes in audience-performance situations. Finally, then, I will turn to electronic variants, which yet again drastically change the musical gestalt of player-instrument and listener-music relations.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"165 - 174"},"PeriodicalIF":1.1,"publicationDate":"2021-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2021.1906709","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42934064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ever-shifting roles in building, composing and performing with digital musical instruments
Koray Tahiroglu
Pub Date: 2021-03-15 | DOI: 10.1080/09298215.2021.1900275
It is widely accepted that computational technologies shape the relationships of musicians, instrument builders and composers with music, affecting various socio-cultural realities in music. In this article, I discuss the ways in which music-making still emerges as a social construct, even when it arises from mutual cooperation between human musicians and AI-powered autonomous instruments. I argue that building, making, and performing with a digital musical instrument has undergone a gradual socio-technological change that has affected art, science, technology, culture and communities in general. I support my investigation through the current performance and composition practice of the autonomous AI-terity musical instrument.
{"title":"Ever-shifting roles in building, composing and performing with digital musical instruments","authors":"Koray Tahiroglu","doi":"10.1080/09298215.2021.1900275","DOIUrl":"https://doi.org/10.1080/09298215.2021.1900275","url":null,"abstract":"It is widely accepted that computational technologies shape the relationship of musicians, instrument builders and composers with music, affecting various socio-cultural realisms in music. In this article, I discuss in what ways music-making still emerges as a social construct, even as a result of the mutual cooperation with human musicians and AI-powered autonomous instruments. I argue that building, making, and performing with a digital musical instrument has undergone a gradual socio-technological change that has affected art, science, technology, culture and communities in general. I support my investigation through the current performance and composition practice of the autonomous AI-terity musical instrument.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"155 - 164"},"PeriodicalIF":1.1,"publicationDate":"2021-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2021.1900275","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45252462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Curating experience: Composition as cultural technology – a conversation
Claudia Molitor, Thor Magnusson
Pub Date: 2021-03-15 | DOI: 10.1080/09298215.2021.1898646
This conversation between Thor Magnusson and Claudia Molitor introduces the idea of composition as cultural technology, where compositions are understood as systems that create spaces within which ‘things’ can occur and be explored. In this conception of composition, the composer becomes the curator of an experience for an audience, shifting the focus of the work onto the audience’s encounter with it. Discussing some of Molitor’s pieces from the past decade, the conversation explores how these ideas can manifest in compositional practice.
{"title":"Curating experience: Composition as cultural technology – a conversation","authors":"Claudia Molitor, Thor Magnusson","doi":"10.1080/09298215.2021.1898646","DOIUrl":"https://doi.org/10.1080/09298215.2021.1898646","url":null,"abstract":"This conversation between Thor Magnusson and Claudia Molitor introduces the idea of composition as cultural technology, where compositions are understood as systems that create spaces within which ‘things’ can occur and can be explored. In this conception of composition, the composer becomes the curator of an experience for an audience, shifting the focus of the work on the encounter of the audience. Talking about some of Molitor’s pieces from the past decade, the discussion explores how these ideas can manifest in compositional practice.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"184 - 189"},"PeriodicalIF":1.1,"publicationDate":"2021-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2021.1898646","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44397050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introduction to the special issue on socio-cultural role of technology in digital musical instruments
Koray Tahiroglu, Thor Magnusson
Pub Date: 2021-03-15 | DOI: 10.1080/09298215.2021.1907421
ABSTRACT This special issue, arising from a symposium held in Helsinki in 2019, presents contributions from a diverse group of practitioners representing a broad range of approaches to making, thinking and writing about digital musical instruments. The authors consider the socio-cultural role of technology in current and emerging digital music practices, addressing changing social roles alongside historical and critical reflections. This introduction explains the context and motivation for the issue and summarises the contribution of each of the eight articles. Together they provide what we believe is a unique contribution to research on new interfaces for musical expression and related areas.
{"title":"Introduction to the special issue on socio-cultural role of technology in digital musical instruments","authors":"Koray Tahiroglu, Thor Magnusson","doi":"10.1080/09298215.2021.1907421","DOIUrl":"https://doi.org/10.1080/09298215.2021.1907421","url":null,"abstract":"ABSTRACT This special issue, arising from a symposium in Helsinki in 2019, presents contributions from a diverse group of practitioners, representing a broad range of approaches in the making, thinking and writing about digital musical instruments. The authors consider the socio-cultural role of technology in current and emerging digital music practices with changing social roles, historical and critical reflections. This introduction explains the context and motivation for the issue and summarises the contribution of each of the eight articles. Together they provide what we believe is a unique contribution to the research of new interfaces for musical expression and related areas.","PeriodicalId":16553,"journal":{"name":"Journal of New Music Research","volume":"50 1","pages":"117 - 120"},"PeriodicalIF":1.1,"publicationDate":"2021-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/09298215.2021.1907421","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42319774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}