Pub Date: 2021-04-01 | DOI: 10.1525/MP.2021.38.4.406
Ziyong Lin, A. Werner, U. Lindenberger, A. Brandmaier, Elisabeth Wenger
We introduce the Berlin Gehoerbildung Scale (BGS), a multidimensional assessment of music expertise in amateur musicians and music professionals. The BGS is informed by music theory and uses a variety of testing methods in the ear-training tradition, with items covering four different dimensions of music expertise: (1) intervals and scales, (2) dictation, (3) chords and cadences, and (4) complex listening. We validated the test in a sample of amateur musicians, aspiring professional musicians, and students attending a highly competitive music conservatory (n = 59). Using structural equation modeling, we compared two factor models: a unidimensional model postulating a single factor of music expertise; and a hierarchical model, according to which four first-order subscale factors load on a second-order factor of general music expertise. The hierarchical model showed better fit to the data than the unidimensional model, indicating that the four subscales capture reliable variance above and beyond the general factor of music expertise. There were reliable group differences on both the second-order general factor and the four subscales, with music students outperforming aspiring professionals and amateur musicians. We conclude that the BGS is an adequate measurement instrument for assessing individual differences in music expertise, especially at high levels of expertise.
Title: Assessing Music Expertise (Music Perception, 38(4), 406-421)
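The hierarchical model described in the abstract (four first-order subscale factors loading on a second-order general factor) can be illustrated with simulated data. This is a minimal numpy sketch with invented loadings and a larger-than-reported sample for stability; it is not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # large simulated sample for a stable illustration (the study had n = 59)

# Invented loadings, for illustration only.
g = rng.normal(size=n)                                        # second-order general factor
subscales = 0.8 * g[:, None] + 0.6 * rng.normal(size=(n, 4))  # 4 first-order factors

# Each first-order factor drives three observed items.
items = 0.7 * np.repeat(subscales, 3, axis=1) + 0.5 * rng.normal(size=(n, 12))

# The hierarchical structure predicts that items sharing a subscale correlate
# more strongly than items from different subscales -- a pattern a single
# general factor cannot reproduce, which is why the hierarchical model fits better.
r = np.corrcoef(items, rowvar=False)
pairs = [(i, j) for i in range(12) for j in range(12) if i < j]
within = np.mean([r[i, j] for i, j in pairs if i // 3 == j // 3])
across = np.mean([r[i, j] for i, j in pairs if i // 3 != j // 3])
```

With these loadings, within-subscale correlations come out clearly higher than cross-subscale ones, which is the reliable subscale variance the abstract refers to.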
Among the three primary tonal functions described in modern theory textbooks, the pre-dominant has the highest number of representative chords. We posit that one unifying feature of the pre-dominant function is its attraction to V, and the experiment reported here investigates factors that may contribute to this perception. Participants were junior/senior music majors, freshman music majors, and people from the general population recruited on Prolific.co. In each trial, four Shepard-tone sounds in the key of C were presented: 1) the tonic note, 2) one of 31 different chords, 3) the dominant triad, and 4) the tonic note. Participants rated the strength of attraction between the second and third chords. Across all individuals, diatonic and chromatic pre-dominant chords were rated significantly higher than non-pre-dominant chords and bridge chords. Further, music theory training moderated this relationship, with individuals with more theory training rating pre-dominant chords as being more attracted to the dominant. A final data analysis modeled the role of empirical features of the chords preceding the V chord, finding that chords with roots moving to V down by fifth, chords with less acoustical roughness, and chords with more semitones adjacent to V were all significant predictors of attraction ratings.
Title: The Perceptual Attraction of Pre-Dominant Chords, by J. Brown, Daphne Tan, D. Baker (Music Perception; Pub Date: 2021-03-19; DOI: 10.1525/mp.2021.39.1.21)
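The final analysis described above regresses attraction ratings on chord features. As a sketch of that kind of linear model (with entirely invented feature values and ratings, not the study's data), ordinary least squares recovers the reported directions: positive for descending-fifth root motion and semitone adjacency, negative for roughness:

```python
import numpy as np

rng = np.random.default_rng(2)
n_chords = 31  # the experiment used 31 different second chords

# Invented features per chord (hypothetical, not the study's measures):
#   fifth: root moves down a fifth to V (0/1)
#   rough: acoustical roughness (arbitrary units)
#   adj:   number of chord tones a semitone away from tones of V
fifth = rng.integers(0, 2, size=n_chords).astype(float)
rough = rng.uniform(0, 1, size=n_chords)
adj = rng.integers(0, 3, size=n_chords).astype(float)

# Invented "attraction ratings" generated with the signs the article reports.
ratings = 3.0 + 1.5 * fifth - 2.0 * rough + 0.8 * adj + 0.3 * rng.normal(size=n_chords)

X = np.column_stack([np.ones(n_chords), fifth, rough, adj])
beta, *_ = np.linalg.lstsq(X, ratings, rcond=None)
# beta[1] (fifth) and beta[3] (adjacency) come out positive, beta[2] (roughness) negative.
```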
Pub Date: 2021-03-17 | DOI: 10.1525/mp.2022.39.5.443
N. Maimon, D. Lamy, Z. Eitan
Western tonality provides a hierarchy among melodic scale-degrees, from the closural tonic triad notes to “out-of-key” chromatic notes. That hierarchy has been occasionally linked to emotion, with more closural degrees associated with more positive valence. However, systematic investigations of that association are lacking. Here, we examined the associations between tonality and emotion in three experiments, in musicians and in nonmusicians. We used an explicit task, in which participants matched probe tones following key-establishing sequences in major and minor keys to facial expressions ranging from sad to happy, and an implicit speeded task, adapting the Implicit Association Test. More closural scale-degrees were associated with more positive valence in all experiments, for both musicians and nonmusicians, with larger effects for major keys. The pattern of results significantly differed from that observed in a comparable goodness-of-fit task, suggesting that perceived scale-degree valence is not reducible to tonal fit. The comparison between the results from the explicit and implicit measures suggests that tonal valence may rely on two distinct mechanisms, one mediated by conceptual musical knowledge and conscious decisional processes, and the other largely modulated by nonconceptual, involuntary processes. The experimental paradigms introduced here may help map additional connotative meanings, both emotional and cross-modal, embedded in tonal structure, thus suggesting how “extra-musical” meanings are conveyed through tonal hierarchy.
Title: Do Picardy Thirds Smile? Tonal Hierarchy and Tonal Valence (Music Perception)
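The implicit speeded task above adapts the Implicit Association Test, which is conventionally scored with a D measure: the mean latency difference between incongruent and congruent blocks divided by the pooled standard deviation of all trials. A small sketch of that standard scoring convention; the specific latencies and block labels below are invented, not taken from the article:

```python
import statistics

def iat_d_score(congruent_rts, incongruent_rts):
    """Greenwald-style D score: mean latency difference between blocks,
    scaled by the pooled SD of all trials. Positive values mean the
    incongruent pairing was responded to more slowly."""
    all_rts = list(congruent_rts) + list(incongruent_rts)
    pooled_sd = statistics.stdev(all_rts)
    return (statistics.mean(incongruent_rts)
            - statistics.mean(congruent_rts)) / pooled_sd

# Hypothetical response times (ms): tonic-triad tones paired with happy
# faces (congruent) vs. with sad faces (incongruent).
congruent = [612, 580, 655, 601, 590, 622]
incongruent = [710, 695, 760, 688, 702, 731]
d = iat_d_score(congruent, incongruent)  # positive: the congruent pairing is faster
```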
Pub Date: 2021-02-01 | DOI: 10.1525/MP.2021.38.3.335
E. Smit, A. Milne
In the article “Consonance preferences within an unconventional tuning system,” Friedman and colleagues (2021) examine consonance ratings of a large range of dyads and triads from the Bohlen-Pierce chromatic just (BPCJ) scale. The study is designed as a replication of a recent paper by Bowling, Purves, and Gill (2018), which proposes that perception of consonance in dyads, triads, and tetrads can be predicted by their harmonic similarity to human vocalisations. In this commentary, we would like to correct some interpretations in Friedman et al.’s (2021) discussion of our paper (Smit, Milne, Dean, & Weidemann, 2019), as well as express some concerns regarding the statistical methods used. We also propose a stronger emphasis on the use of what Friedman et al. term composite models, as a range of recent evidence strongly suggests that no single acoustic measure can fully predict the complex experience of consonance.
Title: The Need for Composite Models of Music Perception (Music Perception)
Pub Date: 2021-02-01 | DOI: 10.1525/mp.2021.38.3.331
Daniel L Bowling
Evidence supporting a link between harmonicity and the attractiveness of simultaneous tone combinations has emerged from an experiment designed to mitigate effects of musical enculturation. I examine the analysis undertaken to produce this evidence and clarify its relation to an account of tonal aesthetics based on the biology of auditory-vocal communication.
Title: Harmonicity and Roughness in the Biology of Tonal Aesthetics (Music Perception, 38(3), 331-334; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8460127/pdf/nihms-1668358.pdf)
Pub Date: 2021-02-01 | DOI: 10.1525/MP.2021.38.3.340
R. Friedman, Douglas A. Kowalewski, Dominique Vuvan, W. Neill
The origins of tonal consonance—the tendency to perceive some simultaneously sounded combinations of musical tones as more pleasant than others—are arguably among the most fundamental questions in music perception. For more than a century, the issue has been the subject of vigorous debate, undoubtedly fueled by the formidable complexities involved in investigating music-induced affective qualia that are not directly observable and often ineffable. The challenge of drawing definitive conclusions in this area of inquiry is well exemplified by the markedly divergent, yet equally thoughtful, responses offered in these commentaries. According to Bowling, our findings are an important source of converging evidence for his Vocal Similarity Hypothesis (VSH), the notion that consonance derives from an evolved preference for harmonic vocal sounds (Bowling, Purves, & Gill, 2018). However, he suggests that our interpretation of the results may cast a less favorable light on the VSH than is warranted. For example, he is skeptical of our contention that spectral interference (SI) accounts for greater variance in consonance judgments than harmonicity, arguing that the high correlation between these predictors “present[s] a problem for their separation via regression.” Yet, upon examination, the correlations between the harmonicity and SI measures that we used in our regression analyses were only moderate at best for our unconventional chord stimuli (-.54). Moreover, a Variance Inflation Factor analysis (Chatterjee & Price, 2012) for all four relevant regressions yields values under 1.26, close to their lower bound. 
Our conclusion regarding the relative strength of the impact of SI on consonance ratings gains further credence from the work of Harrison and Pearce (2020), who reported analogous findings based on a reanalysis of four different behavioral datasets using conventional chords. Nevertheless, we agree with Bowling that consonance researchers should be wary of multicollinearity when comparing the predictive utility of different musical features, as certain harmonicity or SI metrics may indeed share substantial variance (see e.g., Bowling, this issue, Figure 2). Whereas Bowling suggests that our analysis and study design may have sold the VSH short by underweighting the contribution of harmonicity to consonance, both Smit and Milne as well as Harrison argue the opposite, proposing that we may have oversold the extent to which our findings support the VSH. Indeed, Harrison argues that our results leave open at least two alternative hypotheses: First, harmonicity may be preferred, not due to an evolved preference for voice-like sounds, but because harmonicity facilitates the identification of distinct auditory sources in the environment. Second, a preference for harmonic sounds may have evolved not because it reinforced attention
Title: Response to Invited Commentaries on “Consonance Preferences Within an Unconventional Tuning System” (Music Perception)
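The Variance Inflation Factor analysis the response invokes follows the standard definition VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the remaining predictors. A self-contained numpy sketch using simulated predictors correlated at roughly the -.54 the authors report (these are not their actual harmonicity and spectral-interference measures):

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of predictor matrix X:
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining predictors (with an intercept)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

# Two simulated predictors with population correlation about -.54.
rng = np.random.default_rng(1)
h = rng.normal(size=500)
si = -0.54 * h + np.sqrt(1 - 0.54**2) * rng.normal(size=500)
vifs = vif(np.column_stack([h, si]))
# In the two-predictor case, VIF = 1 / (1 - r^2); with |r| = .54 that is
# about 1.41, still close to the lower bound of 1, i.e., little inflation.
```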
Pub Date: 2021-02-01 | DOI: 10.1525/MP.2021.38.3.245
M. Broughton, Jessie Dimmick, R. Dean
Effective audience engagement with musical performance involves social, cognitive and affective elements. We investigate the influence of observers’ musical expertise and instrumental motor expertise on their affective and cognitive responses to complex and unfamiliar classical piano performances of works by Scriabin and Hanson presented in audio and audio-visual formats. Observers gave their felt affect (arousal and valence) and their action understanding responses continuously while observing the performances. Liking and familiarity were rated after each excerpt. As hypothesized: visual information enhanced observers’ action understanding and liking ratings; observers with music training rated their action understanding, liking and familiarity higher than did nonmusicians; observers’ felt affect did not vary according to their musical or motor expertise. Contrary to our hypotheses: visual information had only a slight effect on observers’ arousal felt affect responses and none on valence; musicians’ specific instrumental motor expertise did not influence action understanding responses. We also observed a significant negative relationship between action understanding and felt affect responses. Ideas of empathy in musical interactions motivated the research; the empathy framework in relation to musical performance is discussed. Nonmusician audiences might be sensitized to challenging musical performances through multimodal strategies to build the performer-observer connection and increase understanding of performance.
Title: Affective and Cognitive Responses to Musical Performances of Early 20th Century Classical Solo Piano Compositions (Music Perception, 38, 245-266)
Pub Date: 2021-02-01 | DOI: 10.1525/MP.2021.38.3.313
R. Friedman, Douglas A. Kowalewski, Dominique Vuvan, W. Neill
Recently, Bowling, Purves, and Gill (2018a) found that individuals perceive chords with spectra resembling a harmonic series as more consonant. This is consistent with their vocal similarity hypothesis (VSH), the notion that the experience of consonance is based on an evolved preference for sounds that resemble human vocalizations. To rule out confounding between harmonicity and familiarity, we extended Bowling et al.’s (2018a) procedure to chords from the unconventional Bohlen-Pierce chromatic just (BPCJ) scale. We also assessed whether the association between harmonicity and consonance was moderated by timbre by presenting chords generated from either piano or clarinet samples. Results failed to straightforwardly replicate this association; however, evidence of a positive correlation between harmonicity and consonance did emerge across timbres following post hoc exclusion of chords containing intervals that were particularly similar to conventional equal-tempered dyads. Supplementary regression analyses using a more comprehensive measure of harmonicity confirmed its positive association with consonance ratings of BPCJ chords, yet also showed that spectral interference independently contributed to these ratings.
Title: Consonance Preferences Within an Unconventional Tuning System (Music Perception)
Pub Date: 2021-02-01 | DOI: 10.1525/MP.2021.38.3.337
Peter M. C. Harrison
I discuss three fundamental questions underpinning the study of consonance: 1) What features cause a particular chord to be perceived as consonant? 2) How did humans evolve the ability to perceive these features? 3) Why did humans evolve to attribute particular aesthetic valences to these features (if they did at all)? The first question has been addressed by several recent articles, including Friedman, Kowalewski, Vuvan, and Neill (2021), with the common conclusion that consonance in Western listeners is driven by multiple features such as harmonicity, interference between partials, and familiarity. On this basis, it seems relatively straightforward to answer the second question: each of these consonance features seems to be grounded in fundamental aspects of human auditory perception, such as auditory scene analysis and auditory long-term memory. However, the third question is harder to resolve. I describe several potential answers, and argue that the present evidence is insufficient to distinguish between them, despite what has been claimed in the literature. I conclude by discussing what kinds of future studies might be able to shed light on this problem.
Title: Three Questions Concerning Consonance Perception (Music Perception)
Pub Date: 2021-02-01 | DOI: 10.1525/MP.2021.38.3.267
M. Cabon, Anais Le Fur-Bonnabesse, S. Genestet, B. Quinio, L. Miséry, A. Woda, C. Bodéré
Passive music listening has been shown to soothe pain in several clinical and experimental studies. This phenomenon, known as music-induced analgesia, may be explained in part by the modulation of pain signals in response to the stimulation of brain and brainstem centers. We hypothesized that music-induced analgesia involves inhibitory descending pain systems. We assessed pain-related responses to endogenous pain control mechanisms known to depend on descending pain modulation: peak of first pain (PP), temporal summation (TS), and diffuse noxious inhibitory control (DNIC). Twenty-seven healthy participants (14 men, 13 women) underwent a conditioned pain modulation paradigm during a 20-minute relaxing music session and a silence condition. Pain was measured continuously with a visual analogue scale. Pain ratings were significantly lower during music listening (p < .02). Repeated measures ANOVA indicated significant differences between conditions for PP and TS (p < .05) but not for DNIC. These findings suggest that music listening could strengthen components of the inhibitory descending pain pathways operating at the level of the dorsal spinal cord.
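The two-condition comparison described above can be sketched with a toy computation. The snippet below fabricates paired VAS ratings for 27 simulated participants (the analgesic shift is an arbitrary assumption, not the study's data) and runs the two-condition case of a repeated-measures ANOVA, which reduces to a paired t-test on per-participant differences (F = t**2).

```python
import math
import random

random.seed(1)
n = 27  # participants, matching the study's sample size

# Simulated 0-10 VAS pain ratings under silence and music; the analgesic
# shift of ~0.8 points is purely illustrative, not taken from the study.
silence = [min(10, max(0, random.gauss(6.0, 1.2))) for _ in range(n)]
music = [min(10, max(0, s - random.gauss(0.8, 0.5))) for s in silence]

# Paired t-test on per-participant differences
diffs = [s - m for s, m in zip(silence, music)]
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
t_stat = mean_d / math.sqrt(var_d / n)
```

With 26 degrees of freedom, |t| above roughly 2.06 corresponds to p < .05 (two-tailed).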
{"title":"Impact of Music on First Pain and Temporal Summation of Second Pain","authors":"M. Cabon, Anais Le Fur-Bonnabesse, S. Genestet, B. Quinio, L. Miséry, A. Woda, C. Bodéré","doi":"10.1525/MP.2021.38.3.267","DOIUrl":"https://doi.org/10.1525/MP.2021.38.3.267","url":null,"abstract":"Passive music listening has shown its capacity to soothe pain in several clinical and experimental studies. This phenomenon—known as music-induced analgesia—could partly be explained by the modulation of pain signals in response to the stimulation of brain and brainstem centers. We hypothesized that music-induced analgesia may involve inhibitory descending pain systems. We assessed pain-related responses to endogenous pain control mechanisms known to depend on descending pain modulation: peak of first pain (PP), temporal summation (TS), and diffuse noxious inhibitory control (DNIC). Twenty-seven healthy participants (14 men, 13 women) were exposed to a conditioned pain modulation paradigm during a 20-minute relaxing music session and a silence condition. Pain was continually measured with a visual analogue scale. Pain ratings were significantly lower with music listening (p < .02). Repeated measures ANOVA indicated significant differences between conditions within PP and TS (p < .05) but not in DNIC. Those findings suggested that music listening could strengthen components of the inhibitory descending pain pathways operating at the dorsal spinal cord level.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"38 1","pages":"267-281"},"PeriodicalIF":2.3,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41537186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}