Children’s Sensitivity to Performance Expression and its Relationship to Children’s Empathy
Cecilia Taher
Pub Date: 2022-09-01 | DOI: 10.1525/mp.2022.40.1.12

Emotional communication is central to music performance expression and to empathy. Research has shown that music activities can enhance empathy in children and that more empathic adults can more accurately recognize and feel performers’ expressive intentions. Nevertheless, little is known about performance expression during childhood and the specific music-related factors affecting empathy development. This paper explores children’s sensitivity to a performer’s expressive or mechanical intentions and its relationship to children’s everyday empathy. Twenty-seven children listened to expressive and mechanical versions of Romantic flute excerpts, with and without accompanying video, and rated their perceived level of the performer’s expression and their enjoyment of the performance. The results indicate that children recognize performers’ intended expression, or the lack thereof, and enjoy expressive performances more than mechanical ones. Children aged 10–12 recognized performance expression better than those aged 8–9, especially in audiovisual conditions. In children with higher cognitive empathy, ratings of performance expression aligned more closely with enjoyment of the performance, and that enjoyment was more concordant with the performer’s expressive intention. The findings support a relationship between music and socio-emotional skills and emphasize the importance of the visual component of music performance for children, an aspect that has received little attention among researchers and educators.

Emotions, Mechanisms, and Individual Differences in Music Listening
Patrik N. Juslin, Laura S. Sakka, G. Barradas, O. Lartillot
Pub Date: 2022-09-01 | DOI: 10.1525/mp.2022.40.1.55

Emotions have been found to play a paramount role in both everyday music experiences and health applications of music, but the applicability of musical emotions depends on: 1) which emotions music can induce, 2) how it induces them, and 3) how individual differences may be explained. These questions were addressed in a listening test in which 44 participants (aged 19–66 years) reported both felt emotions and subjective impressions of emotion mechanisms (Mec Scale) while listening to 72 pieces of music from 12 genres, selected using a stratified random sampling procedure. The results showed that: 1) positive emotions (e.g., happiness) were more prevalent than negative emotions (e.g., anger); 2) rhythmic entrainment was the most frequent, and brain stem reflex the least frequent, of the mechanisms featured in the BRECVEMA theory; 3) felt emotions could be accurately predicted from self-reported mechanisms in multiple regression analyses; 4) self-reported mechanisms predicted felt emotions better than did acoustic features; and 5) individual listeners showed partly different emotion-mechanism links across stimuli, which may help to explain individual differences in emotional responses. Implications for future research and applications of musical emotions are discussed.

Musicians Can Reliably Discriminate Between String Register Locations on the Violoncello
C. Trevor, J. Devaney, David Huron
Pub Date: 2022-09-01 | DOI: 10.1525/mp.2022.40.1.27

Vocal range location is an important vocal affective signal: humans use different areas of their vocal range to communicate emotional intensity, and consequently are good at identifying where someone is speaking within their vocal range. Research on music and emotion has demonstrated that musical expressive behaviors often reflect, or take inspiration from, vocal expressive behaviors. Can musicians use range-related signals on their instrument in the way humans use vocal range-related signals? Might musicians therefore be similarly sensitive to instrumental range location? We present two experiments that investigate musicians’ ability to hear instrumental range location, specifically string register location on the violoncello. Experiment 1 is a behavioral study testing whether musicians can reliably distinguish between higher and lower string register locations. In Experiment 2, we analyze acoustic features that could be affected by string register location. Our results support the conjecture that musicians can reliably discriminate between string register locations, although perhaps only when vibrato is used. They also suggest that higher string register locations have a darker timbre and possibly a wider and faster vibrato. Further research on whether musicians can effectively imitate vocal range location signals with their instruments is warranted.

You Can Tell a Prodigy From a Professional Musician
Viola Pausch, Nina Düvel, R. Kopiez
Pub Date: 2022-09-01 | DOI: 10.1525/mp.2022.40.1.39

According to Feldman (1993), musical prodigies are expected to perform at the same high level as professional adult musicians and should therefore be indistinguishable from adults. This widespread definition was the basis for the study by Comeau et al. (2017), which investigated whether participants could determine if an audio sample was played by a professional pianist or a child prodigy. Our paper is a replication of that study under more controlled conditions. Our main findings partly confirm the earlier ones: comparable to Comeau et al.’s (2017) study (N = 51), the participants in our study (N = 278) were able to discriminate between prodigies and adult professionals by listening to recordings of the same pieces. Overall discrimination performance was slightly above chance (correct responses: 53.7%; sensitivity d′ = 0.20), similar to Comeau et al.’s (2017) results for the identification task with prodigies aged between 11 and 14 years (approximately 54.6% correct responses; sensitivity approximately d′ = 0.13). Contrary to the original study, musicians and pianists in our study did not perform significantly better than other participants. Nevertheless, it is generally possible for listeners to differentiate prodigies from adult performers, although this is a demanding task.

Music Empathizing and Music Systemizing are Associated with Music Listening Reward
G. Kreutz, Anja-Xiaoxing Cui
Pub Date: 2022-09-01 | DOI: 10.1525/mp.2022.40.1.3

Music empathizing (ME) and music systemizing (MS) are constructs representing cognitive styles that address different facets of interest in music listening. Here we investigate whether ME and MS are positively associated with feelings of reward in response to music listening (MR). We conducted an online survey in which n = 202 participants (127 identifying as female; mean age = 26.06 years, SD = 8.66 years) filled out the Music-Empathizing-Music-Systemizing (MEMS) Inventory, the Barcelona Music Reward Questionnaire (BMRQ), further music-related inventories, and ad hoc items representing general interest and investment in music listening. Results from a conditional inference tree analysis confirm our hypothesis: ME, followed by MS, was the most important predictor of MR. In addition, subscribing to music streaming services and investing free time in music listening were associated with higher MR. These results suggest that perceiving reward through music listening is a function of both music empathizing and music systemizing. The nonsignificant contributions of music sophistication and music style preferences speak against a larger role for these factors in MR. Further research is needed to investigate the interrelationships of musical cognitive styles and MR to refine our understanding of the affective value of music listening.

Perceived Motor Synchrony With the Beat is More Strongly Related to Groove Than Measured Synchrony
T. Matthews, Maria A. G. Witek, J. Thibodeau, P. Vuust, V. Penhune
Pub Date: 2022-06-01 | DOI: 10.1525/mp.2022.39.5.423

The sensation of groove can be defined as the pleasurable urge to move to rhythmic music. When moving to the beat of a rhythm, both how well movements are synchronized to the beat and the perceived difficulty of doing so are associated with groove. Interestingly, when tapping to a rhythm, participants tend to overestimate their synchrony, suggesting a potential discrepancy between perceived and measured synchrony, which may affect how each relates to groove. However, these relations, and the influence of syncopation and musicianship on them, have yet to be tested. We therefore asked participants to listen to 50 drum patterns of varying rhythmic complexity and rate their sensation of groove. They then tapped to the beat of the same drum patterns and rated how well they thought their taps synchronized with the beat. Perceived synchrony showed a stronger relation with groove ratings than did measured synchrony and syncopation, and this effect was strongest for medium-complexity rhythms. We interpret these results in the context of meter-based temporal predictions. We propose that the certainty of these predictions determines the weight and number of movements that are perceived as synchronous and thus reflect rewarding prediction confirmations.

Investigating the Shared Meaning of Metaphorical Sound Attributes
Victor Rosi, O. Houix, N. Misdariis, P. Susini
Pub Date: 2022-06-01 | DOI: 10.1525/mp.2022.39.5.468

Music and sound professionals use specific terminology to communicate about timbre. Some key terms do not come from the sound domain and, owing to their metaphorical nature, lack a clear definition. This work aims to reveal the shared meanings of four widely used timbre attributes: bright, warm, round, and rough. We conducted two complementary studies with French sound and music experts (e.g., composers, sound engineers, sound designers, and musicians). First, we held interviews to gather definitions and instrumental sound examples for the four attributes (N = 32). Second, using an online survey, we tested the relevance of, and consensus on, the descriptions most frequently evoked during the interviews (N = 51). The analysis of the rich corpus of verbalizations from the interviews yielded the main description strategies used by the experts, namely acoustic, metaphorical, and source-related. We also derived definitions for the attributes based on descriptions that the survey results showed to be significantly relevant and consensual. Importantly, the definitions rely heavily on metaphorical descriptions. In sum, this study presents an overview of the shared meaning and perception of four metaphorical timbre attributes in the French language.

Real-Time Modulation Perception in Western Classical Music
Brendon Mizener, W. Dowling
Pub Date: 2022-06-01 | DOI: 10.1525/mp.2022.39.5.484

Music listening involves an auditory scene analysis in which the listener makes judgments related to melody, harmony, and consonance or dissonance, all within the context of a key or tonic region. Here we examine whether the process of tracking key region is independent of the process of tracking surface cues, and which surface cues may influence that process. To this end, highly trained, moderately trained, and untrained listeners heard excerpts from string quartets, quintets, and sextets of the Classical and Romantic eras and responded when they heard a modulation. Each excerpt featured a pivot chord modulation, a direct modulation, a common tone modulation, or no modulation. Listeners performed above chance across modulation conditions, and an interaction was observed between modulation type and participant training level. We also present an exploratory PCA suggesting that harmonic language and phrasing are both significant factors guiding modulation perception; both merit further investigation.

Syncopation and Groove in Polyphonic Music
G. Sioros, G. Madison, Diogo Cocharro, A. Danielsen, F. Gouyon
Pub Date: 2022-06-01 | DOI: 10.1525/mp.2022.39.5.503

Music often evokes a regular beat and a pleasurable sensation of wanting to move to that beat, called groove. Recent studies show that a rhythmic pattern’s ability to evoke groove increases at moderate levels of syncopation, essentially when some notes occur earlier than expected. We present two studies that investigate this effect of syncopation in more realistic, polyphonic music examples. First, listeners rated their urge to move to music excerpts transcribed from funk and rock songs, and to algorithmically transformed versions of those excerpts: 1) with the original syncopation removed, and 2) with various levels of pseudorandom syncopation introduced. While the original excerpts were rated higher than the de-syncopated versions, the algorithmic syncopation was not as successful in evoking groove. A moderate level of syncopation thus increases groove, but only for certain syncopation patterns. The second study provides detailed comparisons of the original and transformed rhythmic structures, revealing key differences in: 1) the distribution of syncopation across instruments and metrical positions, 2) the counter-meter figures formed by the syncopating notes, and 3) the number of pickup notes. On this basis, we form four concrete hypotheses about the function of syncopation in groove, to be tested in future experiments.

Measuring Children’s Harmonic Knowledge with Implicit and Explicit Tests
Kathleen A. Corrigall, B. Tillmann, E. Schellenberg
Pub Date: 2022-04-01 | DOI: 10.1525/mp.2022.39.4.361

We used implicit and explicit tasks to measure knowledge of Western harmony in musically trained and untrained Canadian children. Younger children were 6–7 years of age; older children were 10–11. On each trial, participants heard a sequence of five piano chords. The first four chords established a major-key context. The final chord was the standard, expected tonic of the context or one of two deviant endings: the highly unexpected flat supertonic or the moderately unexpected subdominant. In the implicit task, children identified the timbre of the final chord (guitar or piano) as quickly as possible. Response times were faster for the tonic ending than for either deviant ending, but the magnitude of the priming effect was similar for the two deviants, and the effect did not vary as a function of age or music training. In the explicit task, children rated how good each chord sequence sounded. Ratings were highest for sequences with the tonic ending, intermediate for the subdominant, and lowest for the flat supertonic. Moreover, the difference between the tonic and deviant sequences was larger for older children with music training. Thus, the explicit task provided a more nuanced picture of musical knowledge than did the implicit task.