Pub Date: 2021-06-01 | DOI: 10.1525/MP.2021.38.5.473
Manda Fischer, Kit Soden, E. Thoret, Marcel R. Montrey, S. McAdams
Timbre perception and auditory grouping principles can provide a theoretical basis for aspects of orchestration. In Experiment 1, 36 excerpts contained two streams and 12 contained one stream as determined by music analysts. Streams—the perceptual connecting of successive events—comprised either single instruments or blended combinations of instruments from the same or different families. Musicians and nonmusicians rated the degree of segregation perceived in the excerpts. Heterogeneous instrument combinations between streams yielded greater segregation than did homogeneous ones. Experiment 2 presented the individual streams from each two-stream excerpt. Blend ratings on isolated individual streams from the two-stream excerpts did not predict global segregation between streams. In Experiment 3, Experiment 1 excerpts were reorchestrated with only string instruments to determine the relative contribution of timbre to segregation beyond other musical cues. Decreasing timbral differences reduced segregation ratings. Acoustic and score-based descriptors were extracted from the recordings and scores, respectively, to statistically quantify the factors involved in these effects. Instrument family, part crossing, consonance, spectral factors related to timbre, and onset synchrony all played a role, providing evidence of how timbral differences enhance segregation in orchestral music.
Title: Instrument Timbre Enhances Perceptual Segregation in Orchestral Music (Music Perception, vol. 38, pp. 473–498)
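The "spectral factors related to timbre" in the abstract above are typically summarized by acoustic descriptors such as the spectral centroid, the amplitude-weighted mean frequency of a sound's spectrum, which tracks perceived brightness. A minimal sketch of that descriptor on synthetic tones (an illustration of the general technique, not the authors' extraction pipeline):

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Two synthetic "instrument" tones at the same pitch but different brightness:
sr = 44100
t = np.arange(sr) / sr
dull = np.sin(2 * np.pi * 220 * t)                  # fundamental only
bright = dull + 0.8 * np.sin(2 * np.pi * 1760 * t)  # strong upper partial

print(spectral_centroid(dull, sr))    # ~ 220 Hz
print(spectral_centroid(bright, sr))  # ~ 904 Hz (brighter timbre)
```

Large centroid differences between two concurrent streams would be one way such spectral factors could support segregation.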
Pub Date: 2021-06-01 | DOI: 10.1525/MP.2021.38.5.435
Kelly Jakubowski, Amy M. Belfi, T. Eerola
Music can be a potent cue for autobiographical memories in both everyday and clinical settings. Understanding the extent to which music may have privileged access to aspects of our personal histories requires critical comparisons to other types of memories and exploration of how music-evoked autobiographical memories (MEAMs) vary across individuals. We compared the retrieval characteristics, content, and emotions of MEAMs to television-evoked autobiographical memories (TEAMs) in an online sample of 657 participants who were representative of the British adult population on age, gender, income, and education. Each participant reported details of a recent MEAM and a recent TEAM experience. MEAMs exhibited significantly greater episodic reliving, personal significance, and social content than TEAMs, and elicited more positive and intense emotions. The majority of these differences between MEAMs and TEAMs persisted in an analysis of a subset of responses in which the music and television cues were matched on familiarity. Age and gender effects were smaller, and consistent across both MEAMs and TEAMs. These results indicate phenomenological differences in naturally occurring memories cued by music as compared to television that are maintained across adulthood. Findings are discussed in the context of theoretical accounts of autobiographical memory, functions of music, and healthy aging.
Title: Phenomenological Differences in Music- and Television-Evoked Autobiographical Memories (Music Perception)
Pub Date: 2021-06-01 | DOI: 10.1525/MP.2021.38.5.425
Dominique Vuvan, Bryn Hughes
Krumhansl and Kessler’s (1982) pioneering experiments on tonal hierarchies in Western music have long been considered the gold standard for researchers interested in the mental representation of musical pitch structure. The current experiment used the probe tone technique to investigate the tonal hierarchy in classical and rock music. As predicted, the observed profiles for these two styles were structurally similar, reflecting a shared underlying Western tonal structure. Most interestingly, however, the rock profile was significantly less differentiated than the classical profile, reflecting theoretical work that describes pitch organization in rock music as more permissive and less hierarchical than in classical music. This line of research contradicts the idea that music from the common-practice era is representative of all Western musics, and challenges music cognition researchers to explore style-appropriate stimuli and models of pitch structure for their experiments.
Title: Probe Tone Paradigm Reveals Less Differentiated Tonal Hierarchy in Rock Music (Music Perception, vol. 38, pp. 425–434)
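The probe-tone technique quantifies a tonal hierarchy as a 12-element rating profile over the chromatic scale, so "less differentiated" can be read as a flatter profile (lower spread) with the same shape. A sketch under that reading, using the widely cited Krumhansl & Kessler (1982) C-major values and a hypothetical compressed profile standing in for the rock result:

```python
import numpy as np

# Krumhansl & Kessler (1982) C-major probe-tone profile: mean goodness-of-fit
# ratings for the 12 chromatic probe tones following a C-major context.
kk_major = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                     2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

# Hypothetical "less differentiated" profile: same shape, compressed toward
# the mean — illustrating a flatter hierarchy, not the study's actual data.
flat_rock = kk_major.mean() + 0.5 * (kk_major - kk_major.mean())

r = np.corrcoef(kk_major, flat_rock)[0, 1]
print(f"structural similarity r = {r:.2f}")           # shape is identical
print(f"differentiation (SD): classical {kk_major.std():.2f}, "
      f"rock {flat_rock.std():.2f}")                  # rock spread is halved
```

The two profiles correlate perfectly (same underlying tonal structure) while differing in spread, which is exactly the dissociation the abstract describes.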
Pub Date: 2021-06-01 | DOI: 10.1525/MP.2021.38.5.509
J. Devin McAuley, P. Wong, Lucas Bellaiche, E. Margulis
Although people across multiple cultures have been shown to experience music narratively, it has proven difficult to disentangle whether narrative dimensions of music derive from learned extramusical associations within a culture or from less experience-dependent elements of the music, such as musical contrast. Toward this end, two experiments investigated factors contributing to listeners’ narrative engagement with music, comparing the narrative experiences of Western and Chinese instrumental music for listeners in two suburban locations in the United States with those of listeners living in a remote rural village in China with different patterns of musical exposure. Supporting an enculturation perspective where learned extramusical associations (i.e., Topicality) play an important role in narrative perceptions of music, results from the first experiment show that for Western listeners, greater Topicality, rather than greater Contrast, increases narrative engagement, as long as listeners have sufficient exposure to its patterns of use within a culture. Strengthening this interpretation, results from the second experiment, which directly manipulated Topicality and Contrast, show that reducing an excerpt’s Topicality, but not its Contrast, reduces listeners’ narrative engagement.
Title: What Drives Narrative Engagement With Music? (Music Perception)
Pub Date: 2021-06-01 | DOI: 10.1525/MP.2021.38.5.499
Tamara Rathcke, Simone Falk, S. D. Bella
Listeners usually have no difficulties telling the difference between speech and song. Yet when a spoken phrase is repeated several times, they often report a perceptual transformation that turns speech into song. There is a great deal of variability in the perception of the speech-to-song illusion (STS). It may result partly from linguistic properties of spoken phrases and partly from individual processing differences among listeners exposed to STS. To date, existing evidence is insufficient to predict who is most likely to experience the transformation, and which sentences may be more conducive to the transformation once spoken repeatedly. The present study investigates these questions with French and English listeners, testing the hypothesis that the transformation is achieved by means of functional re-evaluation of phrasal prosody during repetition. Such prosodic re-analysis places demands on the phonological structure of sentences and the language proficiency of listeners. Two experiments show that STS is facilitated in high-sonority sentences and in listeners’ non-native languages, and support the hypothesis that STS involves a switch between musical and linguistic perception modes.
Title: Music to Your Ears (Music Perception)
Pub Date: 2021-06-01 | DOI: 10.1525/MP.2021.38.5.456
Callula Killingly, Philippe Lacherez, R. Meuter
Music that gets “stuck” in the head is commonly conceptualized as an intrusive “thought”; however, we argue that this experience is better characterized as automatic mental singing without an accompanying sense of agency. In two experiments, a dual-task paradigm was employed, in which participants undertook a phonological task once while hearing music, and then again in silence following its presentation. We predicted that the music would be maintained in working memory, interfering with the task. Experiment 1 (N = 30) used songs predicted to be more or less catchy; half of the sample heard truncated versions. Performance was indeed poorer following catchier songs, particularly if the songs were unfinished. Moreover, the effect was stronger for songs rated higher in terms of the desire to sing along. Experiment 2 (N = 50) replicated the effect using songs with which the participants felt compelled to sing along. Additionally, results from a lexical decision task indicated that many participants’ keystrokes synchronized with the tempo of the song just heard. Together, these findings suggest that an earworm results from an unconscious desire to sing along to a familiar song.
Title: Singing in the Brain (Music Perception)
Pub Date: 2021-04-01 | DOI: 10.1525/MP.2021.38.4.386
Y. Tan, I. Peretz, G. McPherson, Sarah J Wilson
In this study, the robustness of an online tool for objectively assessing singing ability was examined by: (1) determining the internal consistency and test-retest reliability of the tool; (2) comparing the task performance of web-based participants (n = 285) with that of a group (n = 52) completing the tool in a controlled laboratory setting, and then determining the convergent validity between settings; and (3) comparing participants’ task performance with previous research using similar singing tasks and populations. Results indicated that the online singing tool exhibited high internal consistency (Cronbach’s alpha = .92) and moderate-to-high test-retest reliabilities (.65–.80) across an average 4.5-year span. Task performance for web- and laboratory-based participants (n = 82) matched on age, sex, and music training was not significantly different. Moderate-to-large correlations (|r| = .31–.59) were found between self-rated singing ability and the various singing tasks, supporting convergent validity. Finally, task performance of the web-based sample was not significantly different from previously reported findings.
Title: Establishing the Reliability and Validity of Web-based Singing Research (Music Perception, vol. 38, pp. 386–405)
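Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from a participants × tasks score matrix: it compares the sum of per-item variances to the variance of the total score. A minimal sketch with toy data (illustrative numbers, not the study's scores):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_participants, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 6 participants x 4 singing tasks; scores track an underlying
# ability, so the items should cohere and alpha should be high.
scores = np.array([[3, 3, 4, 3],
                   [5, 4, 5, 5],
                   [2, 2, 1, 2],
                   [4, 5, 4, 4],
                   [1, 1, 2, 1],
                   [5, 5, 5, 4]])
print(round(cronbach_alpha(scores), 2))  # ~ 0.97
```

Values near 1 indicate that the tasks measure a common construct; the study's reported .92 sits in that range.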
Pub Date: 2021-04-01 | DOI: 10.1525/MP.2021.38.4.345
A. Turrell, A. Halpern, A. Javadi
Previous brain-related studies on music-evoked emotions have relied on listening to long music segments, which may reduce the precision of correlating emotional cues to specific brain areas. Break routines in electronic dance music (EDM) are emotive but short musical moments containing three passages: breakdown, build-up, and drop. Within build-ups, musical features intensify toward a peak just before the highly anticipated drop passage, which elicits peak-pleasurable emotions when these expectations are fulfilled. The neural correlates of peak-pleasurable emotions (such as excitement) in the short seconds of build-up and drop passages in EDM break routines are therefore good candidates for studying brain correlates of emotion. Thirty-six participants listened to break routines while undergoing continuous EEG. Source reconstruction of EEG epochs for one second of build-up and of drop passages showed that the pre- and post-central gyri and precuneus were more active during build-ups, and the inferior frontal gyrus (IFG) and middle frontal gyrus (MFG) were more active within drop passages. Importantly, IFG and MFG activity correlated with ratings of subjective excitement during drop passages. The results suggest that expectation is important in inducing peak-pleasurable experiences and that brain activity changes within seconds of reported feelings of excitement during EDM break routines.
Title: Wait For It (Music Perception)
Pub Date: 2021-04-01 | DOI: 10.1525/MP.2021.38.4.372
C. Corcoran, K. Frieler
One of the most recognizable features of the jazz phrasing style known as “swing” is the articulation of tactus beat subdivisions into long-short patterns (known as “swing eighths”). The subdivisions are traditionally assumed to form a 2:1 beat-upbeat ratio (BUR); however, several smaller case studies have suggested that the 2:1 BUR is a gross oversimplification. Here we offer a more conclusive approach to the issue, presenting a corpus analysis of 456 jazz solos using the Weimar Jazz Database. Results indicate that most jazz soloists tend to play with only slightly uneven swing eighths (BUR = 1.3:1), while BURs approaching 2:1 and higher are used only occasionally. High BURs are more likely to be used systematically at slow and moderate tempi and in Postbop and Hardbop styles. Overall, the data suggest that a stable 2:1 swing BUR for solos is a conceptual myth, which may be based on various perceptual effects. We suggest that higher BURs are likely saved for specific effect, since higher BURs may maximize entrainment and the sense of groove at the tactus beat level among listeners and performers. Consequently, our results contribute insights relevant to jazz, groove, and microrhythm studies, practical and historical jazz research, and music perception.
Title: Playing It Straight (Music Perception)
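The beat-upbeat ratio (BUR) divides the duration of the on-beat eighth note by the duration of the following off-beat eighth. A sketch computing BURs from a list of onset times (illustrative onsets, not the Weimar Jazz Database format):

```python
def beat_upbeat_ratios(onsets):
    """BURs from successive eighth-note onset times (s), starting on a beat.

    BUR = (downbeat-to-upbeat interval) / (upbeat-to-next-downbeat interval),
    so straight eighths give 1.0 and triplet swing gives 2.0.
    """
    burs = []
    for i in range(0, len(onsets) - 2, 2):
        long_eighth = onsets[i + 1] - onsets[i]
        short_eighth = onsets[i + 2] - onsets[i + 1]
        burs.append(long_eighth / short_eighth)
    return burs

# A 0.5 s beat (120 BPM) played with the slight swing the corpus found
# to be typical (BUR ~= 1.3) vs. traditional triplet swing (BUR = 2.0):
slight = [0.0, 0.283, 0.5, 0.783, 1.0]
triplet = [0.0, 0.3333, 0.5, 0.8333, 1.0]
print([round(b, 2) for b in beat_upbeat_ratios(slight)])   # -> [1.3, 1.3]
print([round(b, 2) for b in beat_upbeat_ratios(triplet)])  # -> [2.0, 2.0]
```

Averaging such per-beat ratios over a solo yields the kind of summary BUR the corpus analysis reports.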
Pub Date: 2021-04-01 | DOI: 10.1525/MP.2021.38.4.360
Andrew V. Frane, M. Monti
Some researchers and study participants have expressed an intuition that novel rhythmic sequences are easier to recall and reproduce if they have a melody, implying that melodicity (the presence of musical pitch variation) fundamentally enhances perception and/or representation of rhythm. But the psychoacoustics literature suggests that pitch variation often impairs perception of temporal information. To examine the effect of melodicity on rhythm reproduction accuracy, we presented simple nine-note auditory rhythms to 100 college students, who attempted to reproduce those rhythms by tapping. Reproductions tended to be more accurate when the presented notes all had the same pitch than when the presented notes had a melody. Nonetheless, a plurality of participants judged that the melodically presented rhythms were easier to remember. We also found that sequences containing a Scotch snap (a sixteenth note at a quarter note beat position followed by a dotted eighth note) were reproduced less accurately than other sequences in general, and less accurately than other sequences containing a dotted eighth note.
Title: Reproduction Accuracy for Short Rhythms Following Melodic or Monotonic Presentation (Music Perception, vol. 38, pp. 360–371)