Pub Date: 2023-02-01 | DOI: 10.1525/mp.2023.40.3.193
Sarah C. Creel, Reina Mizrahi, Alicia G. Escobedo, Li Zhao, Gail D. Heyman
Numerous studies suggest that speakers of some tone languages show advantages in musical pitch processing compared to non-tone-language speakers. A recent study in adults (Jasmin et al., 2021) suggests that, in addition to heightened pitch sensitivity, tone language speakers weight pitch information more strongly relative to other auditory cues (amplitude, duration) in both linguistic and nonlinguistic settings than non-tone-language speakers do. The current study asks whether this pitch upweighting is evident in early childhood. To test this, two groups of 3- to 5-year-old children—tone-language speakers (n = 48), a group previously shown to have a perceptual advantage in musical pitch tasks (Creel et al., 2018), and non-tone-language speakers (n = 48)—took part in a musical “word learning” task. Children associated two cartoon characters with two brief musical phrases differing in both musical instrument and pitch contour. If tone language speakers weight pitch more strongly, they should show stronger pitch-based responding on cue-conflict trials than non-tone speakers. In contrast to both adult speakers’ stronger pitch weighting and the pitch perception advantages seen in children and adults, tone-language-speaking children did not show greater weighting of pitch information than non-tone-language-speaking children. This suggests a slow developmental course for pitch reweighting, contrasting with the apparently early emergence of pitch sensitivity.
Title: No Heightened Musical Pitch Weighting For Tone Language Speakers in Early Childhood (Music Perception)
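The cue-conflict logic of the study above can be illustrated with a minimal sketch on invented data: if pitch is weighted more strongly, children should choose the pitch-matching character on more conflict trials. The trial counts, response coding, and the simple Welch test below are assumptions for illustration, not the authors' actual analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical cue-conflict responses: on each of 8 conflict trials a child
# chooses the character matching the pitch contour (1) or the instrument (0).
# Counts of pitch-based choices per child, 48 children per group (invented).
tone_group = rng.binomial(8, 0.52, size=48)     # tone-language speakers
nontone_group = rng.binomial(8, 0.50, size=48)  # non-tone-language speakers

# Per-child proportion of pitch-based responding
p_tone = tone_group / 8
p_nontone = nontone_group / 8

# Welch's t-test on the proportions (a simple stand-in for the mixed-effects
# logistic regression a full analysis would likely use)
t, p = stats.ttest_ind(p_tone, p_nontone, equal_var=False)
print(f"pitch responding: tone={p_tone.mean():.2f}, "
      f"non-tone={p_nontone.mean():.2f}, p={p:.2f}")
```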
Pub Date: 2023-02-01 | DOI: 10.1525/mp.2023.40.3.237
Marjaana Puurtinen, Erkki Huovinen, Anna‐Kaisa Ylitalo
Music-reading research has not yet fully grasped the variety and roles of different cognitive mechanisms that underlie visual processing of music notation; instead, studies have often explored one factor at a time. Based on prior research, we identified three possible cognitive mechanisms regarding visual processing during music reading: symbol comprehension, visual anticipation, and symbol performance demands. We also summed up the eye-movement indicators of each mechanism. We then asked which of the three cognitive mechanisms were needed to explain how note symbols are visually processed during temporally controlled rhythm reading. In our eye-tracking study, twenty-nine participants performed simple rhythm-tapping tasks, in which the relative complexity of consecutive rhythm symbols was systematically varied. Eye-time span (i.e., “looking ahead”) and first-pass fixation time at target symbols were analyzed with linear mixed-effects modeling. As a result, the mechanisms symbol comprehension and visual anticipation found support in our empirical data, whereas evidence for symbol performance demands was more ambiguous. Future studies could continue from here by exploring the interplay of these and other possible mechanisms; in general, we argue that music-reading research should begin to emphasize the systematic creation and testing of cognitive models of eye movements in music reading.
Title: Cognitive Mechanisms in Temporally Controlled Rhythm Reading (Music Perception)
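The linear mixed-effects analysis named in the abstract can be sketched on simulated data: a fixed effect of symbol complexity on fixation time, with a random intercept per participant. All numbers below (participant and trial counts, the 40 ms/step slope, the noise levels) are invented for illustration, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated design: 29 participants, 40 target symbols each; first-pass
# fixation time (ms) grows with symbol complexity (all values invented).
n_subj, n_trials = 29, 40
subj = np.repeat(np.arange(n_subj), n_trials)
complexity = rng.integers(1, 4, size=n_subj * n_trials)  # 1 = simple, 3 = complex
subj_offset = rng.normal(0, 30, size=n_subj)[subj]       # per-participant shift
fixation = (250 + 40 * complexity + subj_offset
            + rng.normal(0, 50, size=n_subj * n_trials))

df = pd.DataFrame({"subj": subj, "complexity": complexity,
                   "fixation": fixation})

# Mixed-effects model: fixed effect of complexity, random intercept per subject
model = smf.mixedlm("fixation ~ complexity", df, groups=df["subj"]).fit()
print(model.params["complexity"])  # fixed-effect slope estimate
```

With this much data the fitted slope should land close to the simulated 40 ms per complexity step.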
Pub Date: 2023-02-01 | DOI: 10.1525/mp.2023.40.3.253
L. Reymore, Jason Noble, C. Saitis, C. Traube, Zachary Wallmark
The main objective of this study is to understand how timbre semantic associations—for example, a sound’s timbre perceived as bright, rough, or hollow—vary with register and pitch height across instruments. In this experiment, 540 online participants rated single, sustained notes from eight Western orchestral instruments (flute, oboe, bass clarinet, trumpet, trombone, violin, cello, and vibraphone) across three registers (low, medium, and high) on 20 semantic scales derived from Reymore and Huron (2020). The 24 two-second stimuli, equalized in loudness, were produced using the Vienna Symphonic Library. Exploratory modeling examined relationships between mean ratings of each semantic dimension and instrument, register, and participant musician identity (“musician” vs. “nonmusician”). For most semantic descriptors, both register and instrument were significant predictors, though the amount of variance explained (marginal R2) differed. Terms with the strongest positive relationships with register included shrill/harsh/noisy, sparkling/brilliant/bright, ringing/long decay, and percussive. Terms with the strongest negative relationships with register included deep/thick/heavy, raspy/grainy/gravelly, hollow, and woody. Post hoc modeling using only pitch height and only register to predict mean semantic ratings suggests that pitch height may explain more variance than register does. Results help clarify the influence of both instrument and relative register (and pitch height) on common timbre semantic associations.
Title: Timbre Semantic Associations Vary Both Between and Within Instruments (Music Perception)
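The post hoc comparison of pitch height versus register as predictors can be sketched as two single-predictor fits compared on R². The stimulus counts, MIDI numbers, and the "brightness tracks pitch" rating model below are invented assumptions, not the study's data or models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-stimulus data: 24 stimuli (8 instruments x 3 registers),
# a rough MIDI pitch per stimulus, and a mean "bright" rating that tracks
# pitch height (all values invented).
register = np.tile([0, 1, 2], 8)                           # low / medium / high
pitch = 36 + 12 * register + rng.integers(-4, 5, size=24)  # MIDI note numbers
rating = 0.05 * pitch + rng.normal(0, 0.1, size=24)

def r_squared(x, y):
    """R^2 of a simple least-squares line y ~ x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

# Pitch height is the finer-grained predictor, so it can capture variance
# that the coarse three-level register factor cannot.
print(f"R^2 register: {r_squared(register, rating):.2f}")
print(f"R^2 pitch:    {r_squared(pitch, rating):.2f}")
```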
Pub Date: 2023-02-01 | DOI: 10.1525/mp.2023.40.3.220
S. J. Philibotte, Stephen Spivack, Nathaniel H. Spilka, I. Passman, P. Wallisch
Music psychology has a long history, but the question of whether brief music excerpts are representative of whole songs has been largely unaddressed. Here, we explore whether preference and familiarity ratings in response to excerpts are predictive of these ratings in response to whole songs. We asked 643 participants to judge 3,120 excerpts of varying durations taken from different sections of 260 songs from a broad range of genres and time periods in terms of preference and familiarity. We found that within the range of durations commonly used in music research, responses to excerpts are strongly predictive of whole song affect and cognition, with only minor effects of duration and location within the song. We concluded that preference and familiarity ratings in response to brief music excerpts are representative of the responses to whole songs. Even the shortest excerpt duration that is commonly used in research yields preference and familiarity ratings that are close to those for whole songs, suggesting that listeners are able to rapidly and reliably ascertain recognition as well as preference and familiarity ratings of whole songs.
Title: The Whole is Not Different From its Parts (Music Perception)
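The claim that excerpt ratings predict whole-song ratings is, at its core, a correlation question, and can be sketched with invented data: the 260-song scale, the 1–7 rating range, and the noise level below are illustrative assumptions, not the study's numbers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical preference ratings (1-7) for 260 songs: a whole-song rating and
# an excerpt-based rating that is a noisy copy of it (all values invented).
whole = rng.uniform(1, 7, size=260)
excerpt = np.clip(whole + rng.normal(0, 0.8, size=260), 1, 7)

# If excerpts are representative, excerpt ratings should strongly predict
# whole-song ratings.
r, p = stats.pearsonr(excerpt, whole)
print(f"r = {r:.2f}, p = {p:.3g}")
```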
Pub Date: 2023-02-01 | DOI: 10.1525/mp.2023.40.3.202
Nathan R. Carr, Kirk N. Olsen, W. Thompson
Two experiments investigated perceptual and emotional consequences of note articulation in music by examining the degree to which participants perceived notes to be separated from each other in a musical phrase. Seven-note piano melodies were synthesized with staccato notes (short decay) or legato notes (gradual/sustained decay). Experiment 1 (n = 64) addressed the impact of articulation on perceived melodic cohesion and perceived emotion expressed through melodies. Participants rated melodic cohesion and perceived emotions conveyed by 32 legato and 32 staccato melodies. Legato melodies were rated more cohesive than staccato melodies and perceived as emotionally calmer and sadder than staccato melodies. Staccato melodies were perceived as having greater tension and energy. Experiment 2 (n = 60) addressed whether articulation is associated with humor and fear in music, and whether the impact of articulation depends on major vs. minor mode. For both modes, legato melodies were scarier than staccato melodies, whereas staccato melodies were more amusing and surprising. The effect of articulation on perceived happiness and sadness was dependent on mode: staccato enhanced perceived happiness for minor melodies; legato enhanced perceived sadness for minor melodies. Findings are discussed in relation to theories of music processing, with implications for music composition, performance, and pedagogy.
Title: The Perceptual and Emotional Consequences of Articulation in Music (Music Perception)
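The staccato/legato manipulation described above—short versus gradual/sustained decay—can be sketched as a decay-envelope difference on a synthesized tone. The sine-plus-exponential-decay model and the specific decay constants below are simplifying assumptions, not the piano synthesis used in the experiments.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def synth_note(freq, dur, decay):
    """Sine tone with an exponential amplitude decay (seconds).

    A short decay constant yields a staccato-like note; a long one,
    a legato-like sustained note.
    """
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t) * np.exp(-t / decay)

staccato = synth_note(440.0, 0.5, decay=0.05)  # fast decay: dies away quickly
legato = synth_note(440.0, 0.5, decay=1.0)     # slow decay: tone sustains

# The legato note retains far more amplitude in the final 0.1 s of the note
print(np.abs(staccato[-4410:]).max(), np.abs(legato[-4410:]).max())
```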
Pub Date: 2022-12-01 | DOI: 10.1525/mp.2022.40.2.112
Christian Weining
While the use of music in everyday life is much studied, the ways of listening to music during live performances have hardly been considered. To fill this gap and provide a starting point for further research, this article accomplishes two goals: First, it presents a literature review of the field of listening modes, encompassing seven categories of modes of listening to music. The categories identified in the literature are: diffuse listening, bodily listening, emotional listening, associative listening, structural listening, reduced listening, and causal listening. Subsequently, a conceptual model of music listening in Western classical concerts is developed on the basis of the identified categories and the Ecological Theory of Perception. In this framework, the Western classical concert is understood as a social-aesthetic event in which the experience of the audience is determined by many factors. It is argued that the frame of the concert (location, setting, staging, light, etc.) influences the listening mode and this in turn influences the aesthetic experience. The hypotheses derived from the review and the model are suitable for empirical investigation and expand the understanding of music listening in concerts and beyond.
Title: Listening Modes in Concerts (Music Perception)
We studied memory for harmony using a melody-and-accompaniment texture and 10 commercially successful songs of Western popular music. The harmony was presented as a timbrally matching block-chord accompaniment to digitally isolated vocals. We used three test chord variants: the target was harmonically identical to the original chord, the lure was schematically plausible but different from the original, and the clash conflicted with both the tonal center and the local pitches of the melody. We used two conditions: in the one-chord condition we presented only the test chord, while in the all-chords condition the test chord was presented with all the chords of the original excerpt. One hundred and twenty participants with varying levels of music training rated on a seven-point scale whether the test chord was the original. We analyzed the results on two dimensions of memory: veridical–schematic and specialized–general. The target chords were rated higher on average than the lures and considerably higher than the clash chords.
Pub Date: 2022-12-01 | DOI: 10.1525/mp.2022.40.2.89
Ivan Jimenez, Tuire Kuusi, J. Ojala
Title: Veridical and Schematic Memory for Harmony in Melody-and-Accompaniment Textures (Music Perception)
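The central target > lure > clash ordering of recognition ratings can be sketched with invented data; the rating distributions below are illustrative assumptions, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 7-point "was this the original chord?" ratings from 120
# participants for the three test-chord variants (all values invented).
target = rng.integers(5, 8, size=120)  # identical to the original chord
lure = rng.integers(3, 7, size=120)    # schematically plausible, but different
clash = rng.integers(1, 4, size=120)   # conflicts with key and melody

for name, ratings in [("target", target), ("lure", lure), ("clash", clash)]:
    print(f"{name}: mean rating = {ratings.mean():.2f}")
```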
Pub Date: 2022-12-01 | DOI: 10.1525/mp.2022.40.2.168
Frank Hentschel, Anja-Xiaoxing Cui
The perception and experience of emotions in response to music listening are the subject of a growing body of empirical research across the humanities and social sciences. While we are now able to investigate music perception in different parts of the world, insights into historical music perception remain elusive, mainly because the direct interrogation of music listeners of the past is no longer possible. Here, we present an approach to the retroactive exploration of historical music perception using semantic network analysis of historical text documents. To illustrate this approach, we analyzed written accounts of 19th-century perception of music that is described as “uncanny” (unheimlich). The high centrality values of “eerie” (gespenstisch) indicate that music termed as such should be highly similar to “uncanny” (unheimlich) music. We thus also analyzed written accounts of 19th-century perception of music described as “eerie” (gespenstisch). Using semantic network analyses on other expressive qualities as well as compositional features, we were then able to highlight in which way “uncanny” (unheimlich) and “eerie” (gespenstisch) music are similar and how they might be distinguished.
Title: Exploring 19th-century Perception of “Uncanny” Music Using a Semantic Network Approach (Music Perception)
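The semantic-network idea—terms as nodes, co-occurrence in documents as edges, centrality as a similarity signal—can be sketched with a toy graph. The word list, co-occurrence counts, and the use of degree centrality (which here ignores edge weights) are invented for illustration, not the paper's corpus or measure.

```python
import networkx as nx

# Toy co-occurrence network of affect terms from hypothetical 19th-century
# reviews; weights = number of documents in which two terms co-occur (invented).
edges = [
    ("unheimlich", "gespenstisch", 12),
    ("unheimlich", "düster", 7),
    ("gespenstisch", "düster", 6),
    ("gespenstisch", "leise", 4),
    ("düster", "ernst", 3),
    ("ernst", "feierlich", 2),
]
G = nx.Graph()
G.add_weighted_edges_from(edges)

# Degree centrality: terms connected to many other terms score high; a term
# with high centrality in the "unheimlich" network is a candidate near-synonym.
centrality = nx.degree_centrality(G)
for term, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{term}: {c:.2f}")
```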
Pub Date: 2022-12-01 | DOI: 10.1525/mp.2022.40.2.135
Adéla Becková, V. Rudolfová, J. Horáček, T. Nekovářová
Interval timing plays an essential role in various types of behavior, including the perception and production of music. However, subjectively perceived intervals may substantially differ from their objective durations. One such phenomenon, the filled duration illusion (FDI), is well described in the literature; however, there are still many questions to address concerning the mechanisms behind this phenomenon. To further unravel the FDI, we asked 61 healthy adults to reproduce the duration of various acoustic stimuli (from 2 to 3 seconds). We used empty intervals (marked by two short tones) and filled intervals: a continuous tone or rhythmical tone sequences in legato or staccato. We demonstrated that the reproduction of empty intervals was shorter than the reproduction of all filled intervals, whereas the reproduction of rhythmic intervals was the longest. Therefore, we clearly demonstrated and distinguished both types of the FDI—the sustained sound illusion and the divided time illusion—and documented their test-retest stability in two subsequent measurements. Moreover, we confirmed the effect of tone pitch on the reproduction—higher pitch tones were judged as longer.
Title: Unraveling the Filled Duration Illusion and its Stability in Repeated Measurements (Music Perception)
Pub Date : 2022-12-01DOI: 10.1525/mp.2022.40.2.150
Geoffrey McDonald, Clemens Wöllner
While previous research has raised doubts about listeners’ ability to perceive large-scale musical form, we hypothesize that untrained listeners unfamiliar with a piece can indeed recognize structure when cognitive form judgments (coherence and predictability) are differentiated from enjoyment ratings (pleasantness, interest, and desire to hear again). In a between-groups experiment, listeners (n = 125) were randomly assigned to hear one of four versions of Bach’s Prelude in C minor from Book I of The Well-Tempered Clavier: 1) the original; 2) a mildly scrambled version in which two larger sections were switched; 3) a highly scrambled version; and 4) a randomized version. Significant differences were observed between versions in ratings of coherence and predictability, but not in ratings of pleasantness, interest, or desire to hear again. Individuals who had played the piece before could also explicitly identify the structural interventions. We assumed that relative incoherence would result in higher complexity and thus be reflected in longer retrospective duration estimates; however, estimates did not differ between stimuli. These results suggest that untrained listeners can evaluate global form, independently of their familiarity with a musical piece, while also suggesting that awareness of incoherence does not always correspond with decreased enjoyment.
{"title":"Appreciation of Form in Bach’s Well-Tempered Clavier","authors":"Geoffrey McDonald, Clemens Wöllner","doi":"10.1525/mp.2022.40.2.150","DOIUrl":"https://doi.org/10.1525/mp.2022.40.2.150","url":null,"abstract":"While previous research has raised doubts about the ability of listeners to perceive large-scale musical form, we hypothesize that untrained and unfamiliar listeners can, indeed, recognize structure when cognitive form judgments (coherence and predictability) are differentiated from enjoyment ratings (pleasantness, interest, and desire to hear again). In a between-groups experiment, listeners (n = 125) were randomly assigned to hear one of four versions of Bach’s Prelude in C minor from Book I of The Well-Tempered Clavier: 1) the original; 2) a mildly scrambled one in which two larger sections were switched; 3) a highly scrambled one; and 4) a randomized one. Significant differences were observed between versions in ratings of coherence and predictability, but not in ratings of pleasantness, interest, or desire to hear again. Individuals who had played the piece before could also explicitly identify structural intervention. It was assumed that relative incoherence would result in higher complexity and, thus, be reflected in longer retrospective duration estimates; however, estimates did not differ between stimuli. 
These results suggest that untrained listeners can evaluate global form, independently of their level of familiarity with a musical piece, while also suggesting that awareness of incoherence does not always correspond with decreased enjoyment.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44254254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}