Across-Channel Auditory Gap Detection
A. J. Weaver, Matthew Hoch, Lindsey Soles Quinn, J. Blumsack
Music Perception, 38(1), 66–77. Published 2020-09-09. DOI: 10.1525/mp.2020.38.1.66
In studies of perceptual and neural processing differences between musicians and nonmusicians, participants are typically dichotomized on the basis of personal report of musical experience. The present study relates self-reported musical experience and objectively measured musical aptitude to a skill that is important in music perception: temporal resolution (or acuity). The Advanced Measures of Music Audiation (AMMA) test was used to objectively assess participant musical aptitude, and adaptive psychophysical measurements were obtained to assess temporal resolution on two tasks: within-channel gap detection and across-channel gap detection. Results suggest that musical aptitude measured with the AMMA and self-reported musical experience (duration of music instruction) are both related to temporal resolution ability in musicians. The relationship of musical aptitude and/or duration of music training to temporal resolution is important to music educators advocating for the benefits of music programs, as well as in behavioral and neurophysiological research.
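The abstract does not spell out the adaptive procedure, so the following is an illustration only: a 2-down/1-up staircase is one common choice for estimating gap-detection thresholds. Here `present_trial` is a hypothetical stand-in for a single listening trial, and the step sizes and stopping rule are assumptions rather than the authors' settings.

```python
# Sketch: a 2-down/1-up adaptive staircase (Levitt, 1971) converging on the
# ~70.7%-correct gap threshold. present_trial(gap_ms) is a hypothetical
# stand-in for one listening trial, returning True if the gap was detected;
# the study's actual step sizes and stopping rule are not specified here.
def staircase_threshold(present_trial, start_gap=20.0, step=2.0,
                        n_reversals=8, max_trials=200):
    gap, streak, direction, reversals = start_gap, 0, 0, []
    for _ in range(max_trials):
        if len(reversals) >= n_reversals:
            break
        if present_trial(gap):
            streak += 1
            if streak == 2:                    # two correct in a row: harder
                streak = 0
                if direction == +1:            # run flipped from up to down
                    reversals.append(gap)
                direction = -1
                gap = max(gap - step, 0.5)
        else:                                  # any miss: easier
            streak = 0
            if direction == -1:                # run flipped from down to up
                reversals.append(gap)
            direction = +1
            gap += step
    # threshold estimate: mean gap at the last six reversal points
    return sum(reversals[-6:]) / 6 if len(reversals) >= 6 else None
```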
{"title":"Across-Channel Auditory Gap Detection","authors":"A. J. Weaver, Matthew Hoch, Lindsey Soles Quinn, J. Blumsack","doi":"10.1525/mp.2020.38.1.66","DOIUrl":"https://doi.org/10.1525/mp.2020.38.1.66","url":null,"abstract":"In studies of perceptual and neural processing differences between musicians and nonmusicians, participants are typically dichotomized on the basis of personal report of musical experience. The present study relates self-reported musical experience and objectively measured musical aptitude to a skill that is important in music perception: temporal resolution (or acuity). The Advanced Measures of Music Audiation (AMMA) test was used to objectively assess participant musical aptitude, and adaptive psychophysical measurements were obtained to assess temporal resolution on two tasks: within-channel gap detection and across-channel gap detection. Results suggest that musical aptitude measured with the AMMA and self-reporting of music experiences (duration of music instruction) are both related to temporal resolution ability in musicians. The relationship between musical aptitude and/or duration of music training is important to music educators advocating for the benefits of music programs as well as in behavioral and neurophysiological research.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"38 1","pages":"66-77"},"PeriodicalIF":2.3,"publicationDate":"2020-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1525/mp.2020.38.1.66","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49215768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Timing Is Everything…Or Is It? Effects of Instructed Timing Style, Reference, and Pattern on Drum Kit Sound in Groove-Based Performance
Guilherme Câmara, Kristian Nymoen, O. Lartillot, A. Danielsen
Music Perception, 38(1), 1–26. Published 2020-09-09. DOI: 10.1525/mp.2020.38.1.1
This study reports on an experiment that tested whether drummers systematically manipulated not only onset but also duration and/or intensity of strokes in order to achieve different timing styles. Twenty-two professional drummers performed two patterns (a simple "back-beat" and a complex variation) on a drum kit (hi-hat, snare, kick) in three different timing styles (laid-back, pushed, on-beat), in tandem with two timing references (metronome and instrumental backing track). As expected, onset location corresponded to the instructed timing styles for all instruments. The instrumental reference led to more pronounced timing profiles than the metronome (pushed strokes earlier, laid-back strokes later). Also, overall the metronome reference led to earlier mean onsets than the instrumental reference, possibly related to the "negative mean asynchrony" phenomenon. Regarding sound, results revealed systematic differences across participants in the duration (snare) and intensity (snare and hi-hat) of strokes played using the different timing styles. Pattern also had an impact: drummers generally played the rhythmically more complex pattern 2 louder than the simpler pattern 1 (snare and kick). Overall, our results lend further evidence to the hypothesis that both temporal and sound-related features contribute to the indication of the timing of a rhythmic event in groove-based performance.
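As a rough illustration of the onset measure involved (not the authors' actual analysis pipeline), a stroke's timing profile can be summarized as its signed asynchrony from the nearest reference beat; the function name and data layout below are assumptions.

```python
import numpy as np

# Sketch: signed onset asynchrony of strokes relative to reference beats.
# Onset and beat times in seconds; negative = early ("pushed"),
# positive = late ("laid-back"). The data layout is an assumption.
def mean_asynchrony_ms(onset_times, beat_times):
    onsets = np.asarray(onset_times)
    beats = np.asarray(beat_times)
    nearest = np.abs(onsets[:, None] - beats[None, :]).argmin(axis=1)
    return ((onsets - beats[nearest]) * 1000.0).mean()

print(mean_asynchrony_ms([0.985, 1.487, 1.986], [1.0, 1.5, 2.0]))  # ~ -14 ms
```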
{"title":"Timing Is Everything…Or Is It? Effects of Instructed Timing Style, Reference, and Pattern on Drum Kit Sound in Groove-Based Performance","authors":"Guilherme Câmara, Kristian Nymoen, O. Lartillot, A. Danielsen","doi":"10.1525/mp.2020.38.1.1","DOIUrl":"https://doi.org/10.1525/mp.2020.38.1.1","url":null,"abstract":"THIS STUDY REPORTS ON AN EXPERIMENT THAT tested whether drummers systematically manipulated not only onset but also duration and/or intensity of strokes in order to achieve different timing styles. Twenty-two professional drummers performed two patterns (a simple ‘‘back-beat’’ and a complex variation) on a drum kit (hi-hat, snare, kick) in three different timing styles (laid-back, pushed, on-beat), in tandem with two timing references (metronome and instrumental backing track). As expected, onset location corresponded to the instructed timing styles for all instruments. The instrumental reference led to more pronounced timing profiles than the metronome (pushed strokes earlier, laid-back strokes later). Also, overall the metronome reference led to earlier mean onsets than the instrumental reference, possibly related to the ‘‘negative mean asynchrony’’ phenomenon. Regarding sound, results revealed systematic differences across participants in the duration (snare) and intensity (snare and hi-hat) of strokes played using the different timing styles. Pattern also had an impact: drummers generally played the rhythmically more complex pattern 2 louder than the simpler pattern 1 (snare and kick). Overall, our results lend further evidence to the hypothesis that both temporal and sound-related features contribute to the indication of the timing of a rhythmic event in groove-based performance.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"38 1","pages":"1-26"},"PeriodicalIF":2.3,"publicationDate":"2020-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1525/mp.2020.38.1.1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41672433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experience of Groove Questionnaire
Olivier Senn, T. Bechtold, Dawn Rose, Guilherme Câmara, Nina Düvel, R. Jerjen, Lorenz Kilchenmann, Florian Hoesl, A. Baldassarre, Elena Alessandri
Music Perception. Published 2020-09-01. DOI: 10.1525/mp.2020.38.1.46
Music often triggers a pleasurable urge in listeners to move their bodies in response to the rhythm. In music psychology, this experience is commonly referred to as groove. This study presents the Experience of Groove Questionnaire, a newly developed self-report questionnaire that enables respondents to subjectively assess how strongly they feel an urge to move and how much pleasure they experience while listening to music. The questionnaire was developed in several stages: candidate items were generated on the basis of the groove literature, and their suitability was judged by fifteen groove and rhythm research experts. Two listening experiments were carried out to reduce the number of items, to validate the instrument, and to estimate its reliability. The final questionnaire consists of two scales with three items each that reliably measure respondents’ urge to move (Cronbach’s α = .92) and their experience of pleasure (α = .97) while listening to music. The two scales are highly correlated (r = .80), which indicates a strong association between motor and emotional responses to music. The scales of the Experience of Groove Questionnaire can be applied independently in groove research and in a variety of other research contexts in which listeners’ subjective experience of music-induced movement and enjoyment needs to be addressed: for example, the study of the interaction between music and motivation in sports, or research on therapeutic applications of music in people with neurological movement disorders.
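For readers unfamiliar with the reliability statistic quoted above: Cronbach's α for a k-item scale is k/(k−1) times (1 minus the sum of the item variances divided by the variance of the summed scale). A minimal sketch, with the respondents-by-items data layout assumed:

```python
import numpy as np

# Sketch: Cronbach's alpha; rows = respondents, columns = items (3 per scale).
# alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```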
{"title":"Experience of Groove Questionnaire","authors":"Olivier Senn, T. Bechtold, Dawn Rose, Guilherme Câmara, Nina Düvel, R. Jerjen, Lorenz Kilchenmann, Florian Hoesl, A. Baldassarre, Elena Alessandri","doi":"10.1525/mp.2020.38.1.46","DOIUrl":"https://doi.org/10.1525/mp.2020.38.1.46","url":null,"abstract":"Music often triggers a pleasurable urge in listeners to move their bodies in response to the rhythm. In music psychology, this experience is commonly referred to as groove. This study presents the Experience of Groove Questionnaire, a newly developed self-report questionnaire that enables respondents to subjectively assess how strongly they feel an urge to move and pleasure while listening to music. The development of the questionnaire was carried out in several stages: candidate questionnaire items were generated on the basis of the groove literature, and their suitability was judged by fifteen groove and rhythm research experts. Two listening experiments were carried out in order to reduce the number of items, to validate the instrument, and to estimate its reliability. The final questionnaire consists of two scales with three items each that reliably measure respondents’ urge to move (Cronbach’s α = .92) and their experience of pleasure (α = .97) while listening to music. The two scales are highly correlated (r = .80), which indicates a strong association between motor and emotional responses to music. The scales of the Experience of Groove Questionnaire can independently be applied in groove research and in a variety of other research contexts in which listeners’ subjective experience of music-induced movement and enjoyment need to be addressed: for example the study of the interaction between music and motivation in sports and research on therapeutic applications of music in people with neurological movement disorders.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1525/mp.2020.38.1.46","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45847261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classical Rondos and Sonatas as Stylistic Categories
Jonathan de Souza, Adam Roy, Andrew Goldman
Music Perception, 37(5), 373–391. Published 2020-06-10. DOI: 10.1525/mp.2020.37.5.373
Sonata and rondo movements are often defined in terms of large-scale form, yet in the classical era, rondos were also identified by their lively, cheerful character. We hypothesized that sonatas and rondos could be categorized based on stylistic features, and that rondos would involve more acoustic cues for happiness (e.g., higher average pitch height and higher average attack rate). In a corpus analysis, we examined paired movement openings from 180 instrumental works composed between 1770 and 1799. Rondos had significantly higher pitch height and attack rate, as predicted, and there were also significant differences related to dynamics, meter, and cadences. We then conducted an experiment involving participants with at least five years of formal music training or less than six months of formal music training. Participants listened to 120 fifteen-second audio clips taken from the beginnings of movements in our corpus. After a training phase, they attempted to categorize the excerpts in a two-alternative forced-choice (2AFC) task. D-prime scores were significantly above chance for both groups, and in post-experiment questionnaires, participants without music training reported that rondos sounded happier than sonatas. Overall, these results suggest that classical formal types have distinct stylistic and affective conventions.
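A brief note on the sensitivity measure: treating one category (say, rondo) as the signal, d′ = z(hit rate) − z(false-alarm rate), with rates clamped away from 0 and 1 so the z-scores stay finite. The counts in the sketch below are illustrative, not the study's data, and the clamping rule is one common convention rather than necessarily the authors' correction.

```python
from statistics import NormalDist

# Sketch: d-prime for a two-category task, treating "rondo" as the signal class.
def d_prime(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf

    def rate(successes, total):
        # clamp to [1/(2n), 1 - 1/(2n)] to avoid infinite z-scores at 0 or 1
        return min(max(successes / total, 1 / (2 * total)), 1 - 1 / (2 * total))

    hit_rate = rate(hits, hits + misses)
    fa_rate = rate(false_alarms, false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

print(d_prime(45, 15, 20, 40))  # ~1.1; 0 would be chance-level sensitivity
```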
{"title":"Classical Rondos and Sonatas as Stylistic Categories","authors":"Jonathan de Souza, Adam Roy, Andrew Goldman","doi":"10.1525/mp.2020.37.5.373","DOIUrl":"https://doi.org/10.1525/mp.2020.37.5.373","url":null,"abstract":"Sonata and rondo movements are often defined in terms of large-scale form, yet in the classical era, rondos were also identified according to their lively, cheerful character. We hypothesized that sonatas and rondos could be categorized based on stylistic features, and that rondos would involve more acoustic cues for happiness (e.g., higher average pitch height and higher average attack rate). In a corpus analysis, we examined paired movement openings from 180 instrumental works, composed between 1770 and 1799. Rondos had significantly higher pitch height and attack rate, as predicted, and there were also significant differences related to dynamics, meter, and cadences. We then conducted an experiment involving participants with at least 5 years of formal music training or less than 6 months of formal music training. Participants listened to 120 15-second audio clips, taken from the beginnings of movements in our corpus. After a training phase, they attempted to categorize the excerpts (2AFC task). D-prime scores were significantly higher than chance levels for both groups, and in post-experiment questionnaires, participants without music training reported that rondos sounded happier than sonatas. Overall, these results suggest that classical formal types have distinct stylistic and affective conventions.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"37 1","pages":"373-391"},"PeriodicalIF":2.3,"publicationDate":"2020-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1525/mp.2020.37.5.373","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45786864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing Methods for Analyzing Music-Evoked Autobiographical Memories
Amy M. Belfi, Elena Bai, Ava Stroud
Music Perception. Published 2020-06-01. DOI: 10.1525/mp.2020.37.5.392
The study of music-evoked autobiographical memories (MEAMs) has grown substantially in recent years. Prior work has used various methods to compare MEAMs to memories evoked by other cues (e.g., images, words). Here, we sought to identify which methods could distinguish between MEAMs and picture-evoked memories. Participants (N = 18) listened to popular music and viewed pictures of famous persons, and described any autobiographical memories evoked by the stimuli. Memories were scored using the Autobiographical Interview (AI; Levine, Svoboda, Hay, Winocur, & Moscovitch, 2002), Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015), and the Evaluative Lexicon (EL; Rocklage & Fazio, 2018). We trained three logistic regression models (one for each scoring method) to differentiate between memories evoked by music and by faces. Models trained on LIWC and AI data exhibited significantly above-chance accuracy when classifying whether a memory was evoked by a face or a song. The EL, which focuses on the affective nature of a text, failed to predict whether memories were evoked by music or faces. This demonstrates that different memory scoring techniques provide complementary information about cued autobiographical memories, and suggests that MEAMs differ from memories evoked by pictures in some aspects (e.g., perceptual and episodic content) but not others (e.g., emotional content).
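A hedged sketch of the classification step under stated assumptions: feature matrices produced by each scoring method feed a logistic regression whose cross-validated accuracy is compared against the 0.5 chance level. The array shapes and random placeholders below are illustrative only; they are not the authors' features or model settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Sketch: can text features predict the cue type (0 = face, 1 = song)?
# The shapes and random placeholders stand in for real LIWC/AI/EL features.
rng = np.random.default_rng(0)
X_liwc = rng.random((120, 20))      # 120 memory texts x 20 feature columns
y = rng.integers(0, 2, size=120)    # cue labels (placeholder)

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X_liwc, y, cv=5, scoring="accuracy")
print(scores.mean())  # with real features, compare against the 0.5 chance level
```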
{"title":"Comparing Methods for Analyzing Music-Evoked Autobiographical Memories","authors":"Amy M. Belfi, Elena Bai, Ava Stroud","doi":"10.1525/mp.2020.37.5.392","DOIUrl":"https://doi.org/10.1525/mp.2020.37.5.392","url":null,"abstract":"The study of music-evoked autobiographical memories (MEAMs) has grown substantially in recent years. Prior work has used various methods to compare MEAMs to memories evoked by other cues (e.g., images, words). Here, we sought to identify which methods could distinguish between MEAMs and picture-evoked memories. Participants (N = 18) listened to popular music and viewed pictures of famous persons, and described any autobiographical memories evoked by the stimuli. Memories were scored using the Autobiographical Interview (AI; Levine, Svoboda, Hay, Winocur, & Moscovitch, 2002), Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015), and Evaluative Lexicon (EL; Rocklage & Fazio, 2018). We trained three logistic regression models (one for each scoring method) to differentiate between memories evoked by music and faces. Models trained on LIWC and AI data exhibited significantly above chance accuracy when classifying whether a memory was evoked by a face or a song. The EL, which focuses on the affective nature of a text, failed to predict whether memories were evoked by music or faces. This demonstrates that various memory scoring techniques provide complementary information about cued autobiographical memories, and suggests that MEAMs differ from memories evoked by pictures in some aspects (e.g., perceptual and episodic content) but not others (e.g., emotional content).","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1525/mp.2020.37.5.392","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48721281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Music From Each Other: Synchronization, Turn-taking, or Imitation?
A. Schiavio, Jan Stupacher, R. Parncutt, R. Timmers
Music Perception. Published 2020-06-01. DOI: 10.1525/mp.2020.37.5.403
In an experimental study, we investigated how well novices can learn from each other in situations of technology-aided musical skill acquisition, comparing joint and solo learning, and learning through imitation, synchronization, and turn-taking. Fifty-four participants became familiar, either solo or in pairs, with three short melodies and then individually performed each from memory. Each melody was learned in a different way by following an instructional video: participants in the solo group were asked to 1) play in synchrony with the video, 2) take turns with the video, or 3) imitate the video. Participants in the duo group engaged in the same learning trials, but with a partner. Novices in both groups performed more accurately in pitch and time when learning through synchronization and turn-taking than through imitation. No differences were found between solo and joint learning. These results suggest that musical learning benefits from a shared, in-the-moment musical experience, where responsibilities and cognitive resources are distributed between biological (i.e., peers) and hybrid (i.e., participant(s) and computer) assemblies.
{"title":"Learning Music From Each Other: Synchronization, Turn-taking, or Imitation?","authors":"A. Schiavio, Jan Stupacher, R. Parncutt, R. Timmers","doi":"10.1525/mp.2020.37.5.403","DOIUrl":"https://doi.org/10.1525/mp.2020.37.5.403","url":null,"abstract":"In an experimental study, we investigated how well novices can learn from each other in situations of technology-aided musical skill acquisition, comparing joint and solo learning, and learning through imitation, synchronization, and turn-taking. Fifty-four participants became familiar, either solo or in pairs, with three short musical melodies and then individually performed each from memory. Each melody was learned in a different way: participants from the solo group were asked via an instructional video to: 1) play in synchrony with the video, 2) take turns with the video, or 3) imitate the video. Participants from the duo group engaged in the same learning trials, but with a partner. Novices in both groups performed more accurately in pitch and time when learning in synchrony and turn-taking than in imitation. No differences were found between solo and joint learning. These results suggest that musical learning benefits from a shared, in-the-moment, musical experience, where responsibilities and cognitive resources are distributed between biological (i.e., peers) and hybrid (i.e., participant(s) and computer) assemblies.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49359118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Selectivity of Musical Advantage
William Choi
Music Perception. Published 2020-06-01. DOI: 10.1525/mp.2020.37.5.423
The OPERA hypothesis theorizes how musical experience heightens perceptual acuity to lexical tones. One missing element in the hypothesis is whether the musical advantage is general to all lexical tones or specific to some. To extend the hypothesis, this study investigated whether English musicians consistently outperformed English nonmusicians in perceiving a variety of Cantonese tones. In an AXB discrimination task, the musicians exhibited superior discrimination performance over the nonmusicians only in the high-level, high-rising, and mid-level tone contexts. Similarly, in a Cantonese tone sequence recall task, the musicians significantly outperformed the nonmusicians only in the contour-tone context but not in the level-tone context. Collectively, the results reflect the selectivity of the musical advantage: musical experience benefits the perception of some but not all Cantonese tones, and elements of selectivity can be introduced to the OPERA hypothesis. Methodologically, the findings highlight the need to include a wide variety of lexical tone contrasts when studying music-to-language transfer.
{"title":"The Selectivity of Musical Advantage","authors":"William Choi","doi":"10.1525/mp.2020.37.5.423","DOIUrl":"https://doi.org/10.1525/mp.2020.37.5.423","url":null,"abstract":"The OPERA hypothesis theorizes how musical experience heightens perceptual acuity to lexical tones. One missing element in the hypothesis is whether musical advantage is general to all or specific to some lexical tones. To further extend the hypothesis, this study investigated whether English musicians consistently outperformed English nonmusicians in perceiving a variety of Cantonese tones. In an AXB discrimination task, the musicians exhibited superior discriminatory performance over the nonmusicians only in the high level, high rising, and mid-level tone contexts. Similarly, in a Cantonese tone sequence recall task, the musicians significantly outperformed the nonmusicians only in the contour tone context but not in the level tone context. Collectively, the results reflect the selectivity of musical advantage—musical experience is only advantageous to the perception of some but not all Cantonese tones, and elements of selectivity can be introduced to the OPERA hypothesis. Methodologically, the findings highlight the need to include a wide variety of lexical tone contrasts when studying music-to-language transfer.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46146110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Response to Invited Commentaries on The Territory Between Speech and Song
Fred Cummins
Music Perception, 37(4), 366–367. Published 2020-03-11. DOI: 10.1525/MP.2020.37.4.366
{"title":"Response to Invited Commentaries on The Territory Between Speech and Song","authors":"Fred Cummins","doi":"10.1525/MP.2020.37.4.366","DOIUrl":"https://doi.org/10.1525/MP.2020.37.4.366","url":null,"abstract":"","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"37 1","pages":"366-367"},"PeriodicalIF":2.3,"publicationDate":"2020-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45532413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Musicianship Enhances Perception But Not Feeling of Emotion From Others’ Social Interaction Through Speech Prosody
Eliot Farmer, Crescent Jicol, K. Petrini
Music Perception. Published 2020-03-11. DOI: 10.1525/MP.2020.37.4.323
Music expertise has been shown to enhance emotion recognition from speech prosody. Yet it is currently unclear whether music training enhances the recognition of emotions conveyed through other communicative modalities, such as vision, and whether it enhances the feeling of such emotions. Musicians and nonmusicians were presented with visual, auditory, and audiovisual clips consisting of the biological motion and speech prosody of two interacting agents. Participants judged as quickly as possible whether the expressed emotion was happiness or anger, and subsequently indicated whether they also felt the emotion they had perceived. Measures of accuracy and reaction time were collected from the emotion recognition judgements, and yes/no responses were collected as an indication of felt emotions. Musicians were more accurate than nonmusicians at recognizing emotion in the auditory-only condition, but not in the visual-only or audiovisual conditions. Although music training enhanced recognition of emotion through sound, it did not affect the felt emotion. These findings indicate that emotional processing in music and language may use overlapping but also divergent resources, or that some aspects of emotional processing are less responsive to music training than others. Hence, music training may be an effective rehabilitative device for interpreting others’ emotions through speech.
{"title":"Musicianship Enhances Perception But Not Feeling of Emotion From Others’ Social Interaction Through Speech Prosody","authors":"Eliot Farmer, Crescent Jicol, K. Petrini","doi":"10.1525/MP.2020.37.4.323","DOIUrl":"https://doi.org/10.1525/MP.2020.37.4.323","url":null,"abstract":"Music expertise has been shown to enhance emotion recognition from speech prosody. Yet, it is currently unclear whether music training enhances the recognition of emotions through other communicative modalities such as vision and whether it enhances the feeling of such emotions. Musicians and nonmusicians were presented with visual, auditory, and audiovisual clips consisting of the biological motion and speech prosody of two agents interacting. Participants judged as quickly as possible whether the expressed emotion was happiness or anger, and subsequently indicated whether they also felt the emotion they had perceived. Measures of accuracy and reaction time were collected from the emotion recognition judgements, while yes/no responses were collected as indication of felt emotions. Musicians were more accurate than nonmusicians at recognizing emotion in the auditory-only condition, but not in the visual-only or audiovisual conditions. Although music training enhanced recognition of emotion through sound, it did not affect the felt emotion. These findings indicate that emotional processing in music and language may use overlapping but also divergent resources, or that some aspects of emotional processing are less responsive to music training than others. Hence music training may be an effective rehabilitative device for interpreting others’ emotion through speech.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47084844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensorimotor Synchronisation With Higher Metrical Levels in Music Shortens Perceived Time
David Hammerschmidt, Clemens Wöllner
Music Perception, 37(4), 263–277. Published 2020-03-11. DOI: 10.1525/MP.2020.37.4.263
The aim of the present study was to investigate whether the perception of time is affected by actively attending to different metrical levels in musical rhythmic patterns. In an experiment with a repeated-measures design, musicians and non-musicians were presented with musical rhythmic patterns played at three different tempi. They synchronised with multiple metrical levels (half notes, quarter notes, eighth notes) of these patterns using a finger-tapping paradigm, and also listened without tapping. After each trial, stimulus duration was judged using a verbal estimation paradigm. Results show that the metrical level participants synchronised with influenced perceived time: actively attending to a higher metrical level (half notes, longer inter-tap intervals) led to the shortest time estimates, hence time was experienced as passing more quickly. Listening without tapping led to the longest time estimates. The faster the tempo of the patterns, the longer the time estimate. While there were no differences between musicians and non-musicians, participants who tapped more consistently and accurately (as analysed by circular statistics) estimated durations to be shorter. Thus, attending to different metrical levels in music, by deliberately directing attention and motor activity, affects time perception.
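The circular statistics mentioned can be illustrated as follows: each tap is mapped to a phase angle within the beat cycle; the length R of the mean resultant vector indexes consistency (1 = perfectly regular) and its angle indexes accuracy (0 = exactly on the beat). A minimal sketch with an assumed data layout, not the authors' analysis code:

```python
import numpy as np

# Sketch: circular consistency and accuracy of taps relative to a beat period.
# tap_times and period in seconds; the data layout is an assumption.
def circular_tap_stats(tap_times, period):
    phases = 2 * np.pi * (np.asarray(tap_times) % period) / period
    mean_vector = np.exp(1j * phases).mean()
    consistency = np.abs(mean_vector)   # R: 1 = perfectly regular tapping
    accuracy = np.angle(mean_vector)    # mean phase: 0 = exactly on the beat
    return consistency, accuracy

R, angle = circular_tap_stats([0.01, 0.51, 0.99, 1.52], period=0.5)
print(R, angle)  # R near 1, angle near 0: consistent and accurate tapping
```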
{"title":"Sensorimotor synchronisation with higher metrical levels in music shortens perceived time.","authors":"David Hammerschmidt, Clemens Wöllner","doi":"10.1525/MP.2020.37.4.263","DOIUrl":"https://doi.org/10.1525/MP.2020.37.4.263","url":null,"abstract":"The aim of the present study was to investigate if the perception of time is affected by actively attending to different metrical levels in musical rhythmic patterns. In an experiment with a repeated-measures design, musicians and non-musicians were presented with musical rhythmic patterns played at three different tempi. They synchronised with multiple metrical levels (half notes, quarter notes, eighth notes) of these patterns using a finger-tapping paradigm and listened without tapping. After each trial, stimulus duration was judged using a verbal estimation paradigm. Results show that the metrical level participants synchronised with influenced perceived time: actively attending to a higher metrical level (half notes, longer inter-tap intervals) led to the shortest time estimations, hence time was experienced as passing more quickly. Listening without tapping led to the longest time estimations. The faster the tempo of the patterns, the longer the time estimation. While there were no differences between musicians and non-musicians, those participants who tapped more consistently and accurately (as analysed by circular statistics) estimated durations to be shorter. Thus, attending to different metrical levels in music, by deliberately directing attention and motor activity, affects time perception.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"37 4 1","pages":"263-277"},"PeriodicalIF":2.3,"publicationDate":"2020-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46105163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}