In this study, we tested the hypothesis that reality-based, relative to fiction-based, inductions of sadness would lead individuals to exhibit more pronounced affect-congruency in music preference. To test this, we induced participants to experience feelings of sadness by having them view 1 of 2 film clips that vividly portrayed a profound personal loss. The thematic content of the film clips was held constant, and the intensity of the sad feelings that they elicited was equated. In the first group of participants (fiction-based induction condition), the induction was based on a fictional feature film, whereas in the second group (reality-based induction condition), it was based on a real-life documentary. Findings revealed that in contrast to participants in the fiction-based sadness condition, those in the reality-based sadness condition expressed a preference for listening to songs that were expressively sadder than those of participants in a neutral-affect control condition. Likewise, songs chosen by participants in the reality-based induction condition were also rated as expressing less happiness than those of participants in both the fiction-based induction and control conditions. The results help to elucidate the motivational dynamics underlying emotion regulation via selective exposure to music. Moreover, they suggest that the exclusive reliance on fiction-based sadness inductions in several recent lab-based experiments may have threatened their ecological validity, leading them to underestimate the extent to which “misery loves company” in real-world music choice.
DeMarco, T. C., & Friedman, R. (2018, December). Reality-Based Sadness Induction Fosters Affect-Congruency in Music Preference. Psychomusicology. https://doi.org/10.1037/pmu0000221
In this paper, I propose that embodied cognition in music has two distinct levels. The “surface” level relates to apparent corporeal articulation, such as the activated psychomotor program of a music performer, visible gestures in response to music, and rhythmic entrainment. The primary (though concealed) “deep” level of embodied cognition relates to the main coding aspects in music: the tonal relationships arranged in time. Music is made of combinations of a small number of basic melodic intervals that differ by their psychophysical characteristics, among which the level of tonal stability and the consonant–dissonant dichotomy are the most important for the formation of tonal expectations that guide music perception; tonal expectations are at the heart of melodic intentionality and musical motion. The tonal/temporal relationships encode musical content that dictates the motor behavior of music performers. The proposed two-level model of embodied cognition connects core musicology with the data from studies in music perception and cognition as well as studies in affective neuroscience and musicianship-related brain plasticity. The paper identifies the need for collaboration among various subdisciplines in musicology and cognitive sciences in order to further the development of the nascent field of embodied cognition in music. The presented discourse relies on research in the tonal music of the European tradition, and it does not address either aleatoric music or the musics of non-Western traditions. To make the proposed model of embodied cognition in music available for nonmusicians, the paper includes the basics of music theory.
Korsakova-Kreyn, M. (2018, October). Two-Level Model of Embodied Cognition in Music. Psychomusicology. https://doi.org/10.1037/pmu0000228
R. Matsunaga, Toshinori Yasuda, Michelle Johnson‐Motoyama, P. Hartono, K. Yokosawa, J. Abe
We investigated tonal perception of melodies from 2 cultures (Western and traditional Japanese) by 5 different cultural groups (44 Japanese, 25 Chinese, 16 Vietnamese, 18 Indonesians, and 25 U.S. citizens). Listeners rated the degree of “melodic completeness” of the final tone (a tonic vs. a nontonic) and “happiness–sadness” in the mode (major vs. minor, YOH vs. IN) of each melody. When Western melodies were presented, American and Japanese listeners responded similarly, in ways that reflected implicit tonal knowledge of Western music. By contrast, the responses of Chinese, Vietnamese, and Indonesian listeners differed from those of American and Japanese listeners. When traditional Japanese melodies were presented, Japanese listeners exhibited responses that reflected implicit tonal knowledge of traditional Japanese music. American listeners also showed responses that were like those of the Japanese; however, the pattern of responses differed between the 2 groups. In contrast, Chinese, Vietnamese, and Indonesian listeners exhibited responses that differed from those of the Japanese. These results show large differences between the Chinese/Vietnamese/Indonesian group and the American/Japanese group. Furthermore, the differences in responses to Western melodies between American and Japanese listeners were less pronounced than those among Chinese, Vietnamese, and Indonesian listeners. These findings imply that cultural differences in tonal perception are more diverse and distinctive than previously believed.
Matsunaga, R., Yasuda, T., Johnson‐Motoyama, M., Hartono, P., Yokosawa, K., & Abe, J. (2018, September). A Cross-Cultural Comparison of Tonality Perception in Japanese, Chinese, Vietnamese, Indonesian, and American Listeners. Psychomusicology. https://doi.org/10.1037/pmu0000219
Involuntary cognitions—thoughts that arise spontaneously without conscious effort—are an everyday phenomenon. These cognitions include future thoughts, autobiographical memories, and, perhaps most commonly, earworms. Earworms—the experience of having a song stuck in your head—provide a window into the mind. We used earworms of instrumental music to investigate whether the likelihood of music returning involuntarily depends on the music’s emotional valence. We generalize these findings to understand the possible similarities and differences between positive and negative involuntary cognitions. We also assessed whether the music’s familiarity influences the likelihood of it returning involuntarily. We exposed participants (n = 143) to positive or negative instrumental film music that was low versus high in familiarity, and measured subsequent frequency, duration, and characteristics of earworms inside and outside the lab. We effectively induced earworms; 94% of participants experienced earworms inside the lab and 62% over the subsequent 8 hr. All participants experienced a similar number of earworms, regardless of the music’s emotional valence, but these earworms differed in quality. Participants reported earworms for negative music as more distressing and subjectively less frequent than earworms for positive music. Contrary to existing earworm research, the music’s familiarity had no effect on the presence or qualitative experience of earworms. Our findings suggest that involuntary cognitions for positive and negative content are similar in their frequency, but distinctive in how they are experienced (e.g., distress ratings).
Moeck, E. K., Hyman, I., & Takarangi, M. K. T. (2018, July). Understanding the Overlap Between Positive and Negative Involuntary Cognitions Using Instrumental Earworms. Psychomusicology. https://doi.org/10.1037/pmu0000217
I. Burunat, E. Brattico, M. Hartmann, P. Vuust, Teppo Särkämö, P. Toiviainen
Cerebello-hippocampal interactions occur during accurate spatiotemporal prediction of movements. In the context of music listening, differences in cerebello-hippocampal functional connectivity may result from differences in predictive listening accuracy. Using functional MRI, we studied differences in this network between 18 musicians and 18 nonmusicians while they listened to music. Musicians possess a predictive listening advantage over nonmusicians, facilitated by strengthened coupling between produced and heard sounds through lifelong musical experience. Thus, we hypothesized that musicians would exhibit greater functional connectivity than nonmusicians as a marker of accurate online predictions during music listening. To this end, we estimated the functional connectivity between cerebellum and hippocampus as modulated by a perceptual measure of the predictability of the music. Results revealed increased predictability-driven functional connectivity in this network in musicians compared with nonmusicians, which was positively correlated with the length of musical training. Findings may be explained by musicians’ improved predictive listening accuracy. Our findings advance the understanding of cerebellar integrative function.
Burunat, I., Brattico, E., Hartmann, M., Vuust, P., Särkämö, T., & Toiviainen, P. (2018, July). Musical Training Predicts Cerebello-Hippocampal Coupling During Music Listening. Psychomusicology. https://doi.org/10.1037/pmu0000215
Y. Schotanus, Hendrik Vincent Koops, Judy Reed Edworthy
Differences in the popularity of individual psalms and melodies from the Genevan Psalter, both in the Netherlands and elsewhere, offer an interesting case study for investigating factors that might influence the popularity of a song. The Genevan psalms form a relatively small set of hymns (N = 150) that has long played an important role in Dutch cultural life, and it is clear that some psalms are more popular than others. Previous researchers have shown that contents and musical mode influence popularity. In this article, we present evidence that the interaction between melodic and poetic features also affects song popularity, presumably by affecting processing fluency. Pilot studies generated a set of preference rules, operationalized in two multinomial factors, repetition and balanced motion. These were tested in three subsequent studies through regression analyses on scales indicating the popularity of Genevan psalms or melodies in specific “arenas” (i.e., countries, denominations, and eras), using both separate regressions and regressions with full models including variables concerning contents, mode, and length. Both repetition and balanced motion turned out to be significant predictors in all regressions. Furthermore, the specific way many Dutch Protestants have sung the psalms through the ages plays a part in this interaction.
Schotanus, Y., Koops, H. V., & Edworthy, J. R. (2018, July). Interaction Between Musical and Poetic Form Affects Song Popularity: The Case of the Genevan Psalter. Psychomusicology. https://doi.org/10.1037/pmu0000216
Daly, H. R., & Hall, M. D. (2018, June). Not all musicians are created equal: Statistical concerns regarding the categorization of participants. Psychomusicology. https://doi.org/10.1037/PMU0000213
Nina König, Nadine Fischer, Maja Friedemann, T. Pfeiffer, A. Göritz, M. Schredl
A connection between music and dreams has been reported in many cultures. Although famous musicians have reported compositions inspired by dreams, few studies have investigated the occurrence of music dreams in the general population. In the present online study, 1,966 participants filled out an online questionnaire concerning their involvement in music in waking life and the occurrence of music in their dreams. The basic framework for the study was the continuity hypothesis of dreaming; that is, more musical activity during waking should be related to a higher frequency of music dreams. About 6% of all remembered dreams contained music, and the frequency was significantly higher when participants spent waking time on musical activities such as singing, playing an instrument, or listening actively to music—supporting the continuity hypothesis. In addition, music dreams were associated with more positive emotions. Future research should study the effects of waking music activity on music in dreams over a longer period of time (e.g., with dream diaries), as well as the dreams of professional musicians.
König, N., Fischer, N., Friedemann, M., Pfeiffer, T., Göritz, A., & Schredl, M. (2018, June). Music in Dreams and Music in Waking: An Online Study. Psychomusicology. https://doi.org/10.1037/pmu0000208
In this study, we reassessed the hypothesis that musical scales take on a sadder expressive character when they include one or more scale degrees that are lower in pitch than “normal.” Two conceptual replications of a previous study by Yim (2014; Huron, Yim, & Chordia, 2010) were conducted, incorporating modifications meant to bolster statistical power, enhance internal and external validity, and refine measurement of perceived emotional expression. In both experiments, participants were exposed to a set of melodies based on a single, highly unconventional scale, the Bohlen–Pierce Scale. In the high versus low exposure conditions, participants were exposed to melodies based on a Bohlen–Pierce Scale variant in which selected scale degrees had been raised versus lowered relative to a comparison scale. Following the exposure phase, all participants rated the perceived sadness/happiness of the exact same test melodies, in this case based on the “intermediate” comparison scale. Results confirmed that lowering selected degrees of an exposure scale causes melodies based on the comparison scale to be perceived as sadder/less happy (Experiment 1). However, altering these scale degrees did not independently affect perceptions of sadness/happiness after controlling for the average pitch height of the scale variants (Experiment 2). As such, the findings provide qualified support for the contention that “lower than normal” scales are perceived as expressively sadder.
Friedman, R. (2018, June). Reexploring the Effects of Relative Pitch Cues on Perceived Sadness in an Unconventionally Tuned Musical Scale. Psychomusicology. https://doi.org/10.1037/pmu0000212
This study explores outcomes related to musical learning in a child with complex special educational needs. CB is a boy who was 8 years old at the start of the study and was diagnosed with comorbid autism spectrum disorder, attention deficit hyperactivity disorder, sensory processing difficulties, dyslexia, and dyspraxia during the study. He was evaluated on a battery of developmental measures before and after 1 year of music learning. At pretesting, CB obtained a high musical aptitude score and an average IQ score. However, his scores on tests measuring motor abilities, executive function, and social-emotional skills were low. Posttesting revealed improvements in CB’s fluid intelligence and motor skills, and although teacher and parent reports suggested a decline in his social-emotional functioning, his musical progress was good. The results are discussed in the context of impairments in developmental disorders, the importance of flexible teaching approaches, and family support for music learning during childhood.
Rose, D., Jones Bartoli, A., & Heaton, P. (2018, June). Learning a Musical Instrument Can Benefit a Child With Special Educational Needs. Psychomusicology. https://doi.org/10.1037/pmu0000209