Pub Date: 2022-01-01. Epub Date: 2022-04-26. DOI: 10.1007/s10109-021-00366-2
Peter Congdon
The COVID-19 epidemic has raised major issues for modelling and forecasting outcomes such as cases, deaths, and hospitalisations. In particular, forecasting area-specific counts of infectious disease is difficult when counts are changing rapidly and there are infection hotspots, as in epidemic situations. Such forecasts are of central importance for prioritizing interventions or making severity designations for different areas. In this paper, we consider different specifications of autoregressive dependence in incidence counts, as these may considerably affect adaptivity in epidemic situations. In particular, we introduce parameters that allow temporal adaptivity in autoregressive dependence. A case study considers COVID-19 data for 144 English local authorities during the second wave of the UK epidemic in late 2020 and early 2021, which demonstrated geographical clustering in new cases linked to the then-emergent Alpha variant. The model allows for both spatial and temporal variation in autoregressive effects. We assess the sensitivity of short-term predictions and fit to specification (spatial vs. space-time autoregression, linear vs. log-linear, and form of spatial decay), and show improved one-step-ahead and in-sample prediction using space-time autoregression with temporal adaptivity.
{"title":"A spatio-temporal autoregressive model for monitoring and predicting COVID infection rates.","authors":"Peter Congdon","doi":"10.1007/s10109-021-00366-2","DOIUrl":"10.1007/s10109-021-00366-2","url":null,"abstract":"<p><p>The COVID-19 epidemic has raised major issues with regard to modelling and forecasting outcomes such as cases, deaths and hospitalisations. In particular, the forecasting of area-specific counts of infectious disease poses problems when counts are changing rapidly and there are infection hotspots, as in epidemic situations. Such forecasts are of central importance for prioritizing interventions or making severity designations for different areas. In this paper, we consider different specifications of autoregressive dependence in incidence counts as these may considerably impact on adaptivity in epidemic situations. In particular, we introduce parameters to allow temporal adaptivity in autoregressive dependence. A case study considers COVID-19 data for 144 English local authorities during the UK epidemic second wave in late 2020 and early 2021, which demonstrate geographical clustering in new cases-linked to the then emergent alpha variant. The model allows for both spatial and time variation in autoregressive effects. We assess sensitivity in short-term predictions and fit to specification (spatial vs space-time autoregression, linear vs log-linear, and form of space decay), and show improved one-step ahead and in-sample prediction using space-time autoregression including temporal adaptivity.</p>","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"24 1","pages":"583-610"},"PeriodicalIF":2.8,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9039004/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89525113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01. DOI: 10.1525/mp.2021.39.2.103
Laurène Léard-Schneider, Y. Lévêque
The present study aimed to examine the perception of music and prosody in patients who had undergone a severe traumatic brain injury (TBI). Our second objective was to describe the association between music and prosody impairments in individual clinical presentations. Thirty-six patients who were out of the acute phase underwent a set of music and prosody tests: two subtests of the Montreal Battery of Evaluation of Amusia, evaluating melody (scale) and rhythm perception respectively; two subtests of the Montreal Evaluation of Communication on prosody understanding in sentences; and two other tests evaluating prosody understanding in vowels. Forty-two percent of the patients were impaired on the melodic test, 51% on the rhythmic test, and 71% on at least one of the four prosody tests. The amusic patients performed significantly worse than non-amusic patients on the four prosody tests. This descriptive study shows for the first time the high prevalence of music deficits after severe TBI. It also suggests associations between prosody and music impairments, as well as between linguistic and emotional prosody impairments. The causes of these impairments remain to be explored.
{"title":"Perception of Music and Speech Prosody After Severe Traumatic Brain Injury","authors":"Laurène Léard-Schneider, Y. Lévêque","doi":"10.1525/mp.2021.39.2.103","DOIUrl":"https://doi.org/10.1525/mp.2021.39.2.103","url":null,"abstract":"The present study aimed to examine the perception of music and prosody in patients who had undergone a severe traumatic brain injury (TBI). Our second objective was to describe the association between music and prosody impairments in clinical individual presentations. Thirty-six patients who were out of the acute phase underwent a set of music and prosody tests: two subtests of the Montreal Battery for Evaluation of Amusia evaluating respectively melody (scale) and rhythm perception, two subtests of the Montreal Evaluation of Communication on prosody understanding in sentences, and two other tests evaluating prosody understanding in vowels. Forty-two percent of the patients were impaired in the melodic test, 51% were impaired in the rhythmic test, and 71% were impaired in at least one of the four prosody tests. The amusic patients performed significantly worse than non-amusics on the four prosody tests. This descriptive study shows for the first time the high prevalence of music deficits after severe TBI. It also suggests associations between prosody and music impairments, as well as between linguistic and emotional prosody impairments. Causes of these impairments remain to be explored.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48705932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01. DOI: 10.1525/mp.2021.39.2.118
Andrew Goldman, Peter M. C. Harrison, Tyreek Jackson, M. Pearce
Electroencephalographic responses to unexpected musical events allow researchers to test listeners’ internal models of syntax. One major challenge is dissociating cognitive syntactic violations—based on the abstract identity of a particular musical structure—from unexpected acoustic features. Despite careful controls in past studies, recent work by Bigand, Delbé, Poulin-Charronnat, Leman, and Tillmann (2014) has argued that ERP findings attributed to cognitive surprisal cannot be unequivocally separated from sensory surprisal. Here we report a novel EEG paradigm that uses three auditory short-term memory models and one cognitive model to predict surprisal as indexed by several ERP components (ERAN, N5, P600, and P3a), directly comparing sensory and cognitive contributions. Our paradigm parameterizes a large set of stimuli rather than using categorically “high” and “low” surprisal conditions, addressing issues with past work in which participants may learn where to expect violations and may be biased by local context. The cognitive model (Harrison & Pearce, 2018) predicted higher P3a amplitudes, as did Leman’s (2000) model, indicating both sensory and cognitive contributions to expectation violation. However, no model predicted ERAN, N5, or P600 amplitudes, raising questions about whether traditional interpretations of these ERP components generalize to broader collections of stimuli or rather are limited to less naturalistic stimuli.
{"title":"Reassessing Syntax-Related ERP Components Using Popular Music Chord Sequences","authors":"Andrew Goldman, Peter M. C. Harrison, Tyreek Jackson, M. Pearce","doi":"10.1525/mp.2021.39.2.118","DOIUrl":"https://doi.org/10.1525/mp.2021.39.2.118","url":null,"abstract":"Electroencephalographic responses to unexpected musical events allow researchers to test listeners’ internal models of syntax. One major challenge is dissociating cognitive syntactic violations—based on the abstract identity of a particular musical structure—from unexpected acoustic features. Despite careful controls in past studies, recent work by Bigand, Delbe, Poulin-Carronnat, Leman, and Tillmann (2014) has argued that ERP findings attributed to cognitive surprisal cannot be unequivocally separated from sensory surprisal. Here we report a novel EEG paradigm that uses three auditory short-term memory models and one cognitive model to predict surprisal as indexed by several ERP components (ERAN, N5, P600, and P3a), directly comparing sensory and cognitive contributions. Our paradigm parameterizes a large set of stimuli rather than using categorically “high” and “low” surprisal conditions, addressing issues with past work in which participants may learn where to expect violations and may be biased by local context. The cognitive model (Harrison & Pearce, 2018) predicted higher P3a amplitudes, as did Leman’s (2000) model, indicating both sensory and cognitive contributions to expectation violation. However, no model predicted ERAN, N5, or P600 amplitudes, raising questions about whether traditional interpretations of these ERP components generalize to broader collections of stimuli or rather are limited to less naturalistic stimuli.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44181181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01. DOI: 10.1525/mp.2021.39.2.202
R. Dean, D. Bulger, A. Milne
The production of rhythms with non-isochronous beats has so far been studied for relatively few rhythms. We therefore assess reproduction of most well-formed looped rhythms comprising K = 2–11 cues (a uniform piano tone, indicating where participants should tap) and N = 3–13 isochronous pulses (a uniform cymbal). Each rhythm had two different cue inter-onset intervals. We expected that many of the rhythms would be difficult to tap, because of ambiguous non-isochronous beats and syncopations, and that complexity and asymmetry would predict performance. 111 participants tapped 91 rhythms, each heard over 129 pulses, starting as soon as they could. Whereas tap-cue concordance in prior studies was generally well above 90%, here only 52.2% of cues received a temporally congruent tap, and only 63% of taps coincided with a cue. Mean tap asynchrony was only −2 ms (whereas for non-musicians this value is usually c. −50 ms). Performances improved as rhythms progressed and were repeated, but precision varied substantially between participants and rhythms. Performances were autoregressive, and mixed-effects cross-sectional time-series analyses retaining the integrity of all the individual time series revealed that performance worsened as the complexity features K, N, and cue inter-onset interval entropy increased. Performance also worsened with increasing R, the long:short (L:s) cue-interval ratio of each rhythm (indexing both complexity and asymmetry). Rhythm evenness and balance, and whether N was divisible by 2 or 3, were not useful predictors. Tap velocities positively predicted cue fulfilment. Our data indicate that studying a greater diversity of rhythms can broaden our understanding of rhythm cognition.
{"title":"On the Roles of Complexity and Symmetry in Cued Tapping of Well-formed Complex Rhythms","authors":"R. Dean, D. Bulger, A. Milne","doi":"10.1525/mp.2021.39.2.202","DOIUrl":"https://doi.org/10.1525/mp.2021.39.2.202","url":null,"abstract":"Production of relatively few rhythms with non-isochronous beats has been studied. So we assess reproduction of most well-formed looped rhythms comprising K=2-11 cues (a uniform piano tone, indicating where participants should tap) and N=3-13 isochronous pulses (a uniform cymbal). Each rhythm had two different cue interonset intervals. We expected that many of the rhythms would be difficult to tap, because of ambiguous non-isochronous beats and syncopations, and that complexity and asymmetry would predict performance. 111 participants tapped 91 rhythms each heard over 129 pulses, starting as soon as they could. Whereas tap-cue concordance in prior studies was generally >> 90%, here only 52.2% of cues received a temporally congruent tap, and only 63% of taps coincided with a cue. Only −2 ms mean tap asynchrony was observed (whereas for non-musicians this value is usually c. −50 ms). Performances improved as rhythms progressed and were repeated, but precision varied substantially between participants and rhythms. Performances were autoregressive and mixed effects cross-sectional time series analyses retaining the integrity of all the individual time series revealed that performance worsened as complexity features K, N, and cue inter-onset interval entropy increased. Performance worsened with increasing R, the Long: short (L: s) cue interval ratio of each rhythm (indexing both complexity and asymmetry). Rhythm evenness and balance, and whether N was divisible by 2 or 3, were not useful predictors. Tap velocities positively predicted cue fulfilment. Our data indicate that study of a greater diversity of rhythms can broaden our impression of rhythm cognition.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48348547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01. DOI: 10.1525/mp.2021.39.2.160
H. H. Mangelsdorf, Jason D. Listman, Anabel Maler
This study investigated how signed performances express musical meaning and emotions. Deaf, Hard-of-Hearing (HoH), and hearing participants watched eight translated signed songs and eight signed lyrics with no influence of music. The participants rated these videos on several emotional and movement dimensions. Even though the videos did not have audible sounds, hearing participants perceived the signed songs as more musical than the signed lyrics. Deaf/HoH participants perceived both types of videos as equally musical, suggesting a different conception of what it means for movement to be musical. We also found that participants’ ratings of spatial height, vertical direction, size, tempo, and fluency related to the performer’s intended emotion and participants’ ratings of valence/arousal. For Deaf/HoH participants, accuracy at identifying emotional intentions was predicted by focusing more on facial expressions than arm movements. Together, these findings add to our understanding of how audience members attend to and derive meaning from different characteristics of movement in performative contexts.
{"title":"Perception of Musicality and Emotion in Signed Songs","authors":"H. H. Mangelsdorf, Jason D. Listman, Anabel Maler","doi":"10.1525/mp.2021.39.2.160","DOIUrl":"https://doi.org/10.1525/mp.2021.39.2.160","url":null,"abstract":"This study investigated how signed performances express musical meaning and emotions. Deaf, Hard-of-Hearing (HoH), and hearing participants watched eight translated signed songs and eight signed lyrics with no influence of music. The participants rated these videos on several emotional and movement dimensions. Even though the videos did not have audible sounds, hearing participants perceived the signed songs as more musical than the signed lyrics. Deaf/HoH participants perceived both types of videos as equally musical, suggesting a different conception of what it means for movement to be musical. We also found that participants’ ratings of spatial height, vertical direction, size, tempo, and fluency related to the performer’s intended emotion and participants’ ratings of valence/arousal. For Deaf/HoH participants, accuracy at identifying emotional intentions was predicted by focusing more on facial expressions than arm movements. Together, these findings add to our understanding of how audience members attend to and derive meaning from different characteristics of movement in performative contexts.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43148455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01. DOI: 10.1525/mp.2021.39.2.145
Laure-Hélène Canette, P. Lalitte, B. Tillmann, E. Bigand
Conceptual priming studies have shown that listening to musical primes triggers semantic activation. The present study used a free semantic evocation task to further investigate 1) how rhythmic vs. textural structures affect the number of words evoked after a musical sequence, and 2) whether both features also affect the content of the semantic activation. Rhythmic sequences were composed of various percussion sounds with a strong underlying beat and metrical structure. Textural sound sequences consisted of blended timbres and sound sources evolving over time without an identifiable pulse. Participants were asked to verbalize the concepts evoked by the musical sequences. We measured the number of words and lemmas produced after listening to musical sequences in each condition, and we analyzed whether specific concepts were associated with each sequence type. Results showed that more words and lemmas were produced for textural sound sequences than for rhythmic sequences, and that some concepts were specifically associated with each musical condition. Our findings suggest that listening to musical excerpts emphasizing different features influences semantic activation in different ways and to different extents, possibly via cognitive mechanisms triggered by the acoustic characteristics of the excerpts as well as by the perceived emotions.
{"title":"Influence of Regular Rhythmic Versus Textural Sound Sequences on Semantic and Conceptual Processing","authors":"Laure-Hélène Canette, P. Lalitte, B. Tillmann, E. Bigand","doi":"10.1525/mp.2021.39.2.145","DOIUrl":"https://doi.org/10.1525/mp.2021.39.2.145","url":null,"abstract":"Conceptual priming studies have shown that listening to musical primes triggers semantic activation. The present study further investigated with a free semantic evocation task, 1) how rhythmic vs. textural structures affect the amount of words evoked after a musical sequence, and 2) whether both features also affect the content of the semantic activation. Rhythmic sequences were composed of various percussion sounds with a strong underlying beat and metrical structure. Textural sound sequences consisted of blended timbres and sound sources evolving over time without identifiable pulse. Participants were asked to verbalize the concepts evoked by the musical sequences. We measured the number of words and lemmas produced after having listened to musical sequences of each condition, and we analyzed whether specific concepts were associated with each sequence type. Results showed that more words and lemmas were produced for textural sound sequences than for rhythmic sequences and that some concepts were specifically associated with each musical condition. Our findings suggest that listening to musical excerpts emphasizing different features influences semantic activation in different ways and extent. This might possibly be instantiated via cognitive mechanisms triggered by the acoustic characteristics of the excerpts as well as the perceived emotions.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44160311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-01. DOI: 10.1525/mp.2021.39.2.181
Emily Carlson, I. Cross
Although the fields of music psychology and music therapy share many common interests, research collaboration between the two fields is still somewhat rare. Previous work has identified disciplinary identities and attitudes towards those in other disciplines as challenges to effective interdisciplinary research. The current study explores such attitudes in music therapy and music psychology. A sample of 123 music therapists and music psychologists answered an online survey regarding their attitudes towards potential interdisciplinary work between the two fields. Analysis of the results suggested that participants’ judgements of the attitudes of members of the other discipline were not always accurate. Music therapists indicated a high degree of interest in interdisciplinary research, although in free-text answers both music psychologists and music therapists frequently characterized music therapists as uninterested in science. Music therapists saw significantly greater relevance of music psychology to their own work than music psychologists saw of music therapy to theirs. Participants’ attitudes were modestly related to their reported personality traits and held values. Overall, results indicated interest in, and positive expectations of, interdisciplinary work in both groups, and these attitudes should be explored in future research.
{"title":"Reopening the Conversation Between Music Psychology and Music Therapy","authors":"Emily Carlson, I. Cross","doi":"10.1525/mp.2021.39.2.181","DOIUrl":"https://doi.org/10.1525/mp.2021.39.2.181","url":null,"abstract":"Although the fields of music psychology and music therapy share many common interests, research collaboration between the two fields is still somewhat rare. Previous work has identified that disciplinary identities and attitudes towards those in other disciplines are challenges to effective interdisciplinary research. The current study explores such attitudes in music therapy and music psychology. A sample of 123 music therapists and music psychologists answered an online survey regarding their attitudes towards potential interdisciplinary work between the two fields. Analysis of results suggested that participants’ judgements of the attitudes of members of the other discipline were not always accurate. Music therapists indicated a high degree of interest in interdisciplinary research, although in free text answers, both music psychologists and music therapists frequently characterized music therapists as disinterested in science. Music therapists reported seeing significantly greater relevance of music psychology to their own work than did music psychologists of music therapists. Participants’ attitudes were modestly related to their reported personality traits and held values. Results overall indicated interest in, and positive expectations of, interdisciplinary attitudes in both groups, and should be explored in future research.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47330640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-01. DOI: 10.1525/mp.2021.39.1.63
J. Stoessel, K. Spreadborough, Inés Antón-Méndez
Historical listening has long been a topic of interest for musicologists. Yet little attention has been given to the systematic study of historical listening practices before the common practice era (c. 1700–present). In the first study of its kind, this research compared a model of medieval perceptions of “sweetness,” based on the writings of medieval music theorists, with modern-day listeners’ aesthetic responses. Responses were collected through two experiments. In an implicit associations experiment, participants were primed with a more or less consonant musical excerpt, then presented with a sweet or bitter target word, or a non-word, on which to make lexical decisions. In the explicit associations experiment, participants were asked to rate, on a three-point Likert scale, the perceived sweetness of short musical excerpts that varied in consonance and sound quality (male, female, organ). The results from these experiments were compared to predictions from the medieval perception model to investigate whether early and modern listeners have similar aesthetic responses. Results from the implicit association test were not consistent with the predictions of the model; however, results from the explicit associations experiment were. These findings indicate that the metaphor of sweetness may be useful for comparing the aesthetic responses of medieval and modern listeners.
Title: "The Metaphor of Sweetness in Medieval and Modern Music Listening"
Pub Date: 2021-09-01. DOI: 10.1525/mp.2021.39.1.1
Zachary Wallmark, L. Nghiem, L. Marks
Musical timbre is often described using terms from non-auditory senses, mainly vision and touch; but it is not clear whether crossmodality in timbre semantics reflects multisensory processing or simply linguistic convention. If multisensory processing is involved in timbre perception, the mechanism governing the interaction remains unknown. To investigate whether timbres commonly perceived as “bright” or “dark” facilitate or interfere with visual perception of brightness and darkness, we designed two speeded classification experiments. Participants were presented with consecutive images of slightly varying (or the same) brightness along with task-irrelevant auditory primes (“bright” or “dark” tones) and asked to quickly identify whether the second image was brighter or darker than the first. Incongruent prime-stimulus combinations produced significantly more response errors than congruent combinations, but choice reaction time was unaffected. Furthermore, responses in a deceptive identical-image condition indicated a subtle, semantically congruent response bias. Additionally, in Experiment 2 (which also incorporated a spatial texture task), measures of reaction time (RT) and accuracy were used to construct speed-accuracy tradeoff functions (SATFs) in order to critically compare two hypothesized mechanisms for timbre-based crossmodal interactions: sensory response change vs. shift in response criterion. Results of the SATF analysis are largely consistent with the response-criterion hypothesis, although without conclusively ruling out sensory change.
Title: "Does Timbre Modulate Visual Perception? Exploring Crossmodal Interactions"
Pub Date: 2021-09-01. DOI: 10.1525/mp.2021.39.1.41
E. Schwitzgebel, C. White
This study tests the respective roles of pitch-class content and bass patterns in harmonic expectation using a mix of behavioral and computational experiments. In our first two experiments, participants heard a paradigmatic chord progression derived from music theory textbooks and were asked to rate how well different target endings completed that progression. The candidate endings included the progression’s paradigmatic target; different inversions of that chord (i.e., with different members of the harmony in the lowest voice); and a “mismatched” target, a triad that shared its lowest pitch with the paradigmatic ending but altered the other pitch-class content. Participants generally rated the paradigmatic target most highly, followed by the other inversions of that chord, with the lowest ratings generally elicited by the mismatched target. This suggests that listeners’ harmonic expectations are sensitive to both bass patterns and pitch-class content. However, these results did not hold in all cases. A final computational experiment was run to determine whether variations in behavioral responses could be explained by corpus statistics. To this end, n-gram chord-transition models and frequency measurements were compiled for each progression. Our findings suggest that listeners rate highly, and have stronger expectations about, chord progressions that occur frequently and behave consistently within tonal corpora.
Title: "Effects of Chord Inversion and Bass Patterns on Harmonic Expectancy in Musicians"