Temporal Perception and Attention in Trained Musicians
Pub Date: 2021-02-01 | DOI: 10.1525/MP.2021.38.3.293 | Music Perception, 38(3), 293–312
J. Vibell, Ahnate Lim, S. Sinnett
Considerable evidence converges on the plasticity of attention and the possibility that it can be modulated through regular training. Music training, for instance, has been correlated with modulations of early perceptual and attentional processes. However, the extent to which music training can modulate mechanisms involved in processing information (i.e., perception and attention) remains largely unknown, particularly between sensory modalities. If training in one sensory modality can lead to concomitant enhancements in different sensory modalities, then this could be taken as evidence of a supramodal attentional system. Additionally, if trained musicians exhibit improved perceptual skills outside of the domain of music, this could be taken as evidence for the notion of far-transfer, where training in one domain can lead to improvements in another. To investigate this, we used tasks designed to measure simultaneity perception and temporal acuity, and examined how these are influenced by music training in auditory, visual, and audio-visual conditions. Trained musicians showed significant enhancements for simultaneity perception in the visual modality, as well as generally improved temporal acuity, although not in all conditions. Visual cues directing attention influenced musicians' simultaneity perception in visual discrimination and their temporal accuracy in auditory discrimination, suggesting that musicians have selective enhancements in temporal discrimination, arguably due to greater attentional efficiency compared to nonmusicians. Implications for theory and future training studies are discussed.
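Simultaneity perception in such paradigms is typically quantified by fitting a response curve to the proportion of "simultaneous" judgments across stimulus-onset asynchronies (SOAs). The sketch below illustrates that generic approach, not the authors' analysis; the Gaussian model, data values, and parameter names are illustrative assumptions.

```python
# Hypothetical sketch: estimating simultaneity-perception parameters from a
# simultaneity-judgment (SJ) task. Illustrative data, not the study's results.
import numpy as np
from scipy.optimize import curve_fit

def sj_gaussian(soa, pss, sigma, amplitude):
    """Proportion of 'simultaneous' responses as a Gaussian over SOA (ms).

    pss       -- point of subjective simultaneity (peak location, ms)
    sigma     -- width of the simultaneity window; smaller = finer temporal acuity
    amplitude -- peak response proportion (lapses keep it below 1.0)
    """
    return amplitude * np.exp(-0.5 * ((soa - pss) / sigma) ** 2)

# Illustrative data: SOAs (audio-leading negative, ms) and the proportion of
# trials judged "simultaneous" at each SOA.
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_simultaneous = np.array([0.05, 0.15, 0.55, 0.80, 0.90, 0.85, 0.60, 0.20, 0.05])

params, _ = curve_fit(sj_gaussian, soas, p_simultaneous, p0=[0.0, 100.0, 0.9])
pss, sigma, amplitude = params
print(f"PSS = {pss:.1f} ms, window sigma = {sigma:.1f} ms")
```

The fitted width parameter then serves as one proxy for temporal acuity: a narrower window indicates finer temporal discrimination.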
{"title":"Temporal Perception and Attention in Trained Musicians","authors":"J. Vibell, Ahnate Lim, S. Sinnett","doi":"10.1525/MP.2021.38.3.293","DOIUrl":"https://doi.org/10.1525/MP.2021.38.3.293","url":null,"abstract":"Considerable evidence converges on the plasticity of attention and the possibility that it can be modulated through regular training. Music training, for instance, has been correlated with modulations of early perceptual and attentional processes. However, the extent to which music training can modulate mechanisms involved in processing information (i.e., perception and attention) is still widely unknown, particularly between sensory modalities. If training in one sensory modality can lead to concomitant enhancements in different sensory modalities, then this could be taken as evidence of a supramodal attentional system. Additionally, if trained musicians exhibit improved perceptual skills outside of the domain of music, this could be taken as evidence for the notion of far-transfer, where training in one domain can lead to improvements in another. To investigate this further, we evaluated the effects of music training using tasks designed to measure simultaneity perception and temporal acuity, and how these are influenced by music training in auditory, visual, and audio-visual conditions. Trained musicians showed significant enhancements for simultaneity perception in the visual modality, as well as generally improved temporal acuity, although not in all conditions. Visual cues directing attention influenced simultaneity perception for musicians for visual discrimination and temporal accuracy in auditory discrimination, suggesting that musicians have selective enhancements in temporal discrimination, arguably due to increased attentional efficiency when compared to nonmusicians. Implications for theory and future training studies are discussed.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"38 1","pages":"293-312"},"PeriodicalIF":2.3,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47261949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Timbre Vibrato Perception and Description
Pub Date: 2021-02-01 | DOI: 10.1525/MP.2021.38.3.282 | Music Perception, 38(3), 282–292
A. Almeida, Emery Schubert, J. Wolfe
In music, vibrato consists of cyclic variations in pitch, loudness, or spectral envelope (hereafter, “timbre vibrato”—TV) or combinations of these. Here, stimuli with TV were compared with those having loudness vibrato (LV). In Experiment 1, participants chose from tones with different vibrato depths to match a reference vibrato tone. When matching to tones with the same vibrato type, 70% of the variance was explained by linear matching of depth. Less variance (40%) was explained when matching dissimilar vibrato types. Fluctuations in loudness were perceived as approximately the same depth as fluctuations in spectral envelope (i.e., about 1.3 times deeper than fluctuations in spectral centroid). In Experiment 2, participants matched a reference with test stimuli of varying depths and types. When the depths of the test and reference tones were similar, the same type was usually selected across the range of vibrato depths. For very disparate depths, matches were made by type only about 50% of the time. The study revealed good, fairly linear sensitivity to vibrato depth regardless of vibrato type, but also some poorly understood relationships between the physical signal and the perception of TV, suggesting that more research on TV perception is needed.
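The spectral centroid referred to above is a standard summary of the spectral envelope: the amplitude-weighted mean frequency of the spectrum. As a rough illustration of how TV can be made visible in a signal, the sketch below synthesizes a tone whose harmonic balance oscillates while pitch and level stay fixed, then tracks the frame-wise centroid; all synthesis parameters are assumptions, not the study's stimuli.

```python
# Hypothetical sketch: tracking the frame-wise spectral centroid of a tone whose
# spectral envelope oscillates at ~5 Hz (a timbre vibrato with steady pitch).
import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
tilt = 0.5 + 0.4 * np.sin(2 * np.pi * 5 * t)     # 5 Hz envelope modulation
# Three harmonics of 220 Hz whose relative strength follows `tilt`.
signal = sum(tilt ** (k - 1) * np.sin(2 * np.pi * 220 * k * t) for k in range(1, 4))

frame, hop = 2048, 512
freqs = np.fft.rfftfreq(frame, 1 / fs)
centroids = []
for start in range(0, len(signal) - frame, hop):
    mag = np.abs(np.fft.rfft(signal[start:start + frame] * np.hanning(frame)))
    centroids.append((freqs * mag).sum() / mag.sum())   # spectral centroid (Hz)

centroids = np.asarray(centroids)
depth = (centroids.max() - centroids.min()) / centroids.mean()
print(f"centroid fluctuation depth = {depth:.3f}")
```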
{"title":"Timbre Vibrato Perception and Description","authors":"A. Almeida, Emery Schubert, J. Wolfe","doi":"10.1525/MP.2021.38.3.282","DOIUrl":"https://doi.org/10.1525/MP.2021.38.3.282","url":null,"abstract":"In music, vibrato consists of cyclic variations in pitch, loudness, or spectral envelope (hereafter, “timbre vibrato”—TV) or combinations of these. Here, stimuli with TV were compared with those having loudness vibrato (LV). In Experiment 1, participants chose from tones with different vibrato depth to match a reference vibrato tone. When matching to tones with the same vibrato type, 70% of the variance was explained by linear matching of depth. Less variance (40%) was explained when matching dissimilar vibrato types. Fluctuations in loudness were perceived as approximately the same depth as fluctuations in spectral envelope (i.e., about 1.3 times deeper than fluctuations in spectral centroid). In Experiment 2, participants matched a reference with test stimuli of varying depths and types. When the depths of the test and reference tones were similar, the same type was usually selected, over the range of vibrato depths. For very disparate depths, matches were made by type only about 50% of the time. The study revealed good, fairly linear sensitivity to vibrato depth regardless of vibrato type, but also some poorly understood findings between physical signal and perception of TV, suggesting that more research is needed in TV perception.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"38 1","pages":"282-292"},"PeriodicalIF":2.3,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48928925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embracing Anti-Racist Practices in the Music Perception and Cognition Community
Pub Date: 2020-11-25 | DOI: 10.1525/mp.2020.38.2.103 | Music Perception, 38(2), 103–105
D. Baker, Amy M. Belfi, Sarah C. Creel, Jessica A. Grahn, E. Hannon, P. Loui, E. Margulis, Adena Schachner, Michael Schutz, D. Shanahan, Dominique Vuvan
{"title":"Embracing Anti-Racist Practices in the Music Perception and Cognition Community","authors":"D. Baker, Amy M. Belfi, Sarah C. Creel, Jessica A. Grahn, E. Hannon, P. Loui, E. Margulis, Adena Schachner, Michael Schutz, D. Shanahan, Dominique Vuvan","doi":"10.1525/mp.2020.38.2.103","DOIUrl":"https://doi.org/10.1525/mp.2020.38.2.103","url":null,"abstract":"","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"38 1","pages":"103-105"},"PeriodicalIF":2.3,"publicationDate":"2020-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44652764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metrical Restoration From Local and Global Melodic Cues
Pub Date: 2020-11-25 | DOI: 10.1525/mp.2020.38.2.106 | Music Perception, 38(2)
Sarah C. Creel
What factors influence listeners’ perception of meter in a musical piece or a musical style? Many cues are available in the musical “surface,” i.e., the pattern of sounds physically present during listening. Models of meter processing focus on the musical surface. However, percepts of meter and other musical features may also be shaped by reactivation of previously heard music, consistent with exemplar accounts of memory. The current study explores a phenomenon that is here termed metrical restoration: listeners who hear melodies with ambiguous meters report meter preferences that match previous listening experiences in the lab, suggesting reactivation of those experiences. Previous studies suggested that timbre and brief rhythmic patterns may influence metrical restoration. However, variations in the magnitude of effects in different experiments suggest that other factors are at work. Experiments reported here explore variation in metrical restoration as a function of three factors: melodic diversity in timbre and tempo; associations of rhythmic patterns with particular melodies and meters; and associations of meter with overall melodic form. Rhythmic patterns and overall melodic form, but not timbre, had strong influences. Results are discussed with respect to style-specific or culture-specific musical processing, and everyday listening experiences. Implications for models of musical memory are also addressed.
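As a concrete reading of the exemplar account invoked above, meter preference for an ambiguous melody can be modeled as a similarity-weighted vote over stored listening episodes. The sketch below is a hypothetical illustration, not the author's model; the cue coding, Nosofsky-style exponential similarity function, and all numbers are assumptions.

```python
# Hypothetical exemplar-model sketch: a metrically ambiguous probe melody
# reactivates stored episodes, and predicted meter preference is a
# similarity-weighted vote over those exemplars.
import numpy as np

# Each stored exemplar: cue vector (rhythm pattern, melodic form, timbre)
# plus the meter (2 = duple, 3 = triple) it was heard in.
exemplars = np.array([
    [1.0, 0.8, 0.2],   # heard in a duple context
    [0.9, 0.7, 0.9],   # heard in a duple context
    [0.2, 0.1, 0.3],   # heard in a triple context
    [0.1, 0.2, 0.8],   # heard in a triple context
])
meters = np.array([2, 2, 3, 3])

def meter_preference(probe, exemplars, meters, sensitivity=4.0):
    """P(duple) as a similarity-weighted vote (exponential similarity)."""
    dists = np.linalg.norm(exemplars - probe, axis=1)
    sims = np.exp(-sensitivity * dists)
    return sims[meters == 2].sum() / sims.sum()

# Ambiguous probe whose rhythm and form cues resemble the duple exemplars,
# while its timbre cue is neutral.
probe = np.array([0.8, 0.9, 0.5])
print(f"P(duple) = {meter_preference(probe, exemplars, meters):.2f}")
```

On this account, zeroing the weight of a cue dimension (e.g., timbre) should leave predictions nearly unchanged when that cue carries little variance, which is one way to interpret the null effect of timbre reported above.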
{"title":"Metrical Restoration From Local and Global Melodic Cues","authors":"Sarah C. Creel","doi":"10.1525/mp.2020.38.2.106","DOIUrl":"https://doi.org/10.1525/mp.2020.38.2.106","url":null,"abstract":"What factors influence listeners’ perception of meter in a musical piece or a musical style? Many cues are available in the musical “surface,” i.e., the pattern of sounds physically present during listening. Models of meter processing focus on the musical surface. However, percepts of meter and other musical features may also be shaped by reactivation of previously heard music, consistent with exemplar accounts of memory. The current study explores a phenomenon that is here termed metrical restoration: listeners who hear melodies with ambiguous meters report meter preferences that match previous listening experiences in the lab, suggesting reactivation of those experiences. Previous studies suggested that timbre and brief rhythmic patterns may influence metrical restoration. However, variations in the magnitude of effects in different experiments suggest that other factors are at work. Experiments reported here explore variation in metrical restoration as a function of: melodic diversity in timbre and tempo, associations of rhythmic patterns with particular melodies and meters, and associations of meter with overall melodic form. Rhythmic patterns and overall melodic form, but not timbre, had strong influences. Results are discussed with respect to style-specific or culture-specific musical processing, and everyday listening experiences. Implications for models of musical memory are also addressed.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"1 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44294994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interpersonal Entrainment in Music Performance
Pub Date: 2020-11-25 | DOI: 10.17605/OSF.IO/37FWS | Music Perception, 38(2), 136–194
M. Clayton, T. Eerola, Simone Tarsitani, Richard C. Jankowsky, Luis Jure, A. Poole, Martín Rocamora, Kelly Jakubowski
{"title":"Interpersonal Entrainment in Music Performance","authors":"M. Clayton, T. Eerola, Simone Tarsitani, Richard C. Jankowsky, Luis Jure, A. Poole, Martín Rocamora, Kelly Jakubowski","doi":"10.17605/OSF.IO/37FWS","DOIUrl":"https://doi.org/10.17605/OSF.IO/37FWS","url":null,"abstract":"","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"38 1","pages":"136-194"},"PeriodicalIF":2.3,"publicationDate":"2020-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47150828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interpersonal Entrainment in Music Performance
Pub Date: 2020-11-25 | DOI: 10.1525/mp.2020.38.2.136 | Music Perception, 38(2), 136–194
Martin Clayton, Kelly Jakubowski, Tuomas Eerola, Peter E. Keller, Antonio Camurri, Gualtiero Volpe, Paolo Alborno
Interpersonal musical entrainment—temporal synchronization and coordination between individuals in musical contexts—is a ubiquitous phenomenon related to music’s social functions of promoting group bonding and cohesion. Mechanisms other than sensorimotor synchronization are rarely discussed, while little is known about cultural variability or about how and why entrainment has social effects. In order to close these gaps, we propose a new model that distinguishes between different components of interpersonal entrainment: sensorimotor synchronization—a largely automatic process manifested especially with rhythms based on periodicities in the 100–2000 ms timescale—and coordination, extending over longer timescales and more accessible to conscious control. We review the state of the art in measuring these processes, mostly from the perspective of action production, and in so doing present the first cross-cultural comparisons between interpersonal entrainment in natural musical performances, with an exploratory analysis that identifies factors that may influence interpersonal synchronization in music. Building on this analysis we advance hypotheses regarding the relationship of these features to neurophysiological, social, and cultural processes. We propose a model encompassing both synchronization and coordination processes and the relationship between them, the role of culturally shared knowledge, and of connections between entrainment and social processes.
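For readers unfamiliar with how synchronization between performers is quantified, a common starting point is pairwise onset asynchronies plus circular statistics of relative phase. The sketch below illustrates these generic measures on invented onset data; it is not the authors' pipeline, which involves substantial preprocessing to match corresponding events across parts.

```python
# Hypothetical sketch of two standard entrainment measures: pairwise onset
# asynchrony and the resultant length of relative phase. Onset times invented.
import numpy as np

# Matched note onsets (s) for two performers; IOIs in the 100-2000 ms range.
onsets_a = np.array([0.00, 0.50, 1.01, 1.49, 2.00, 2.52])
onsets_b = np.array([0.02, 0.48, 1.04, 1.50, 1.97, 2.54])

asynchronies = onsets_b - onsets_a
print(f"mean |asynchrony| = {1000 * np.abs(asynchronies).mean():.1f} ms")

# Relative phase of B's onsets within A's inter-onset intervals, as angles.
iois = np.diff(onsets_a)
phases = 2 * np.pi * asynchronies[:-1] / iois
r = np.abs(np.mean(np.exp(1j * phases)))   # resultant length: 1 = perfect locking
print(f"phase-locking (resultant length) = {r:.3f}")
```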
{"title":"Interpersonal Entrainment in Music Performance","authors":"Martin Clayton,Kelly Jakubowski,Tuomas Eerola,Peter E. Keller,Antonio Camurri,Gualtiero Volpe,Paolo Alborno","doi":"10.1525/mp.2020.38.2.136","DOIUrl":"https://doi.org/10.1525/mp.2020.38.2.136","url":null,"abstract":"Interpersonal musical entrainment—temporal synchronization and coordination between individuals in musical contexts—is a ubiquitous phenomenon related to music’s social functions of promoting group bonding and cohesion. Mechanisms other than sensorimotor synchronization are rarely discussed, while little is known about cultural variability or about how and why entrainment has social effects. In order to close these gaps, we propose a new model that distinguishes between different components of interpersonal entrainment: sensorimotor synchronization—a largely automatic process manifested especially with rhythms based on periodicities in the 100–2000 ms timescale—and coordination, extending over longer timescales and more accessible to conscious control. We review the state of the art in measuring these processes, mostly from the perspective of action production, and in so doing present the first cross-cultural comparisons between interpersonal entrainment in natural musical performances, with an exploratory analysis that identifies factors that may influence interpersonal synchronization in music. Building on this analysis we advance hypotheses regarding the relationship of these features to neurophysiological, social, and cultural processes. We propose a model encompassing both synchronization and coordination processes and the relationship between them, the role of culturally shared knowledge, and of connections between entrainment and social processes.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"13 1","pages":"136-194"},"PeriodicalIF":2.3,"publicationDate":"2020-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138516931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
All Eyes on Me
Pub Date: 2020-11-25 | DOI: 10.1525/MP.2020.38.2.195 | Music Perception, 38(2)
M. Küssner, E. V. Dyck, Birgitta Burger, D. Moelants, P. Vansteenkiste
Duo musicians exhibit a broad variety of bodily gestures, but it is unclear how soloists’ and accompanists’ movements differ and to what extent they attract observers’ visual attention. In Experiment 1, seven musical duos’ body movements were tracked while they performed two pieces in two different conditions. In a congruent condition, soloist and accompanist behaved according to their expected musical roles; in an incongruent condition, the soloist behaved as accompanist and vice versa. Results revealed that behaving as soloist, regardless of the condition, led to more, smoother, and faster head and shoulder movements over a larger area than behaving as accompanist. Moreover, accompanists in the incongruent condition moved more than soloists in the congruent condition. In Experiment 2, observers watched videos of the duo performances with and without audio, while eye movements were tracked. Observers looked longer at musicians behaving as soloists compared to musicians behaving as accompanists, independent of their respective musical role. This suggests that visual attention was allocated to the most salient visuo-kinematic cues (i.e., expressive bodily gestures) rather than the most salient musical cues (i.e., the solo part). Findings are discussed regarding auditory-motor couplings and theories of motor control as well as auditory-visual integration and attention.
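The kinematic contrasts in Experiment 1 (more, smoother, faster movement over a larger area) correspond to standard quantities computed from motion-capture trajectories. The sketch below shows one generic way to derive them; it is not the authors' pipeline, and the sampling rate and toy trajectory are assumptions.

```python
# Hypothetical sketch: amount, speed, smoothness, and area of movement from a
# 3D head-marker trajectory (toy random-walk data standing in for mocap).
import numpy as np

fs = 120.0                                   # mocap sampling rate (Hz), assumed
rng = np.random.default_rng(1)
head = np.cumsum(rng.normal(size=(1200, 3)), axis=0) / 100   # toy trajectory (m)

velocity = np.gradient(head, 1 / fs, axis=0)
speed = np.linalg.norm(velocity, axis=1)
path_length = speed.sum() / fs               # total distance travelled (m)

# Jerk-based smoothness: lower mean squared jerk = smoother movement.
jerk = np.gradient(np.gradient(velocity, 1 / fs, axis=0), 1 / fs, axis=0)
mean_squared_jerk = (np.linalg.norm(jerk, axis=1) ** 2).mean()

# Movement area: bounding box in the horizontal plane (x, y).
area = np.ptp(head[:, 0]) * np.ptp(head[:, 1])
print(f"path = {path_length:.2f} m, area = {area:.3f} m^2, "
      f"mean squared jerk = {mean_squared_jerk:.1f}")
```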
{"title":"All Eyes on Me","authors":"M. Küssner, E. V. Dyck, Birgitta Burger, D. Moelants, P. Vansteenkiste","doi":"10.1525/MP.2020.38.2.195","DOIUrl":"https://doi.org/10.1525/MP.2020.38.2.195","url":null,"abstract":"Duo musicians exhibit a broad variety of bodily gestures, but it is unclear how soloists’ and accompanists’ movements differ and to what extent they attract observers’ visual attention. In Experiment 1, seven musical duos’ body movements were tracked while they performed two pieces in two different conditions. In a congruent condition, soloist and accompanist behaved according to their expected musical roles; in an incongruent condition, the soloist behaved as accompanist and vice versa. Results revealed that behaving as soloist, regardless of the condition, led to more, smoother, and faster head and shoulder movements over a larger area than behaving as accompanist. Moreover, accompanists in the incongruent condition moved more than soloists in the congruent condition. In Experiment 2, observers watched videos of the duo performances with and without audio, while eye movements were tracked. Observers looked longer at musicians behaving as soloists compared to musicians behaving as accompanists, independent of their respective musical role. This suggests that visual attention was allocated to the most salient visuo-kinematic cues (i.e., expressive bodily gestures) rather than the most salient musical cues (i.e., the solo part). Findings are discussed regarding auditory-motor couplings and theories of motor control as well as auditory-visual integration and attention.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46666728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semantic Dimensions of Sound Mass Music
Pub Date: 2020-11-25 | DOI: 10.1525/mp.2020.38.2.214 | Music Perception, 38(2), 214–242
Jason Noble, E. Thoret, Max Henry, S. McAdams
We combine perceptual research and acoustic analysis to probe the messy, pluralistic world of musical semantics, focusing on sound mass music. Composers and scholars describe sound mass with many semantic associations. We designed an experiment to evaluate the extent to which these associations are experienced by other listeners. Thirty-eight participants heard 40 excerpts of sound mass music and related contemporary genres and rated them along batteries of semantic scales. Participants also described their rating strategies for some categories. A combination of qualitative stimulus analyses, Cronbach’s alpha tests, and principal component analyses suggest that cross-domain mappings between semantic categories and musical properties are statistically coherent between participants, implying non-arbitrary relations. Some aspects of participants’ descriptions of their rating strategies appear to be reflected in their numerical ratings. We sought quantitative bases for these associations in the acoustic signals. After attempts to correlate semantic ratings with classical audio descriptors failed, we pursued a neuromimetic representation called spectrotemporal modulations (STMs), which explains much more of the variance in semantic ratings. This result suggests that semantic interpretations of music may involve qualities or attributes that are objectively present in the music, since computer simulation can use sound signals to partially reconstruct human semantic ratings.
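A crude stand-in for an STM representation is the modulation spectrum: a 2D Fourier transform of a spectrogram, giving energy as a joint function of temporal modulation rate and spectral modulation scale. The sketch below follows that generic recipe; the authors' neuromimetic representation is more elaborate, and the signal and framing parameters here are placeholders.

```python
# Hypothetical sketch: a spectrotemporal modulation spectrum as the 2D FFT of a
# log-magnitude spectrogram. Placeholder noise stands in for a music excerpt.
import numpy as np

fs = 16000
audio = np.random.randn(fs * 2)              # placeholder 2 s signal

# Short-time spectrogram (time x frequency, log magnitude).
frame, hop = 512, 128
window = np.hanning(frame)
frames = np.stack([audio[i:i + frame] * window
                   for i in range(0, len(audio) - frame, hop)])
spectrogram = np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

# 2D FFT over (time, frequency) yields the modulation spectrum.
stm = np.abs(np.fft.fftshift(np.fft.fft2(spectrogram - spectrogram.mean())))
n_t, n_f = spectrogram.shape
rates = np.fft.fftshift(np.fft.fftfreq(n_t, d=hop / fs))    # temporal mod. (Hz)
scales = np.fft.fftshift(np.fft.fftfreq(n_f, d=fs / frame)) # spectral mod. (cyc/Hz)
print(stm.shape, f"rate range up to {rates.max():.1f} Hz")
```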
{"title":"Semantic Dimensions of Sound Mass Music","authors":"Jason Noble, E. Thoret, Max Henry, S. McAdams","doi":"10.1525/mp.2020.38.2.214","DOIUrl":"https://doi.org/10.1525/mp.2020.38.2.214","url":null,"abstract":"WE COMBINE PERCEPTUAL RESEARCH AND ACOUStic analysis to probe the messy, pluralistic world of musical semantics, focusing on sound mass music. Composers and scholars describe sound mass with many semantic associations. We designed an experiment to evaluate to what extent these associations are experienced by other listeners. Thirty-eight participants heard 40 excerpts of sound mass music and related contemporary genres and rated them along batteries of semantic scales. Participants also described their rating strategies for some categories. A combination of qualitative stimulus analyses, Cronbach’s alpha tests, and principal component analyses suggest that crossdomain mappings between semantic categories and musical properties are statistically coherent between participants, implying non-arbitrary relations. Some aspects of participants’ descriptions of their rating strategies appear to be reflected in their numerical ratings. We sought quantitative bases for these associations in the acoustic signals. After attempts to correlate semantic ratings with classical audio descriptors failed, we pursued a neuromimetic representation called spectrotemporal modulations (STMs), which explains much more of the variance in semantic ratings. This result suggests that semantic interpretations of music may involve qualities or attributes that are objectively present in the music, since computer simulation can use sound signals to partially reconstruct human semantic ratings.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"38 1","pages":"214-242"},"PeriodicalIF":2.3,"publicationDate":"2020-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47332230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Influence of Rate and Accentuation on Subjective Rhythmization
Pub Date: 2020-09-09 | DOI: 10.1525/mp.2020.38.1.27 | Music Perception, 38(1), 27–45
Ève Poudrier
The parsing of undifferentiated tone sequences into groups of qualitatively distinct elements is one of the earliest rhythmic phenomena to have been investigated experimentally (Bolton, 1894). The present study aimed to replicate and extend these findings through online experimentation using a spontaneous grouping paradigm with forced-choice response (from 1 to 12 tones per group). Two types of isochronous sequences were used: equitone sequences, which varied only with respect to signal rate (200, 550, or 950 ms interonset intervals), and accented sequences, in which accents were added every two or three tones to test the effect of induced grouping (duple vs. triple) and accent type (intensity, duration, or pitch). In equitone sequences, participants’ grouping percepts (N = 4,194) were asymmetrical and tempo-dependent, with “no grouping” and groups of four being most frequently reported. In accented sequences, slower rate, induced triple grouping, and intensity accents correlated with increases in group length. Furthermore, the probability of observing a mixed metric type—that is, grouping percepts divisible by both two and three (6 and 12)—was found to be highest in faster sequences with induced triple grouping. These findings suggest that lower-level triple grouping gives rise to binary grouping percepts at higher metrical levels.
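The stimulus design lends itself to a compact description in code. The sketch below generates both sequence types under stated assumptions: the IOIs are those reported above, while the sequence length, base tone parameters, and accent magnitudes are illustrative placeholders.

```python
# Hypothetical sketch of the stimulus construction: isochronous sequences at
# one of three rates, optionally accented every 2nd (duple) or 3rd (triple)
# tone by intensity, duration, or pitch.
IOIS_MS = (200, 550, 950)   # the three interonset intervals used in the study

def make_sequence(ioi_ms, n_tones=24, accent_every=None, accent_type=None):
    """One sequence as a list of tone dicts.

    accent_every -- None for equitone sequences, 2 or 3 for accented ones
    accent_type  -- "intensity", "duration", or "pitch"
    """
    tones = []
    for i in range(n_tones):
        tone = {"onset_ms": i * ioi_ms, "level_db": 60, "dur_ms": 50, "f0_hz": 440.0}
        if accent_every and i % accent_every == 0:
            if accent_type == "intensity":
                tone["level_db"] += 6            # +6 dB, an assumed accent size
            elif accent_type == "duration":
                tone["dur_ms"] *= 2              # assumed lengthening
            elif accent_type == "pitch":
                tone["f0_hz"] *= 2 ** (2 / 12)   # +2 semitones, assumed
        tones.append(tone)
    return tones

# A triple-grouping sequence at the middle rate with intensity accents:
seq = make_sequence(550, accent_every=3, accent_type="intensity")
print([t["level_db"] for t in seq[:6]])          # [66, 60, 60, 66, 60, 60]
```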
{"title":"The Influence of Rate and Accentuation on Subjective Rhythmization","authors":"Ève Poudrier","doi":"10.1525/mp.2020.38.1.27","DOIUrl":"https://doi.org/10.1525/mp.2020.38.1.27","url":null,"abstract":"The parsing of undifferentiated tone sequences into groups of qualitatively distinct elements is one of the earliest rhythmic phenomena to have been investigated experimentally (Bolton, 1894). The present study aimed to replicate and extend these findings through online experimentation using a spontaneous grouping paradigm with forced-choice response (from 1 to 12 tones per group). Two types of isochronous sequences were used: equitone sequences, which varied only with respect to signal rate (200, 550, or 950 ms interonset intervals), and accented sequences, in which accents were added every two or three tones to test the effect of induced grouping (duple vs. triple) and accent type (intensity, duration, or pitch). In equitone sequences, participants’ grouping percepts (N = 4,194) were asymmetrical and tempo-dependent, with “no grouping” and groups of four being most frequently reported. In accented sequences, slower rate, induced triple grouping, and intensity accents correlated with increases in group length. Furthermore, the probability of observing a mixed metric type—that is, grouping percepts divisible by both two and three (6 and 12)—was found to be highest in faster sequences with induced triple grouping. These findings suggest that lower-level triple grouping gives rise to binary grouping percepts at higher metrical levels.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"38 1","pages":"27-45"},"PeriodicalIF":2.3,"publicationDate":"2020-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1525/mp.2020.38.1.27","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43728344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Musical Expertise Facilitates Dissonance Detection On Behavioral, Not On Early Sensory Level
Pub Date: 2020-09-09 | DOI: 10.1525/mp.2020.38.1.78 | Music Perception, 38(1), 78–98
Tanja Linnavalli, J. Ojala, Laura Haveri, V. Putkinen, K. Kostilainen, Sirke Seppänen, M. Tervaniemi
Consonance and dissonance are basic phenomena in the perception of chords that can be discriminated very early in sensory processing. Musical expertise has been shown to facilitate neural processing of various musical stimuli, but it is unclear whether this applies to detecting consonance and dissonance. Our study aimed to determine whether sensitivity to increasing levels of dissonance differs between musicians and nonmusicians, using a combination of neural (electroencephalographic mismatch negativity, MMN) and behavioral measurements (conscious discrimination). Furthermore, we wanted to see whether focusing attention on the sounds modulated the neural processing. We used chords composed of either highly consonant or highly dissonant intervals and further manipulated the degree of dissonance to create two levels of dissonant chords. Both groups discriminated dissonant chords from consonant ones neurally and behaviorally. The magnitude of the MMN differed only marginally between the more dissonant and the less dissonant chords. The musicians outperformed the nonmusicians in the behavioral task. As the dissonant chords elicited MMN responses in both groups, sensory dissonance seems to be discriminated at an early sensory level, irrespective of musical expertise, and the facilitating effects of musicianship for this discrimination may arise in later stages of auditory processing, appearing only in the behavioral auditory task.
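The MMN measure referred to above is conventionally computed as a deviant-minus-standard difference wave, summarized by its mean amplitude in a post-stimulus window. The sketch below illustrates that generic computation on toy data; the sampling rate, analysis window, and simulated deflection are assumptions, not the study's parameters.

```python
# Hypothetical sketch: MMN as the deviant-minus-standard ERP difference wave
# at one fronto-central channel, on simulated epochs.
import numpy as np

fs = 500                                    # EEG sampling rate (Hz), assumed
times = np.arange(-0.1, 0.4, 1 / fs)        # epoch from -100 to 400 ms

rng = np.random.default_rng(0)
standard = rng.normal(0, 1, (300, times.size))   # trials x samples (toy µV)
deviant = rng.normal(0, 1, (80, times.size))
deviant[:, (times > 0.1) & (times < 0.25)] -= 1.5   # simulated MMN deflection

difference = deviant.mean(axis=0) - standard.mean(axis=0)   # deviant - standard
window = (times >= 0.1) & (times <= 0.25)   # 100-250 ms, a typical MMN window
print(f"MMN mean amplitude = {difference[window].mean():.2f} µV")
```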
{"title":"Musical Expertise Facilitates Dissonance Detection On Behavioral, Not On Early Sensory Level","authors":"Tanja Linnavalli, J. Ojala, Laura Haveri, V. Putkinen, K. Kostilainen, Sirke Seppänen, M. Tervaniemi","doi":"10.1525/mp.2020.38.1.78","DOIUrl":"https://doi.org/10.1525/mp.2020.38.1.78","url":null,"abstract":"Consonance and dissonance are basic phenomena in the perception of chords that can be discriminated very early in sensory processing. Musical expertise has been shown to facilitate neural processing of various musical stimuli, but it is unclear whether this applies to detecting consonance and dissonance. Our study aimed to determine if sensitivity to increasing levels of dissonance differs between musicians and nonmusicians, using a combination of neural (electroencephalographic mismatch negativity, MMN) and behavioral measurements (conscious discrimination). Furthermore, we wanted to see if focusing attention to the sounds modulated the neural processing. We used chords comprised of either highly consonant or highly dissonant intervals and further manipulated the degree of dissonance to create two levels of dissonant chords. Both groups discriminated dissonant chords from consonant ones neurally and behaviorally. The magnitude of the MMN differed only marginally between the more dissonant and the less dissonant chords. The musicians outperformed the nonmusicians in the behavioral task. As the dissonant chords elicited MMN responses for both groups, sensory dissonance seems to be discriminated in an early sensory level, irrespective of musical expertise, and the facilitating effects of musicianship for this discrimination may arise in later stages of auditory processing, appearing only in the behavioral auditory task.","PeriodicalId":47786,"journal":{"name":"Music Perception","volume":"38 1","pages":"78-98"},"PeriodicalIF":2.3,"publicationDate":"2020-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1525/mp.2020.38.1.78","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42058273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}