
Latest Publications in Music Perception

Enjoy The Violence
IF 2.3 | Psychology (Tier 2) | Q1 Arts and Humanities | Pub Date: 2019-12-01 | DOI: 10.1525/mp.2019.37.2.95
Rosalie Ollivier, L. Goupil, M. Liuni, J. Aucouturier
Traditional neurobiological theories of musical emotions explain well why extreme music such as punk, hardcore or metal, whose vocal and instrumental characteristics share much similarity with acoustic threat signals, should evoke unpleasant feelings for a large proportion of listeners. Why it does not for metal music fans, however, remains a theoretical challenge: metal fans may differ from non-fans in how they process acoustic threat signals at the sub-cortical level, showing deactivated or reconditioned responses that differ from controls. Alternatively, it is also possible that appreciation for metal depends on the inhibition by cortical circuits of a normal low-order response to auditory threat. In a series of three experiments, we show here that, at a sensory level, metal fans actually react as negatively and as fast as non-fans, and even more accurately, to cues of auditory threat in vocal and instrumental contexts. Conversely, cognitive load appears to somewhat reduce fans' appreciation of metal to the level reported by non-fans. Taken together, these results are not compatible with the idea that extreme music lovers enjoy such music because of a different low-level response to threat; rather, they highlight a critical contribution of higher-order cognition to the aesthetic experience. These results are discussed in the light of recent higher-order theories of emotional consciousness, which we argue should be generalized to the emotional experience of music across musical genres.
Citations: 4
Perception-Based Classification of Expressive Musical Terms
IF 2.3 | Psychology (Tier 2) | Q1 Arts and Humanities | Pub Date: 2019-12-01 | DOI: 10.1525/mp.2019.37.2.147
Aviel Sulem, E. Bodner, N. Amir
Expressive Musical Terms (EMTs) are commonly used by composers as verbal descriptions of musical expressiveness and characters that performers are requested to convey. We suggest a classification of 55 of these terms, based on the perception of professional music performers who were asked to: 1) organize the considered EMTs in a two-dimensional plane in such a way that proximity reflects similarity; and 2) rate these EMTs according to valence, arousal, extraversion, and neuroticism, using 7-level Likert scales. Using a minimization procedure, we found that a satisfactory partition requires these EMTs to be organized in four clusters (whose centroids are associated with tenderness, happiness, anger, and sadness) located in the four quadrants of the valence-arousal plane of the circumplex model of affect developed by Russell (1980). In terms of the related positive-negative activation parameters, introduced by Watson and Tellegen (1985), we obtained a significant correlation between positive activation and extraversion and between negative activation and neuroticism. This demonstrates that these relations, previously observed in personality studies by Watson & Clark (1992a), extend to the musical field.
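The cluster solution described above can be illustrated with a minimal k-means sketch over (valence, arousal) coordinates. The term names, the coordinate values, and the use of scikit-learn are illustrative assumptions; this does not reproduce the authors' actual minimization procedure.

```python
# Minimal sketch: grouping expressive musical terms (EMTs) into four clusters
# in the valence-arousal plane. Terms and coordinates are hypothetical
# placeholders, two per quadrant, purely for illustration.
import numpy as np
from sklearn.cluster import KMeans

emts = ["dolce", "amoroso", "giocoso", "brillante",
        "furioso", "feroce", "lagrimoso", "mesto"]
coords = np.array([              # (valence, arousal) on 7-point scales
    [5.8, 2.4], [6.0, 2.9],      # tender-like: positive valence, low arousal
    [6.1, 5.9], [5.7, 6.2],      # happy-like: positive valence, high arousal
    [1.9, 6.3], [2.3, 6.0],      # angry-like: negative valence, high arousal
    [2.2, 2.1], [1.8, 2.6],      # sad-like: negative valence, low arousal
])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(coords)
for term, label in zip(emts, kmeans.labels_):
    print(f"{term}: cluster {label}")
print("cluster centroids (valence, arousal):")
print(kmeans.cluster_centers_)
```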
Citations: 4
A Whole Brain EEG Analysis of Musicianship
IF 2.3 | Psychology (Tier 2) | Q1 Arts and Humanities | Pub Date: 2019-09-01 | DOI: 10.1525/mp.2019.37.1.42
Estela Ribeiro, C. Thomaz
The neural activation patterns provoked in response to music listening can reveal whether a subject did or did not receive music training. In the current exploratory study, we have approached this two-group (musicians and nonmusicians) classification problem through a computational framework composed of the following steps: acoustic feature extraction; acoustic feature selection; trigger selection; EEG signal processing; and multivariate statistical analysis. We are particularly interested in analyzing the brain data on a global level, considering its activity registered in electroencephalogram (EEG) signals at a given time instant. Our experiment's results—with 26 volunteers (13 musicians and 13 nonmusicians) who listened to Johannes Brahms's Hungarian Dance No. 5—have shown that it is possible to linearly differentiate musicians and nonmusicians with classification accuracies that range from 69.2% (test set) to 93.8% (training set), despite the limited sample sizes available. Additionally, given the whole brain vector navigation method described and implemented here, our results suggest that it is possible to highlight the most expressive and discriminant changes in the participants' brain activity patterns depending on the acoustic feature extracted from the audio.
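As an illustration of the final classification step described above, the sketch below trains a linear classifier on per-subject EEG feature vectors and reports training- and test-set accuracy. The feature dimensionality, the random placeholder data, and the choice of scikit-learn's linear discriminant analysis are assumptions for illustration only; this is not the paper's actual pipeline.

```python
# Minimal sketch: linear two-group classification (musicians vs. nonmusicians)
# from EEG-derived feature vectors. Shapes and data are illustrative placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_features = 26, 64                  # e.g., one feature per electrode at a chosen instant
X = rng.normal(size=(n_subjects, n_features))    # placeholder EEG-derived features
y = np.array([0] * 13 + [1] * 13)                # 13 nonmusicians, 13 musicians

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print("training accuracy:", clf.score(X_train, y_train))
print("test accuracy:", clf.score(X_test, y_test))
```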
Citations: 3
Acoustically Expressing Affect
IF 2.3 | Psychology (Tier 2) | Q1 Arts and Humanities | Pub Date: 2019-09-01 | DOI: 10.1525/mp.2019.37.1.66
A. Battcock, Michael Schutz
Composers convey emotion through music by co-varying structural cues. Although the complex interplay provides a rich listening experience, this creates challenges for understanding the contributions of individual cues. Here we investigate how three specific cues (attack rate, mode, and pitch height) work together to convey emotion in Bach's Well-Tempered Clavier (WTC). In three experiments, we explore responses to (1) eight-measure excerpts and (2) musically “resolved” excerpts, and (3) investigate the role of different standard dimensional scales of emotion. In each experiment, thirty nonmusician participants rated perceived emotion along scales of valence and intensity (Experiments 1 & 2) or valence and arousal (Experiment 3) for 48 pieces in the WTC. Responses indicate listeners used attack rate, mode, and pitch height to make judgements of valence, but only attack rate for intensity/arousal. Commonality analyses revealed mode predicted the most variance for valence ratings, followed by attack rate, with pitch height contributing minimally. In Experiment 2, mode increased in predictive power compared to Experiment 1. For Experiment 3, using “arousal” instead of “intensity” showed similar results to Experiment 1. We discuss how these results complement and extend previous findings of studies with tightly controlled stimuli, providing additional perspective on complex issues of interpersonal communication.
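As a rough, regression-based companion to the commonality analyses mentioned above, the sketch below estimates each cue's unique contribution to valence ratings by dropping one predictor at a time and recording the loss in R². The data are random placeholders, not the WTC ratings, and only the unique-effect portion of a full commonality analysis is shown.

```python
# Minimal sketch: unique R-squared contributions of attack rate, mode, and
# pitch height to valence ratings, via drop-one-predictor regressions.
# Ratings and cue values are random placeholders.
import numpy as np

def r_squared(X, y):
    """R-squared of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(1)
cues = {"attack_rate": rng.normal(size=48),
        "mode": rng.integers(0, 2, size=48).astype(float),
        "pitch_height": rng.normal(size=48)}
# Placeholder ratings loosely driven by mode and attack rate.
valence = 0.5 * cues["mode"] + 0.3 * cues["attack_rate"] + rng.normal(scale=0.5, size=48)

X_full = np.column_stack(list(cues.values()))
r2_full = r_squared(X_full, valence)
for i, name in enumerate(cues):
    X_reduced = np.delete(X_full, i, axis=1)
    print(f"unique R^2 of {name}: {r2_full - r_squared(X_reduced, valence):.3f}")
```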
Citations: 10
Motown, Disco, and Drumming
IF 2.3 | Psychology (Tier 2) | Q1 Arts and Humanities | Pub Date: 2019-09-01 | DOI: 10.1525/mp.2019.37.1.26
Justin M. London, Birgitta Burger, Marc R. Thompson, Molly Hildreth, J. Wilson, Nick Schally, P. Toiviainen
In a study of tempo perception, London, Burger, Thompson, and Toiviainen (2016) presented participants with digitally “tempo-shifted” R&B songs (i.e., sped up or slowed down without otherwise altering their pitch or timbre). They found that while participants’ relative tempo judgments of original versus altered versions were correct, they no longer corresponded to the beat rate of each stimulus. Here we report on three experiments that further probe the relation(s) between beat rate, tempo-shifting, beat salience, melodic structure, and perceived tempo. Experiment 1 is a replication of London et al. (2016) using the original stimuli. Experiment 2 replaces the Motown stimuli with disco music, which has higher beat salience. Experiment 3 uses looped drum patterns, eliminating pitch and other cues from the stimuli and maximizing beat salience. The effect of London et al. (2016) was replicated in Experiment 1, present to a lesser degree in Experiment 2, and absent in Experiment 3. Experiments 2 and 3 also found that participants were able to make tempo judgments in accordance with BPM rates for stimuli that were not tempo-shifted. The roles of beat salience, melodic structure, and memory for tempo are discussed, and the TAE as an example of perceptual sharpening is considered.
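Pitch-preserving tempo shifts of the kind used for these stimuli can be produced with standard time-stretching tools; the sketch below uses librosa purely as an illustration (the authors' actual stimulus-preparation software is not stated), and "song.wav" plus the ±10% rates are placeholder choices.

```python
# Minimal sketch: speeding a track up or slowing it down without changing
# its pitch, via phase-vocoder time stretching. File name and rates are
# illustrative placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("song.wav", sr=None)            # keep the original sample rate

for rate in (0.9, 1.1):                               # 10% slower, 10% faster
    y_shifted = librosa.effects.time_stretch(y, rate=rate)
    sf.write(f"song_rate_{rate}.wav", y_shifted, sr)
```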
Citations: 1
The Relationship Between Portuguese Children's Use of Singing Voice and Singing Accuracy when Singing with Text and a Neutral Syllable
IF 2.3 | Psychology (Tier 2) | Q1 Arts and Humanities | Pub Date: 2019-06-01 | DOI: 10.1525/MP.2019.36.5.468
A. Pereira, Helena Rodrigues
The purpose of this study was to investigate the relationship between Portuguese children's use of singing voice and their singing accuracy on the pitches belonging to the Singing Voice Development Measure (SVDM) criterion patterns (Rutkowski, 2015), as well as the influence of singing with a neutral syllable or text on both variables. Children aged 4 to 9 (n = 137) were administered the SVDM individually and three raters evaluated recordings of the children's singing, both for the use of singing voice (i.e., effective use of pitch range and register) and singing accuracy. Prior to data analysis, the validity and reliability of the measure were examined and assured. A significant relationship was found between the two variables. Significant differences between response modes, favoring the neutral syllable, were found for singing accuracy but not for use of singing voice, suggesting that the use of a neutral syllable in classroom singing activities might be beneficial for improving accuracy. Older children and girls obtained higher scores for the use of singing voice and accuracy. Within a common pitch range, children with higher SVDM scores accurately sang a greater number of pitches, suggesting that expanding children's use of singing voice might also improve singing accuracy.
Citations: 2
A Single Item Measure for Identifying Musician and Nonmusician Categories Based on Measures of Musical Sophistication
IF 2.3 | Psychology (Tier 2) | Q1 Arts and Humanities | Pub Date: 2019-06-01 | DOI: 10.1525/MP.2019.36.5.457
J. D. Zhang, Emery Schubert
Musicians are typically identified in research papers by some single item measure (SIM) that focuses on just one component of musicality, such as expertise. Recently, musical sophistication has emerged as a more comprehensive approach by incorporating various components using multiple question items. However, the practice of SIM continues. The aim of this paper was to investigate which SIM in musical sophistication indexes best estimates musical sophistication. The Ollen Musical Sophistication Index (OMSI) and the Goldsmiths Musical Sophistication Index (Gold-MSI) were analyzed. The OMSI musician rank item (“Which title best describes you?”) was observed to be the best SIM for predicting OMSI and Gold-MSI scores. Analysis of the OMSI item indicated three parsimonious musical identity categories (MIC); namely, no musical identity (NMI), musical identity (MI), and strong musical identity (SMI). Further analyses of MIC against common SIMs used in literature showed characteristic profiles. For example, MIC membership according to years of private lessons are: NMI is < 6 years; MI is 6–10 years; and SMI is > 10 years. The finding of the study is that the SIM of musician rank should be used because of its face validity, correlation with musical sophistication, and plausible demarcation into the three MIC levels.
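The demarcation by years of private lessons reported above amounts to a simple rule; the sketch below writes that rule out explicitly, with a function name chosen here for illustration.

```python
# Minimal sketch: mapping years of private lessons onto the three musical
# identity categories (MIC) reported in the abstract. The function name is
# a hypothetical choice for illustration.
def mic_from_private_lessons(years: float) -> str:
    if years < 6:
        return "NMI"   # no musical identity
    if years <= 10:
        return "MI"    # musical identity
    return "SMI"       # strong musical identity

for years in (2, 6, 10, 15):
    print(years, "->", mic_from_private_lessons(years))
```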
Citations: 42
Vowel Formant Structure Predicts Metric Position in Hip-hop Lyrics
IF 2.3 | Psychology (Tier 2) | Q1 Arts and Humanities | Pub Date: 2019-06-01 | DOI: 10.1525/MP.2019.36.5.480
Paolo Ammirante, Fran Copelli
In order to be heard over the low-frequency energy of a loud orchestra, opera singers adjust their vocal tracts to increase high-frequency energy around 3,000 Hz (known as a “singer's formant”). In rap music, rhymes often coincide with the beat and thus may be masked by loud, low-frequency percussion events. How do emcees (i.e., rappers) avoid masking of on-beat rhymes? If emcees exploit formant structure, this may be reflected in the distribution of on- and off-beat vowels. To test this prediction, we used a sample of words from the MCFlow rap lyric corpus (Condit-Schultz, 2016). Frequency of occurrence of on- and off-beat words was compared. Each word contained one of eight vowel nuclei; population estimates of each vowel's first and second formant (F1 and F2) frequencies were obtained from an existing source. A bias was observed: vowels with higher F2, which are less likely to be masked by percussion, were favored for on-beat words. Words with lower F2 vowels, which may be masked, were more likely to deviate from the beat. Bias was most evident among rhyming words but persisted for nonrhyming words. These findings imply that emcees use formant structure to implicitly or explicitly target the intelligibility of salient lyric events.
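The placement bias described above can be illustrated with a simple contingency-table test of association between vowel F2 group and metric position; the counts below are invented placeholders, not figures from the MCFlow corpus.

```python
# Minimal sketch: testing whether on-beat placement is associated with
# vowel F2 (higher-F2 vs. lower-F2 nuclei). Counts are hypothetical
# placeholders, not results from the study.
from scipy.stats import chi2_contingency

#                 on-beat  off-beat
table = [[520, 310],    # higher-F2 vowel nuclei
         [400, 470]]    # lower-F2 vowel nuclei

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```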
Citations: 4
Electrophysiological Correlates of Key and Harmony Processing in 3-year-old Children
IF 2.3 | Psychology (Tier 2) | Q1 Arts and Humanities | Pub Date: 2019-06-01 | DOI: 10.1525/MP.2019.36.5.435
Kathleen A. Corrigall, L. Trainor
Infants and children are able to track statistical regularities in perceptual input, which allows them to acquire structural aspects of language and music, such as syntax. However, much more is known about the development of linguistic than of musical syntax. In the present study, we examined 3.5-year-olds’ implicit knowledge of Western musical pitch structure using electroencephalography (EEG). Event-related potentials (ERPs) were measured while children listened to chord sequences that either 1) followed Western harmony rules, 2) ended on a chord that went outside the key, or 3) ended on an in-key but harmonically less expected chord. Whereas adults tend to show an early right anterior negativity (ERAN) in response to unexpected chords (Koelsch, 2009), 3.5-year-olds in our study showed an immature response that was positive rather than negative in polarity. Our results suggest that very young children exhibit implicit knowledge of the pitch structure of Western music years before they have been shown to demonstrate that knowledge in behavioral tasks.
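At its core, the ERP comparison described above comes down to averaging EEG epochs by condition and subtracting the condition averages; the numpy sketch below shows that step on placeholder data (array shapes, sampling rate, and channel index are assumptions, not the study's recordings).

```python
# Minimal sketch: condition-average ERPs and a difference wave
# (unexpected minus expected chord endings) on placeholder epoched data.
import numpy as np

rng = np.random.default_rng(2)
fs = 250                                          # samples per second (assumed)
n_trials, n_channels, n_samples = 60, 32, fs      # 1-second epochs

expected = rng.normal(size=(n_trials, n_channels, n_samples))    # placeholder epochs
unexpected = rng.normal(size=(n_trials, n_channels, n_samples))

erp_expected = expected.mean(axis=0)      # average over trials -> (channels, samples)
erp_unexpected = unexpected.mean(axis=0)
difference_wave = erp_unexpected - erp_expected

# e.g., inspect one channel of the difference wave (index chosen arbitrarily here)
print(difference_wave[5].round(3))
```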
Citations: 9
A Reinvestigation of the Source Dilemma Hypothesis
IF 2.3 | Psychology (Tier 2) | Q1 Arts and Humanities | Pub Date: 2019-06-01 | DOI: 10.1525/MP.2019.36.5.448
Douglas A. Kowalewski, R. Friedman, Stan Zavoyskiy, W. Neill
In a recent article, Bonin, Trainor, Belyk, and Andrews (2016) proposed a novel way in which basic processes of auditory perception may influence affective responses to music. According to their source dilemma hypothesis (SDH), the relative fluency of a particular aspect of musical processing—the parsing of the music into distinct audio streams—is hedonically marked: Efficient stream segregation elicits pleasant affective experience whereas inefficient segregation results in unpleasant affective experience, thereby contributing to (dis)preference for a musical stimulus. Bonin et al. (2016) conducted two experiments, the results of which were ostensibly consistent with the SDH. However, their research designs introduced major confounds that undermined the ability of these initial studies to offer unequivocal evidence for their hypothesis. To address this, we conducted a large-scale (N = 311) constructive replication of Bonin et al. (2016; Experiment 2), significantly modifying the design to rectify these methodological shortfalls and thereby better assess the validity of the SDH. Results successfully replicated those of Bonin et al. (2016), although they indicated that source dilemma effects on music preference may be more modest than their original findings would suggest. Unresolved issues and directions for future investigation of the SDH are discussed.
Citations: 2