
Music Perception: Latest Publications

Measuring Children’s Harmonic Knowledge with Implicit and Explicit Tests
IF 2.3 | CAS Tier 2 (Psychology) | Q1 Arts and Humanities | Pub Date: 2022-04-01 | DOI: 10.1525/mp.2022.39.4.361
Kathleen A. Corrigall, B. Tillmann, E. Schellenberg
We used implicit and explicit tasks to measure knowledge of Western harmony in musically trained and untrained Canadian children. Younger children were 6–7 years of age; older children were 10–11. On each trial, participants heard a sequence of five piano chords. The first four chords established a major-key context. The final chord was the standard, expected tonic of the context or one of two deviant endings: the highly unexpected flat supertonic or the moderately unexpected subdominant. In the implicit task, children identified the timbre of the final chord (guitar or piano) as quickly as possible. Response times were faster for the tonic ending than for either deviant ending, but the magnitude of the priming effect was similar for the two deviants, and the effect did not vary as a function of age or music training. In the explicit task, children rated how good each chord sequence sounded. Ratings were highest for sequences with the tonic ending, intermediate for the subdominant, and lowest for the flat supertonic. Moreover, the difference between the tonic and deviant sequences was larger for older children with music training. Thus, the explicit task provided a more nuanced picture of musical knowledge than did the implicit task.
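As an illustration of how the implicit priming effect described above can be quantified, here is a minimal Python sketch with hypothetical response times (not the authors’ data or analysis code): the priming effect is the mean response-time difference between each deviant ending and the tonic ending.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial response times (ms) in the implicit timbre task,
# keyed by the harmonic function of the final chord.
rts = {
    "tonic": rng.normal(650, 80, size=24),            # expected ending
    "subdominant": rng.normal(680, 80, size=24),      # moderately unexpected
    "flat_supertonic": rng.normal(685, 80, size=24),  # highly unexpected
}

# Harmonic priming effect for one participant: how much slower responses are
# to a deviant ending than to the tonic ending.
for deviant in ("subdominant", "flat_supertonic"):
    priming_ms = rts[deviant].mean() - rts["tonic"].mean()
    print(f"priming effect ({deviant} - tonic): {priming_ms:.1f} ms")
```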
Citations: 1
Song Imitation in Congenital Amusia
IF 2.3 | CAS Tier 2 (Psychology) | Q1 Arts and Humanities | Pub Date: 2022-04-01 | DOI: 10.1525/mp.2022.39.4.341
Ariadne Loutrari, Cunmei Jiang, Fang Liu
Congenital amusia is a neurogenetic disorder of pitch perception that may also compromise pitch production. Despite amusics’ long-documented difficulties with pitch, previous evidence suggests that familiar music may have an implicit facilitative effect on their performance. It remains, however, unknown whether vocal imitation of song in amusia is influenced by melody familiarity and the presence of lyrics. To address this issue, 13 Mandarin-speaking amusics and 13 matched controls imitated novel song segments with lyrics and on the syllable /la/. Eleven participants in each group also imitated segments of a familiar song. Subsequent acoustic analysis was conducted to measure pitch and timing matching accuracy based on eight acoustic measures. While amusics showed worse imitation performance than controls across seven out of the eight pitch and timing measures, melody familiarity was found to have a favorable effect on their performance on three pitch-related acoustic measures. The presence of lyrics did not affect either group’s performance substantially. Correlations were observed between amusics’ performance on the Montreal Battery of Evaluation of Amusia and imitation of the novel song. We discuss implications in terms of music familiarity, memory demands, the relevance of lexical information, and the link between perception and production.
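The abstract does not list its eight acoustic measures, but a standard pitch-matching measure is the signed error in cents between produced and target fundamental frequencies. The sketch below illustrates that computation on hypothetical values; it is not the authors’ analysis.

```python
import numpy as np

def pitch_error_cents(produced_hz, target_hz):
    """Signed pitch-matching error in cents (100 cents = 1 semitone)."""
    return 1200.0 * np.log2(np.asarray(produced_hz) / np.asarray(target_hz))

# Hypothetical note-by-note F0 values (Hz) for one imitated song segment.
target = np.array([220.0, 246.9, 261.6, 293.7])    # intended melody
produced = np.array([215.0, 250.0, 255.0, 300.0])  # sung imitation

errors = pitch_error_cents(produced, target)
print("per-note error (cents):", np.round(errors, 1))
print("mean absolute error (cents):", round(float(np.abs(errors).mean()), 1))
```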
Citations: 2
The Associations Between Music Training, Musical Working Memory, and Visuospatial Working Memory
IF 2.3 | CAS Tier 2 (Psychology) | Q1 Arts and Humanities | Pub Date: 2022-04-01 | DOI: 10.1525/mp.2022.39.4.401
Sebastian Silas, Daniel Müllensiefen, R. Gelding, K. Frieler, Peter M. C. Harrison
Prior research studying the relationship between music training (MT) and more general cognitive faculties, such as visuospatial working memory (VSWM), often fails to include tests of musical memory. This may result in causal pathways between MT and other such variables being misrepresented, potentially explaining certain ambiguous findings in the literature concerning the relationship between MT and executive functions. Here we address this problem using latent variable modeling and causal modeling to study a triplet of variables related to working memory: MT, musical working memory (MWM), and VSWM. The triplet framing allows for the potential application of d-separation (similar to mediation analysis) and V-structure search, which is particularly useful since, in the absence of expensive randomized control trials, it can test causal hypotheses using cross-sectional data. We collected data from 148 participants using a battery of MWM and VSWM tasks as well as a MT questionnaire. Our results suggest: 1) VSWM and MT are unrelated, conditional on MWM; and 2) by implication, there is no far transfer between MT and VSWM without near transfer. However, the data are unable to distinguish an unambiguous causal structure. We conclude by discussing the possibility of extending these models to incorporate more complex or cyclic effects.
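As a rough illustration of the conditional-independence reasoning behind d-separation, the sketch below simulates a chain MT → MWM → VSWM and computes the partial correlation of MT and VSWM controlling for MWM via regression residuals. The data and variable names are hypothetical; this is not the authors’ latent variable or causal model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 148  # sample size reported in the abstract

# Simulated data consistent with the reported structure:
# MT influences MWM, and MWM influences VSWM (no direct MT -> VSWM path).
mt = rng.normal(size=n)              # music training
mwm = 0.6 * mt + rng.normal(size=n)  # musical working memory
vswm = 0.5 * mwm + rng.normal(size=n)  # visuospatial working memory

def residualize(y, x):
    """Residuals of y after removing its linear dependence on x."""
    slope, intercept, *_ = stats.linregress(x, y)
    return y - (intercept + slope * x)

# Partial correlation of MT and VSWM controlling for MWM: near zero here,
# consistent with conditional independence given MWM.
r_partial, p = stats.pearsonr(residualize(mt, mwm), residualize(vswm, mwm))
print(f"partial r(MT, VSWM | MWM) = {r_partial:.3f}, p = {p:.3f}")
```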
Citations: 7
Can the Intended Messages of Mismatched Lexical Tone in Igbo Music Be Understood? A Test for Listeners’ Perception of the Matched Versus Mismatched Compositions
IF 2.3 | CAS Tier 2 (Psychology) | Q1 Arts and Humanities | Pub Date: 2022-04-01 | DOI: 10.1525/mp.2022.39.4.371
Sunday Ofuani
In tone languages, alteration of lexical tone changes the intended meaning. This implies that composers should likewise match lexical tone in their music for intelligible communication of the intended textual messages, a compositional approach termed Lexical Tone Determinants (LTD) in this study. Yet, in the Ìgbò language setting, some composers creatively disregard or mismatch lexical tone, an approach termed Musical/Creative Determinants (M/CD). It is believed that mismatched lexical tone in Ìgbò music alters listeners’ comprehension of the intended messages; on the other hand, it is argued that a thorough match of lexical tone constrains musical creativity. Listeners’ perception of textual messages in LTD and M/CD music has not been empirically tested side by side to verify whether comprehension is lost, at least in the Ìgbò language context. This study addresses that empirical gap, using comparative measures to collect data on listeners’ perception of newly composed LTD and M/CD pieces in live performance. Specifically, it examines whether mismatched lexical tone in Ìgbò music alters message comprehension. The data were collated, presented, and analyzed statistically, with a chi-square test used to evaluate the difference in message comprehension between the two approaches.
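The abstract names chi-square as the test of the comprehension difference between LTD and M/CD pieces. Below is a minimal sketch of such a test on a hypothetical contingency table (illustrative counts only, not the study’s data).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of listeners who did / did not comprehend the intended
# textual message, by compositional approach (values are illustrative only).
#                 comprehended, not comprehended
table = np.array([[42, 8],    # LTD: lexical tone matched
                  [25, 25]])  # M/CD: lexical tone mismatched

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
print("expected counts under independence:\n", np.round(expected, 1))
```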
Citations: 1
Embodied Meter Revisited
IF 2.3 | CAS Tier 2 (Psychology) | Q1 Arts and Humanities | Pub Date: 2022-02-01 | DOI: 10.1525/mp.2022.39.3.249
P. Toiviainen, Emily Carlson
Previous research has shown that humans tend to embody musical meter at multiple beat levels during spontaneous dance. This work has been based on identifying typical periodic movement patterns, or eigenmovements, and has relied on time-domain analyses. The current study: 1) presents a novel method of using time-frequency analysis in conjunction with group-level tensor decomposition; 2) compares its results to time-domain analysis; and 3) investigates how the amplitude of eigenmovements depends on musical content and genre. Data comprised three-dimensional motion capture of 72 participants’ spontaneous dance movements to 16 stimuli spanning eight different genres. Each trial was subjected to a discrete wavelet transform, concatenated into a trial-space-frequency tensor, and decomposed using tensor decomposition. Twelve movement primitives, or eigenmovements, were identified, eleven of which were frequency locked with one of four metrical levels. The results suggest that time-frequency decomposition can more efficiently group movement directions together. Furthermore, the employed group-level decomposition allows for a straightforward analysis of interstimulus and interparticipant differences in music-induced movement. The amplitude of eigenmovements was found to depend on the amount of fluctuation in the music, particularly at the one- and two-beat levels.
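A simplified sketch of the core pipeline idea, under assumed data: compute a discrete wavelet transform per trial and marker, collect band energies into a trial × space × frequency tensor, and decompose it. PyWavelets is used for the DWT, and a plain SVD on the unfolded tensor stands in for the group-level tensor decomposition reported in the study; all sizes and signals are hypothetical.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(2)
n_trials, n_markers, n_samples = 16, 20, 1024  # hypothetical sizes

# Simulated movement signals (one channel per marker) for each trial.
motion = rng.normal(size=(n_trials, n_markers, n_samples))

# Discrete wavelet transform per trial/marker; keep energy per decomposition
# level as a coarse "frequency" axis, giving a trial x space x frequency tensor.
level = 5
tensor = np.zeros((n_trials, n_markers, level + 1))
for t in range(n_trials):
    for m in range(n_markers):
        coeffs = pywt.wavedec(motion[t, m], "db4", level=level)
        tensor[t, m] = [np.sum(c ** 2) for c in coeffs]

# Stand-in for group-level tensor decomposition: SVD of the tensor unfolded
# along the trial mode, yielding component loadings over space x frequency.
unfolded = tensor.reshape(n_trials, -1)
u, s, vt = np.linalg.svd(unfolded - unfolded.mean(axis=0), full_matrices=False)
components = vt[:3].reshape(3, n_markers, level + 1)
print("explained variance ratios:", np.round(s[:3] ** 2 / np.sum(s ** 2), 3))
```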
Citations: 3
The Idiosyncrasy of Involuntary Musical Imagery Repetition (IMIR) Experiences
IF 2.3 | CAS Tier 2 (Psychology) | Q1 Arts and Humanities | Pub Date: 2022-02-01 | DOI: 10.1525/mp.2022.39.3.320
Taylor A. Liptak, D. Omigie, Georgia A. Floridou
Involuntary musical imagery repetition (IMIR), colloquially known as “earworms,” is a form of musical imagery that arises involuntarily and repeatedly in the mind. A growing number of studies, based on retrospective reports, suggest that IMIR experiences are associated with certain musical features, such as fast tempo and the presence of lyrics, and with individual differences in music training and engagement. However, research to date has not directly assessed the effect of such musical features on IMIR and findings about individual differences in music training and engagement are mixed. Using a cross-sectional design (Study 1, n = 263), we examined IMIR content in terms of tempo (fast, slow) and presence of lyrics (instrumental, vocal), and IMIR characteristics (frequency, duration of episode and section) in relation to 1) the musical content (tempo and lyrics) individuals most commonly expose themselves to (music-listening habits), and 2) music training and engagement. We also used an experimental design (Study 2, n = 80) to test the effects of tempo (fast or slow) and the presence of lyrics (instrumental or vocal) on IMIR retrieval and duration. Results from Study 1 showed that the content of music that individuals are typically exposed to with regard to tempo and lyrics predicted and resembled their IMIR content, and that music engagement, but not music training, predicted IMIR frequency. Music training was, however, shown to predict the duration of IMIR episodes. In the experiment (Study 2), tempo did not predict IMIR retrieval, but the presence of lyrics influenced IMIR duration. Taken together, our findings suggest that IMIR is an idiosyncratic experience primed by the music-listening habits and music engagement of the individual.
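As a loose illustration of the cross-sectional prediction described in Study 1 (engagement, but not training, predicting IMIR frequency), the sketch below fits an ordinary least squares regression on simulated data; variable names and effect sizes are assumptions, not the authors’ results.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 263  # Study 1 sample size reported in the abstract

# Hypothetical standardized predictors and outcome (illustrative only).
engagement = rng.normal(size=n)  # music engagement
training = rng.normal(size=n)    # music training
imir_freq = 0.4 * engagement + 0.0 * training + rng.normal(size=n)

# Ordinary least squares via least squares on the design matrix:
# does engagement and/or training predict IMIR frequency?
X = np.column_stack([np.ones(n), engagement, training])
coefs, *_ = np.linalg.lstsq(X, imir_freq, rcond=None)
print("intercept, b_engagement, b_training:", np.round(coefs, 3))
```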
Citations: 3
The Effect of Subjective Fatigue on Auditory Processing in Musicians and Nonmusicians
IF 2.3 | CAS Tier 2 (Psychology) | Q1 Arts and Humanities | Pub Date: 2022-02-01 | DOI: 10.1525/mp.2022.39.3.309
Saransh Jain, N. P. Nataraja, V. Narne
We assessed the effect of fatigue on temporal resolution and speech-perception-in-noise abilities in trained instrumental musicians. Trained instrumental musicians (n = 39) and theater artists serving as nonmusicians (n = 37) participated in a pretest-posttest quasiexperimental design. Fatigue was measured using a visual analog scale (VAS) across eight fatigue categories. Temporal release of masking was used to measure temporal resolution, and auditory stream segregation was used to assess speech perception in noise. Testing was carried out at two time points: before and after rehearsal. Each participant rehearsed for five to six hours: musicians played musical instruments, and theater artists conducted stage practice. The results revealed significantly lower VAS scores for both musicians and nonmusicians after rehearsal, indicating that both groups were fatigued after rehearsal. In the pre-fatigue condition, musicians had higher scores for temporal release of masking and lower scores for auditory stream segregation than nonmusicians, indicating musicians’ edge in auditory processing abilities. However, no such differences between musicians and nonmusicians were observed in the post-fatigue testing. We infer that the music-training-related advantage in temporal resolution and speech perception in noise may have been reduced by fatigue. We therefore recommend that musicians consider fatigue a significant factor, as it might affect their performance in auditory processing tasks. Future researchers should also consider fatigue as a variable when measuring auditory processing in musicians. However, we restricted auditory processing to temporal resolution and speech perception in noise only; generalizing these results to other auditory processes requires further investigation.
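The abstract does not specify the statistical test, but a natural analysis for this pretest-posttest design is a paired comparison of VAS scores before and after rehearsal. A minimal sketch on hypothetical ratings (assuming, as reported, lower post-rehearsal scores indicate greater fatigue):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
n = 39  # musician group size reported in the abstract

# Hypothetical 0-100 visual analog scale (VAS) ratings, averaged over the
# eight fatigue categories; lower post-rehearsal scores indicate more fatigue.
vas_pre = np.clip(rng.normal(70, 10, size=n), 0, 100)
vas_post = np.clip(vas_pre - rng.normal(15, 8, size=n), 0, 100)

# Paired comparison: the same participants before vs. after rehearsal.
t, p = ttest_rel(vas_post, vas_pre)
print(f"mean change = {np.mean(vas_post - vas_pre):.1f} VAS points, "
      f"t({n - 1}) = {t:.2f}, p = {p:.4f}")
```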
Citations: 0
Beat Perception and Production in Musicians and Dancers
IF 2.3 | CAS Tier 2 (Psychology) | Q1 Arts and Humanities | Pub Date: 2022-02-01 | DOI: 10.1525/mp.2022.39.3.229
Tram T N Nguyen, Riya Sidhu, J. Everling, Miranda C. Wickett, A. Gibbings, Jessica A. Grahn
The ability to perceive and produce a beat is believed to be universal in humans, but individual ability varies. The current study examined four factors that may influence beat perception and production capacity: 1) expertise: music or dance, 2) training style: percussive or nonpercussive, 3) stimulus modality: auditory or visual, and 4) movement type: finger-tap or whole-body bounce. Experiment 1 examined how expertise and training style influenced beat perception and production performance using an auditory beat perception task and a finger-tapping beat production task. Experiment 2 used a similar sample with an audiovisual variant of the beat perception task, and a standing knee-bend (bounce) beat production task to assess whole-body movement. The data showed that: 1) musicians were more accurate in a finger-tapping beat synchronization task compared to dancers and controls, 2) training style did not significantly influence beat perception and production, 3) visual beat information did not benefit any group, and 4) beat synchronization in a full-body movement task was comparable for musicians and dancers; both groups outperformed controls. The current study suggests that the type of task and measured response interacts with expertise, and that expertise effects may be masked by selection of nonoptimal response types.
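One common way to score the finger-tapping synchronization task mentioned above is the asynchrony between each tap and the nearest stimulus beat. The sketch below computes mean and mean absolute asynchrony for hypothetical tap times; it is not the authors’ scoring procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stimulus beat times (s) at 120 BPM and one participant's taps.
beat_times = np.arange(0, 20, 0.5)  # 0.5 s inter-beat interval
tap_times = beat_times + rng.normal(0.0, 0.03, beat_times.size)  # jittered taps

# Asynchrony: signed offset of each tap from its nearest beat.
nearest = beat_times[np.argmin(np.abs(tap_times[:, None] - beat_times[None, :]), axis=1)]
asynchrony = tap_times - nearest

print(f"mean asynchrony: {1000 * asynchrony.mean():+.1f} ms")
print(f"mean absolute asynchrony: {1000 * np.abs(asynchrony).mean():.1f} ms")
```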
Citations: 4
Violinists Employ More Expressive Gesture and Timing Around Global Musical Resolutions
IF 2.3 | CAS Tier 2 (Psychology) | Q1 Arts and Humanities | Pub Date: 2022-02-01 | DOI: 10.1525/mp.2022.39.3.268
Aditya Chander, Madeline Huberth, S. Davis, Samantha Silverstein, T. Fujioka
Performers express musical structure using variations in dynamics, timbre, timing, and physical gesture. Previous research on instrumental performance of Western classical music has identified increased nontechnical motion (movement considered supplementary to producing sound) and ritardando at cadences. Cadences typically provide resolution to built-up tension at differing levels of importance according to the hierarchical structure of music. Thus, we hypothesized that performers would embody these differences by employing nontechnical motion and rubato, even when not explicitly asked to express them. Expert violinists performed the Allemande from Bach’s Flute Partita in a standing position for motion capture and audio recording; we then examined nontechnical motion and rubato in four cadential excerpts (two locally important, two globally important) and four noncadential excerpts. Each excerpt was segmented into the buildup to and departure from the dominant-tonic progression. Cadential excerpts showed increased ritardando as well as more nontechnical motion, such as side-to-side whole-body swaying and torso rotation, compared to noncadential excerpts. Moreover, violinists used more nontechnical motion and ritardando in the departure segments of the global cadences, and the buildups also showed the global-local contrast. Our results extend previous findings on the expression of cadences by highlighting the hierarchical nature of embodied musical resolution.
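As an illustration of how ritardando and nontechnical motion might be quantified, the sketch below computes a tempo-slowing index as the slope of inter-onset intervals across a hypothetical cadential excerpt, and a simple quantity-of-motion measure from a simulated torso marker. Both measures and all data are assumptions, not the study’s actual analysis.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(6)

# Hypothetical note onset times (s) for one cadential excerpt: inter-onset
# intervals (IOIs) lengthen toward the cadence (ritardando).
iois = 0.30 + 0.015 * np.arange(12) + rng.normal(0, 0.005, 12)
onsets = np.concatenate([[0.0], np.cumsum(iois)])

# Ritardando index: a positive slope of IOI over note position means slowing.
slope, *_ = linregress(np.arange(iois.size), iois)
print(f"IOI slope: {1000 * slope:.1f} ms per note")

# Hypothetical torso-marker positions (frames x 3); quantity of motion as the
# summed frame-to-frame displacement, a proxy for nontechnical movement.
torso = np.cumsum(rng.normal(0, 0.002, size=(600, 3)), axis=0)
qom = np.sum(np.linalg.norm(np.diff(torso, axis=0), axis=1))
print(f"quantity of motion: {qom:.3f} m")
```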
Citations: 0
The Roles of Absolute Pitch and Timbre in Plink Perception
IF 2.3 | CAS Tier 2 (Psychology) | Q1 Arts and Humanities | Pub Date: 2022-02-01 | DOI: 10.1525/mp.2022.39.3.289
Rebecca N. Faubion-Trejo, James T. Mantell
Listeners can recognize musical excerpts less than one second in duration (plinks). We investigated the roles of timbre and implicit absolute pitch for plink identification, and the time course associated with processing these cues, by measuring listeners’ recognition, response time, and recall of original, mistuned, reversed, and temporally shuffled plinks that were extracted from popular song recordings. We hypothesized that performance would be best for the original plinks because their acoustic contents were encoded in long-term memory, but that listeners would also be able to identify the manipulated plinks by extracting dynamic and average spectral content. In accordance with our hypotheses, participants responded most rapidly and accurately to the original plinks, although, notably, they were capable of recognition and recall across all conditions. Our observation of plink recall in the shuffled condition suggests that temporal orderliness is not necessary for plink perception and instead provides evidence for the role of average spectral content. We interpret our results to suggest that listeners process acoustic absolute pitch and timbre information to identify plinks, and we explore the implications for local and global acoustic feature processing.
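The reversed and temporally shuffled manipulations described above can be illustrated directly on an audio sample array. The sketch below uses synthetic audio and an assumed 400 ms plink length; the mistuning manipulation is omitted because it requires pitch shifting. This is illustrative only, not the authors’ stimulus-preparation code.

```python
import numpy as np

rng = np.random.default_rng(7)
sr = 44100      # sample rate (Hz)
plink_ms = 400  # assumed plink duration (sub-second excerpt)

# Stand-in for an excerpt from a song recording (mono samples).
audio = rng.normal(size=sr * 5).astype(np.float32)
start = sr * 2
plink = audio[start:start + sr * plink_ms // 1000]

# Reversed plink: same average spectral content, time order inverted.
reversed_plink = plink[::-1]

# Temporally shuffled plink: cut into short chunks and reorder them,
# preserving average spectral content but destroying temporal order.
chunk = sr * 50 // 1000  # 50 ms chunks
n_chunks = plink.size // chunk
chunks = plink[: n_chunks * chunk].reshape(n_chunks, chunk)
shuffled_plink = chunks[rng.permutation(n_chunks)].reshape(-1)

print(len(plink), len(reversed_plink), len(shuffled_plink))
```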
Citations: 1