
Latest Articles in Language and Speech

Form-Meaning Relations in Russian Confirmative and Surprise Declarative Questions.
IF 1.1 | CAS Tier 2 (Literature) | Q3 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-29 | DOI: 10.1177/00238309251314862
Andrei Munteanu, Angelika Kiss

Declarative questions (DQs) are declarative sentences used as questions. As declaratives, they differ from information-seeking polar questions (ISQs) in their syntax, and as biased questions, they differ from polar questions because they can convey various epistemic stances: a request for confirmation, surprise, or incredulity. Most studies of their intonation compare just one subtype to ISQs. In this paper, we present a production study in which participants pronounced ISQs, confirmative and surprise DQs, and assertions in Russian. We analyzed the pitch and duration of the target utterances, as these prosodic cues have proved important in the formal markedness of various biased question types across languages. A principal component analysis (PCA) on the pitch contours shows that DQs bear the same rise-fall contour as ISQs, but its peak falls on the stressed syllable of the last word of the sentence instead of on the verb. The intonation of surprise DQs differs from that of confirmative ones in that surprise DQs also exhibit a slight peak on the subject. Pitch alone is thus enough to distinguish the four utterance types tested. The PCA was also used to identify higher-level trends in the data (principal components), two of which appear to correspond to core semantic properties, namely belief change and commitment. In addition to intonation, speaker commitment also correlates with utterance duration.
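A minimal sketch of what PCA over time-normalized pitch contours looks like in practice. The data here are synthetic (40 utterances, 20 samples each) and the preprocessing is an assumption; the study's actual pipeline is not specified in this abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 40 utterances x 20 time-normalized F0 samples
# (e.g., semitones relative to each speaker's mean).
contours = rng.normal(0.0, 2.0, size=(40, 20))

# PCA via SVD of the mean-centered data: each contour is expressed as the
# mean shape plus weighted "principal contour shapes" (the components).
centered = contours - contours.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T          # per-utterance scores on PC1 and PC2
explained = (s ** 2) / np.sum(s ** 2)  # variance ratio per component

print(scores.shape)  # (40, 2): one score pair per utterance
```

The per-utterance scores can then be compared across utterance types, and the component shapes inspected for interpretable trends (here, belief change and commitment).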

Citations: 0
Sleep Soundly! Sleep Deprivation Impairs Perception of Spoken Sentences in Challenging Listening Conditions.
IF 1.1 | CAS Tier 2 (Literature) | Q3 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-25 | DOI: 10.1177/00238309251322954
Boaz M Ben-David, Michal Icht, Gil Zukerman, Nir Fink, Leah Fostick

Speech perception, a daily task crucial for social interaction, is often performed after sleep deprivation (SD). However, there is only scant research on the effects of SD on real-life speech tasks. Speech-processing models (FUEL, ELU) suggest that challenging listening conditions require a greater allocation of cognitive resources, while ideal listening conditions (speech in quiet) require minimal resources. Therefore, SD, which reduces cognitive reserve, may adversely affect speech perception under challenging, but not ideal, conditions. The goal of this study was to test this by manipulating the extent of available resources (with/without SD) and task difficulty in three conditions: sentences presented (a) in quiet, (b) in background noise, and (c) with emotional prosody, where participants identified the emotions conveyed by the speaker. The performance of young adults (n = 41) was assessed twice, after nocturnal sleep and after 24-hr SD, in three tasks: (a) sentence repetition in quiet, (b) sentence repetition in noise, and (c) emotion identification of spoken sentences. Results partially supported our hypotheses. The perception of spoken sentences was impaired by SD, but noise level did not interact with the SD effect. Results suggest that 24-hr SD reduces cognitive resources, which in turn impairs listeners' ability (or motivation) to perform daily functions of speech perception. Theoretically, the findings directly relate SD to speech perception, supporting current theoretical speech models. Clinically, we suggest that SD should be considered in daily clinical settings, e.g., hearing tests. Finally, professions that require shift work, such as health care, should consider the negative effects of SD on spoken communication.

Citations: 0
Dimensionality Reduction in Lingual Articulation of Vowels: Evidence From Lax Vowels in Northern Anglo-English.
IF 1.1 | CAS Tier 2 (Literature) | Q3 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-25 | DOI: 10.1177/00238309251320581
Patrycja Strycharczuk, Sam Kirkham, Emily Gorman, Takayuki Nagamine

There is a long-standing debate on the relevant articulatory dimensions for describing vowel production. In the absence of a theoretical or methodological consensus, different articulatory studies of vowels rely on different measures, which leads to a lack of comparability between sets of results. This paper addresses the problem of how to parametrise the tongue measurements relevant to vowels, obtained from midsagittal articulatory imaging. We focus on the lax vowel subsystem in Northern Anglo-English. A range of measures quantifying tongue position, height, and shape are extracted from an ultrasound dataset representing 40 speakers. These measures are compared based on how well they capture the lingual contrast between different vowels, how stable they are across speakers, and how intercorrelated they are. The results suggest that different measures are preferred for different vowels, which supports a multi-dimensional approach to quantifying vowel articulation.
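One of the comparison criteria named here, intercorrelation between candidate measures, can be sketched as a correlation matrix over per-token values. The measure names and the correlation structure below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_tokens = 200

# Hypothetical per-token tongue measures (arbitrary units).
height = rng.normal(0, 1, n_tokens)
backness = 0.6 * height + rng.normal(0, 0.8, n_tokens)  # partly redundant with height
curvature = rng.normal(0, 1, n_tokens)                  # largely independent

# 3x3 intercorrelation matrix: high off-diagonal values flag
# measures that carry overlapping information.
r = np.corrcoef(np.stack([height, backness, curvature]))
print(np.round(r, 2))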

Citations: 0
Speech Fluency Production and Perception in L1 (Slovak) and L2 (English) Read Speech.
IF 1.1 | CAS Tier 2 (Literature) | Q3 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-01 | Epub Date: 2024-03-05 | DOI: 10.1177/00238309241230899
Lucia Mareková, Štefan Beňuš

Research on fluency in native (L1) and non-native (L2) speech production and perception helps us understand how individual L1 speaking style might affect perceived L2 fluency and how this relationship might be reflected in L1 versus L2 oral assessment. While the relationship between the production and perception of fluency in spontaneous speech has been studied, the information provided by reading has been overlooked. We argue that reading provides a direct and controlled way to assess language proficiency that might complement information gained from spontaneous speaking. This work analyzes the relationship between speech fluency production and perception in passages of L1 (Slovak) and L2 (English) read by 57 undergraduate Slovak students of English and rated for fluency by 15 English teachers who are native speakers of Slovak. We compare acoustic production measures between L1 and L2 and analyze how their effect on perceived fluency differs between the two languages. Our main finding is that the articulation rate, the overall number of pauses, and the number of between-clause and mid-clause pauses predict ratings differently in L1 Slovak versus L2 English. The speech rate and durations of pauses predict ratings similarly in both languages. The contribution of our results to understanding fluency aspects of spontaneous and read speech, the relationship between L1 and L2, the relationship between production and perception, and the teaching of L2 English is discussed.
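The production measures named here (articulation rate, pause counts) are typically computed from a time-aligned segmentation of the recording. A minimal sketch with hypothetical interval data and a 150 ms minimum-pause threshold; both the intervals and the threshold are assumptions, not the study's actual parameters:

```python
# Hypothetical segmentation: (label, start_s, end_s); "" marks a silent pause.
intervals = [("word", 0.0, 1.2), ("", 1.2, 1.6), ("word", 1.6, 3.0),
             ("", 3.0, 3.2), ("word", 3.2, 4.0)]
syllable_count = 14  # assumed syllable total for the passage

# Articulation rate excludes pause time (unlike speech rate, which includes it).
speech_time = sum(end - start for label, start, end in intervals if label)
articulation_rate = syllable_count / speech_time  # syllables per second of speaking

# Count only silences at or above the minimum pause duration.
pauses = [(s, e) for label, s, e in intervals if not label and (e - s) >= 0.15]

print(articulation_rate, len(pauses))
```

Classifying each pause as between-clause or mid-clause would additionally require a syntactic annotation of the read passage.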

Citations: 0
Incorporating Frequency Effects in the Lexical Access of Mandarin Tone 3 Sandhi.
IF 1.1 | CAS Tier 2 (Literature) | Q3 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-01 | Epub Date: 2024-07-25 | DOI: 10.1177/00238309241260062
Feier Gao, Chien-Jer Charles Lin

Mandarin tone 3 sandhi refers to the phenomenon whereby a tone 3 syllable changes to a tone 2 when followed by another tone 3. This phonological process creates a deviation between the tonal forms realized at the morphemic (/tone3-tone3/) and word ([tone2-tone3]) levels, raising the question of how disyllabic tone 3 sandhi words are represented and accessed. The current study conducted three cross-modal lexical decision priming experiments to investigate this issue. Experiment 1 manipulated the frequencies of the initial morpheme and the whole word, showing that a higher initial-character frequency relative to the whole word gives stronger activation to the underlying representation, while a lower initial-character frequency leads to stronger activation of the surface tone. Experiments 2 and 3 operationalized the relative frequency of the initial tone 3 morpheme's realization as a sandhi tone, finding that the competition between the two tonal realizations also influences how T3 sandhi words are accessed. Specifically, the more frequently the T3 morpheme surfaces as a T2 allomorph, the less activated the underlying representation becomes in the mental lexicon. Our results indicate a complex interplay between morpheme, word, and the associated tonal representations in the mental lexicon, and that these factors co-determine the lexical access of tone 3 sandhi words.
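The "relative frequency of the initial tone 3 morpheme's realization as a sandhi tone" comes down to a ratio over corpus counts. A sketch with invented numbers (the study's corpus and counts are not given in this abstract):

```python
# Hypothetical corpus counts for one tone-3 morpheme: tokens surfacing with
# the underlying T3 vs. as the sandhi T2 allomorph (invented numbers).
counts = {"T3_surface": 180, "T2_sandhi": 420}

total = sum(counts.values())
sandhi_ratio = counts["T2_sandhi"] / total  # relative frequency as sandhi tone
print(round(sandhi_ratio, 2))  # 0.7
```

On the study's account, a morpheme with a high ratio like this one would show weaker activation of its underlying T3 representation.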

Citations: 0
Investigating Linguistic Alignment in Collaborative Dialogue: A Study of Syntactic and Lexical Patterns in Middle School Students.
IF 1.1 | CAS Tier 2 (Literature) | Q3 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-01 | Epub Date: 2024-03-28 | DOI: 10.1177/00238309241234565
Xiaoyi Tian, Amanda E Griffith, Zane Price, Kristy Elizabeth Boyer, Kevin Tang

Linguistic alignment, the tendency of speakers to share common linguistic features during conversations, has emerged as a key area of research in computer-supported collaborative learning. While previous studies have shown that linguistic alignment can have a significant impact on collaborative outcomes, there is limited research exploring its role in K-12 learning contexts. This study investigates syntactic and lexical alignment in a collaborative computer science-learning corpus from 24 pairs (48 individuals) of middle school students (aged 11-13). The results show stronger effects of self-alignment than partner alignment on both the syntactic and lexical levels, with students often diverging from their partners on task-relevant words. Furthermore, student self-alignment on the syntactic level is negatively correlated with partner satisfaction ratings, while self-alignment on the lexical level is positively correlated with the partner's satisfaction.
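Lexical alignment measures generally quantify how much a speaker reuses the partner's recent words. A minimal repetition-proportion sketch; the function and example dialogue are illustrative, and the study's actual metric may differ:

```python
def lexical_alignment(prime_turn: str, target_turn: str) -> float:
    """Share of target-turn tokens that also occurred in the partner's
    preceding (prime) turn -- one simple lexical-repetition measure."""
    prime_tokens = set(prime_turn.lower().split())
    target_tokens = target_turn.lower().split()
    if not target_tokens:
        return 0.0
    return sum(tok in prime_tokens for tok in target_tokens) / len(target_tokens)

# Invented exchange between two students in a pair-programming task.
score = lexical_alignment("move the loop inside the function",
                          "the loop inside runs faster")
print(score)  # 0.6
```

Self-alignment, the stronger effect in this study, would be computed the same way but with the speaker's own previous turn as the prime.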

Citations: 0
Effects of Speaking Rate Changes on Speech Motor Variability in Adults.
IF 1.1 | CAS Tier 2 (Literature) | Q3 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-01 | Epub Date: 2024-05-23 | DOI: 10.1177/00238309241252983
Emily W Wang, Maria I Grigos

The relationship between speaking rate and speech motor variability was examined in three groups of neurotypical adults (N = 40): 15 young adults (18-30 years), 13 adults (31-40 years), and 12 middle-aged adults (41-50 years). Participants completed a connected speech task at three speaking rates (habitual, fast, and slow), during which kinematic (lower lip movement) and acoustic data were obtained. Duration and variability were measured at each speaking rate. Findings revealed a complex relationship between speaking rate and variability. Adults in the middle age range (31-40 years) demonstrated shorter acoustic and kinematic durations than the oldest age group (41-50 years) in the habitual speaking rate condition. All adults demonstrated the greatest variability in the slow speaking rate condition, with no significant differences in variability between the habitual and fast speaking rates. Interestingly, lip aperture variability was significantly lower in the youngest age group (18-30 years) than in the two older groups in the fast speaking rate condition. Differences in measures of acoustic variability were not observed across the age levels. Strong negative correlations between kinematic/acoustic duration and lip aperture/acoustic variability were revealed in the youngest age group. Therefore, while a slow speaking rate does result in greater variability compared with the habitual and fast speaking rates, the longer durations produced by the different age groups were not linked to higher spatiotemporal index (STI) values, suggesting that timing influences speech motor variability but is not the sole contributor.
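The spatiotemporal index (STI) mentioned at the end is conventionally computed by amplitude- and time-normalizing repeated movement traces and summing the across-trial standard deviations at 2% intervals (hence 50 points). A sketch on synthetic lower-lip trajectories; the data are invented, and the study's exact preprocessing may differ:

```python
import numpy as np

def spatiotemporal_index(trials, n_points=50):
    """STI: amplitude-normalize (z-score) and time-normalize each repetition,
    then sum the across-trial SDs at 2% intervals."""
    normalized = []
    for y in trials:
        y = np.asarray(y, dtype=float)
        # Time-normalize by linear interpolation to n_points samples.
        t_old = np.linspace(0, 1, len(y))
        t_new = np.linspace(0, 1, n_points)
        y = np.interp(t_new, t_old, y)
        # Amplitude-normalize (z-score).
        normalized.append((y - y.mean()) / y.std())
    return float(np.sum(np.std(np.stack(normalized), axis=0)))

# Identical repetitions give an STI of 0; trial-to-trial jitter raises it.
rng = np.random.default_rng(1)
base = np.sin(np.linspace(0, np.pi, 80))
same = [base, base, base]
jittered = [base + rng.normal(0, 0.1, 80) for _ in range(3)]
print(spatiotemporal_index(same), spatiotemporal_index(jittered))
```

Because of the time normalization, a uniformly slower production does not by itself raise the STI, which is consistent with the finding that longer durations were not linked to higher STI values.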

Citations: 0
Audiovisual Perception of Lexical Stress: Beat Gestures and Articulatory Cues.
IF 1.1 | CAS Tier 2 (Literature) | Q3 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-01 | Epub Date: 2024-06-14 | DOI: 10.1177/00238309241258162
Ronny Bujok, Antje S Meyer, Hans Rutger Bosker

Human communication is inherently multimodal. Not only auditory speech but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech like lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were not able to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.
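Auditory continua like the VOORnaam-voorNAAM stimuli are often built by interpolating an acoustic cue (such as F0) between two natural endpoints. A simplified sketch with invented contours; the study's actual stimulus construction and step count are not given in this abstract:

```python
import numpy as np

# Hypothetical endpoint F0 contours (Hz) for initial stress (VOORnaam)
# and final stress (voorNAAM), time-normalized to 10 samples.
initial_stress = np.array([230, 225, 215, 205, 195, 190, 185, 180, 178, 175], float)
final_stress   = np.array([185, 183, 182, 180, 180, 195, 215, 225, 220, 210], float)

# A 7-step continuum by linear interpolation between the two endpoints.
steps = np.linspace(0, 1, 7)
continuum = [(1 - w) * initial_stress + w * final_stress for w in steps]
print(len(continuum), continuum[0].shape)
```

Each step of such a continuum is then paired with the visual conditions (articulating face, beat-gesture timing) to measure how visual cues shift the perceptual category boundary.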

Citations: 0
Modeling Lexical Tones for Speaker Discrimination.
IF 1.1 CAS Zone 2 (Literature) Q3 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-03-01 Epub Date: 2024-07-27 DOI: 10.1177/00238309241261702
Ricky K W Chan, Bruce Xiao Wang

Fundamental frequency (F0) has been widely studied and used in the context of speaker discrimination and forensic voice comparison casework, but most previous studies focused on long-term F0 statistics. Lexical tone, the linguistically structured and dynamic aspect of F0, has received much less research attention. A main methodological issue lies in how tonal F0 should be parameterized for the best speaker discrimination performance. This paper compares the speaker discriminatory performance of three approaches to lexical tone modeling: discrete cosine transform (DCT), polynomial curve fitting, and quantitative target approximation (qTA). Results show that using parameters based on DCT and polynomials led to similarly promising performance, whereas those based on qTA generally yielded relatively poor performance. Implications of modeling surface tonal F0 and the underlying articulatory processes for speaker discrimination are discussed.
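The abstract names DCT and polynomial curve fitting as two of the compared parameterizations. A minimal sketch of both applied to a synthetic rise-fall contour (the coefficient counts, sampling, and contour shape are illustrative choices, not the paper's settings):

```python
import numpy as np

def dct_coefficients(f0, k=4):
    """First k DCT-II coefficients of an F0 contour.

    Low-order coefficients summarize contour shape: coefficient 0 is
    proportional to the mean, coefficient 1 to the overall slope, etc.
    """
    n = len(f0)
    t = (np.arange(n) + 0.5) * np.pi / n
    return np.array([np.sum(f0 * np.cos(m * t)) for m in range(k)])

def poly_coefficients(f0, degree=3):
    """Least-squares polynomial fit to the contour over normalized time [0, 1];
    returns degree+1 coefficients, highest order first."""
    t = np.linspace(0.0, 1.0, len(f0))
    return np.polyfit(t, f0, degree)

# Illustrative rise-fall contour, e.g. a sampled lexical tone around 200 Hz.
t = np.linspace(0.0, 1.0, 30)
contour = 200 + 40 * np.sin(np.pi * t)
dct_params = dct_coefficients(contour)    # 4 DCT-II coefficients
poly_params = poly_coefficients(contour)  # 4 cubic coefficients
```

Either coefficient vector could then be fed to a speaker-comparison backend (e.g., a likelihood-ratio model); the sketch only shows the feature extraction step.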

Ricky K W Chan, Bruce Xiao Wang, "Modeling Lexical Tones for Speaker Discrimination," Language and Speech, 2025-03-01, pp. 229-243. DOI: 10.1177/00238309241261702
Citations: 0
English Speakers' Perception of Non-native Vowel Contrasts in Adverse Listening Conditions: A Discrimination Study on the German Front Rounded Vowels /y/ and /ø/.
IF 1.1 CAS Zone 2 (Literature) Q3 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2025-03-01 Epub Date: 2024-06-10 DOI: 10.1177/00238309241254350
Stephanie Kaucke, Marcel Schlechtweg

Previous research has shown that it is difficult for English speakers to distinguish the front rounded vowels /y/ and /ø/ from the back rounded vowels /u/ and /o/. In this study, we examine the effect of noise on this perceptual difficulty. In an Oddity Discrimination Task, English speakers without any knowledge of German were asked to discriminate between German-sounding pseudowords varying in the vowel both in quiet and in white noise at two signal-to-noise ratios (8 and 0 dB). In test trials, vowels of the same height were contrasted with each other, whereas a contrast with /a/ served as a control trial. Results revealed that a contrast with /a/ remained stable in every listening condition for both high and mid vowels. When contrasting vowels of the same height, however, there was a perceptual shift along the F2 dimension as the noise level increased. Although the /ø/-/o/ and particularly /y/-/u/ contrasts were the most difficult in quiet, accuracy on /i/-/y/ and /e/-/ø/ trials decreased immensely when the speech signal was masked. The German control group showed the same pattern, albeit less severe than the non-native group, suggesting that even in low-level tasks with pseudowords, there is a native advantage in speech perception in noise.
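The two noise conditions (8 and 0 dB SNR) correspond to a standard way of scaling masking noise relative to the speech signal. A minimal sketch of mixing white Gaussian noise at a target SNR, using the definition SNR_dB = 10·log10(P_signal / P_noise); the "speech" here is a stand-in tone, not the study's stimuli:

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    """Scale white Gaussian noise so the mix has the requested SNR in dB,
    then add it to the signal. Returns (mixed, scaled_noise)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(len(signal))
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    # SNR_dB = 10 * log10(P_signal / P_noise)  =>  required noise power:
    target_noise_power = p_signal / (10 ** (snr_db / 10))
    noise = noise * np.sqrt(target_noise_power / p_noise)
    return signal + noise, noise

# Illustrative 1-second "speech" signal: a 100 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
speech = 0.1 * np.sin(2 * np.pi * 100 * t)
mixed_8db, noise_8db = add_noise_at_snr(speech, 8.0)  # easier condition
mixed_0db, noise_0db = add_noise_at_snr(speech, 0.0)  # equal signal/noise power
```

At 0 dB the noise carries as much power as the speech, which is why segmental cues that survive at 8 dB can be masked there.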

Stephanie Kaucke, Marcel Schlechtweg, "English Speakers' Perception of Non-native Vowel Contrasts in Adverse Listening Conditions: A Discrimination Study on the German Front Rounded Vowels /y/ and /ø/," Language and Speech, 2025-03-01, pp. 162-180. DOI: 10.1177/00238309241254350
Citations: 0