Pub Date: 2025-04-29 | DOI: 10.1177/00238309251330878
Cynthia S Q Siew, Jonas Fine W Z Tan
The goal of the present study was to investigate whether cognitive traces of the network structure of the phonological language network, where phonological word-form neighbors are connected to each other, could be uncovered in word substitution errors. The phonological network has a set of macro-level (i.e., features characterizing global structure of the lexicon) and meso-level (i.e., features characterizing intermediate structure or subgroups within the lexicon) structural features that should be observable in speech error data if such features play a role in production and retrieval processes. A total of 1,067 single-word substitution errors, which included 965 production errors (i.e., slips of the tongue) and 102 perception errors (i.e., slips of the ear), were analyzed in the present study. Results indicated evidence of both macro- and meso-level lexicon structures in word substitution errors, providing converging evidence that structural features of the phonological network have implications for language-related processes.
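In this literature, phonological neighbors are standardly defined as words whose phoneme sequences differ by a single substitution, addition, or deletion. A minimal, dependency-free sketch of how such a network and some structural features (node degree; the largest connected component, a macro-level feature) might be computed over a toy lexicon — the words and transcriptions below are illustrative, not the study's materials:

```python
from itertools import combinations

def one_phoneme_apart(a, b):
    """True if phoneme sequences a and b differ by a single
    substitution, addition, or deletion (edit distance 1)."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

# Toy lexicon: words as tuples of phoneme symbols.
lexicon = {
    "cat": ("k", "ae", "t"),
    "bat": ("b", "ae", "t"),
    "cab": ("k", "ae", "b"),
    "at":  ("ae", "t"),
    "dog": ("d", "ao", "g"),
}

# Build the network as an adjacency dict.
edges = {w: set() for w in lexicon}
for w1, w2 in combinations(lexicon, 2):
    if one_phoneme_apart(lexicon[w1], lexicon[w2]):
        edges[w1].add(w2)
        edges[w2].add(w1)

degree = {w: len(nbrs) for w, nbrs in edges.items()}
print(degree)  # "dog" is an isolate; "cat" is the hub

# Macro-level feature: the largest connected component.
def component(start):
    seen, stack = set(), [start]
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(edges[w] - seen)
    return seen

largest = max((component(w) for w in lexicon), key=len)
print(sorted(largest))
```

Real phonological networks are built the same way over full transcribed lexicons; meso-level features (communities) would then be extracted with a community-detection algorithm on `edges`.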
Title: Production and Perception Errors From Speech Error Corpora Reflect Macro- and Meso-Level Structure of the Phonological Language Network (Language and Speech)
Pub Date: 2025-04-29 | DOI: 10.1177/00238309251327209
Adele Vaks, Virve-Anneli Vihman
In this study, we investigate whether two structurally distinct languages, Norwegian and Russian, influence the use of Estonian morphosyntax by bilingual 5- to 7-year-olds. Using a sentence-repetition task, we tested the acquisition and use of Estonian morphosyntax by children acquiring Estonian alongside Norwegian and Russian, which differ in their use of morphological marking. We tested 69 children aged 4;9 to 7;10 (24 Estonian-Norwegian and 24 Russian-Estonian bilinguals, 21 Estonian monolinguals), using three sentence structures that vary across the languages (copula clauses, experiencer clauses, and complex conditional sentences). Quantitative results showed no significant differences between groups. Both groups were at ceiling for copula clauses, but they performed in opposite directions with the other two structures, suggesting possible effects of the other language. An error analysis revealed small differences in children's use of experiencer and conditional constructions. Contrary to expectations, Norwegian-speaking bilinguals did not produce more errors of omission than of commission in either sentence type. Rather, they used a wider array of cases in the experiencer clauses than Russian-speaking children. In the conditional items, both groups exhibited a tendency to use indicative past in place of conditional present, transferring the use of past forms for conditional meanings from Norwegian or Russian. Other differences are discussed in light of language structure, Estonian exposure, and study design.
Title: Bilingual Acquisition of Morphology: Norwegian and Russian Influence on Children's Sentence Repetition in Estonian (Language and Speech)
This study investigates cross-cultural vocal emotion recognition in a corpus with an affectively and linguistically balanced design. It has two main goals, one theoretical and the other methodological. First, it aims to explore the recognition of emotions in two typologically different languages, Dutch and Korean, within and across cultures. Second, it aims to contribute to the methodological development of the study of cross-cultural vocal emotion recognition by presenting a new corpus for Dutch and Korean emotional speech (the Demo/Koremo corpus), containing portrayals of eight emotions differing in arousal, valence, and basicness (joy, pride, tenderness, relief, anger, fear, sadness, irritation) produced by Dutch and Korean actors, and communicated in a single pseudo-phrase that was viable in both languages. Dutch and Korean participants listened to recordings of all emotions produced by the Dutch and Korean actors and indicated for each one which emotion they thought it expressed. Both groups of listeners recognized emotions significantly above chance in both languages, but more accurately in their native language, in line with the Dialect Theory of emotion. Low-arousal emotions, negative emotions, and basic emotions were recognized more accurately than their counterparts. While some of these results replicate earlier findings, others (the effect of arousal and the within-cultural effects of valence and basicness) had not been previously investigated. This study provides new insights into cross-cultural vocal emotion recognition and contributes to the methodological toolkit of intercultural emotion recognition research.
Pub Date: 2025-04-24 | DOI: 10.1177/00238309251318730
Yachan Liang, Martijn Goudbeek, Agnieszka Konopka, Jiyoun Choi, Mirjam Broersma
Title: Investigating Cross-Cultural Vocal Emotion Recognition With an Affectively and Linguistically Balanced Design (Language and Speech)
Pub Date: 2025-04-15 | DOI: 10.1177/00238309251325270
Ezequiel M Durand-López, Vicente Iranzo
Studies exploring gender agreement processing in late bilinguals whose first language (L1) lacks the gender feature suggest that advanced second language (L2) learners can detect gender agreement violations in the L2. Importantly, these studies have mainly included gender-canonical nouns (e.g., la silla, el libro). However, the specific mechanisms L2 learners use while processing L2 gender agreement are unclear: Do learners rely on morphophonological cues (i.e., the gender suffix) or on their gender assignment? In this study, advanced English-speaking L2 learners of Spanish and Spanish monolinguals completed a moving-window task containing sentences with canonical and deceptive nouns in noun-adjective gender (dis)agreement relations (e.g., casa antigua/*o, mano rosada/*o). Results revealed that Spanish monolinguals and advanced L2 learners were sensitive to violations with canonical nouns. However, native speakers were significantly slower at computing gender disagreement than agreement with deceptive nouns, while advanced L2 learners exhibited the opposite processing pattern (i.e., they took longer to process gender agreement than disagreement with deceptive nouns). The findings suggest that native speakers rely on their gender assignment, while L2 learners focus more on suffix-matching patterns (i.e., if -o in the noun, -o in the adjective).
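To make the canonical/deceptive distinction concrete, here is a minimal sketch with a hypothetical mini-lexicon (not the study's materials): a noun is deceptive when the gender suggested by its final vowel conflicts with its assigned grammatical gender.

```python
# Hypothetical mini-lexicon mapping nouns to assigned gender.
GENDER = {"silla": "f", "libro": "m", "casa": "f", "mano": "f"}

def suffix_cue(noun):
    """Gender suggested by the final vowel alone (-o: masculine,
    -a: feminine), i.e., the morphophonological cue."""
    return {"o": "m", "a": "f"}.get(noun[-1])

def is_deceptive(noun):
    """True when the suffix cue conflicts with the assigned gender
    (e.g., 'mano' is feminine but ends in -o)."""
    cue = suffix_cue(noun)
    return cue is not None and cue != GENDER[noun]

print([n for n in GENDER if is_deceptive(n)])  # ['mano']
```

Under pure suffix matching, "mano rosado" would wrongly look well-formed; under gender assignment, only "mano rosada" agrees — which is exactly the contrast the deceptive-noun condition exploits.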
Title: L1 Versus L2 Gender Agreement Processing: Reliance on Gender Assignment or Morphophonological Cue Matching? (Language and Speech)
Pub Date: 2025-03-29 | DOI: 10.1177/00238309251314862
Andrei Munteanu, Angelika Kiss
Declarative questions (DQs) are declarative sentences used as questions. As declaratives, they differ from information-seeking polar questions (ISQs) in their syntax, and as biased questions, they differ from polar questions because they can convey various epistemic stances: a request for confirmation, surprise, or incredulity. Studies of their intonation typically compare just one subtype to ISQs. In this paper, we present a production study in which participants pronounced ISQs, confirmative and surprise DQs, and assertions in Russian. We analyzed the pitch and duration of the target utterances, as these prosodic cues have proved important in the formal markedness of various biased question types across languages. A principal component analysis (PCA) of the pitch contours shows that DQs bear the same rise-fall contour as ISQs, but the peak falls on the stressed syllable of the last word of the sentence instead of on the verb. The intonation of surprise DQs differs from that of confirmative ones in that surprise DQs also exhibit a slight peak on the subject. Pitch alone is thus enough to distinguish the four utterance types tested. The PCA was also used to identify higher-level trends in the data (principal components), two of which appear to correspond to core semantic properties, namely belief change and commitment. In addition to intonation, speaker commitment also correlates with utterance duration.
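The idea of running a PCA over pitch contours can be illustrated with a dependency-free sketch: treat each utterance's time-normalized F0 contour as a vector, and extract the first principal component by power iteration. The contours below are stylized, hypothetical values and the implementation is illustrative, not the study's data or pipeline:

```python
def first_pc(data, iters=200):
    """First principal component of the rows of `data`, via power
    iteration on X^T X (pure Python; illustrative only)."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - mean[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # w = (X^T X) v, computed as X^T (X v).
        Xv = [sum(X[i][j] * v[j] for j in range(d)) for i in range(n)]
        w = [sum(X[i][j] * Xv[i] for i in range(n)) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Stylized F0 contours (Hz at 5 normalized time points):
# two rising (question-like) and two falling (assertion-like) utterances.
contours = [
    [180, 190, 200, 220, 250],
    [175, 185, 205, 225, 245],
    [240, 230, 210, 190, 170],
    [250, 235, 205, 185, 175],
]
pc1 = first_pc(contours)
# The leading component captures the rise/fall contrast,
# so its two endpoints have opposite signs.
print(pc1[0] * pc1[-1] < 0)
```

Projecting each contour onto such components yields per-utterance scores — the kind of higher-level trends that can then be related to semantic properties like belief change and commitment.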
Title: Form-Meaning Relations in Russian Confirmative and Surprise Declarative Questions (Language and Speech)
Pub Date: 2025-03-25 | DOI: 10.1177/00238309251322954
Boaz M Ben-David, Michal Icht, Gil Zukerman, Nir Fink, Leah Fostick
Speech perception, a daily task crucial for social interaction, is often performed after sleep deprivation (SD). However, there is only scant research on the effects of SD on real-life speech tasks. Speech-processing models (FUEL, ELU) suggest that challenging listening conditions require a greater allocation of cognitive resources, while ideal listening conditions (speech in quiet) require minimal resources. Therefore, SD, which reduces cognitive reserve, may adversely affect speech perception under challenging, but not ideal, conditions. The goal of this study was to test this by manipulating the extent of available resources (with/without SD) and task difficulty across three conditions: sentences presented (a) in quiet, (b) in background noise, and (c) with emotional prosody, where participants identified the emotions conveyed by the speaker. The performance of young adults (n = 41) was assessed twice, after nocturnal sleep and after 24-hr SD, in three tasks: (a) sentence repetition in quiet, (b) sentence repetition in noise, and (c) emotion identification of spoken sentences. Results partially supported our hypotheses. The perception of spoken sentences was impaired by SD, but noise level did not interact with the SD effect. The results suggest that 24-hr SD reduces cognitive resources, which in turn impairs listeners' ability (or motivation) to perform daily functions of speech perception. Theoretically, the findings directly relate SD to speech perception, supporting current theoretical speech models. Clinically, we suggest that SD should be considered in daily clinical settings, e.g., hearing tests. Finally, professions that require shift work, such as health care, should consider the negative effects of SD on spoken communication.
Title: Sleep Soundly! Sleep Deprivation Impairs Perception of Spoken Sentences in Challenging Listening Conditions (Language and Speech)
Pub Date: 2025-03-01 | Epub Date: 2024-03-05 | DOI: 10.1177/00238309241230899
Lucia Mareková, Štefan Beňuš
Research on fluency in native (L1) and non-native (L2) speech production and perception helps us understand how individual L1 speaking style might affect perceived L2 fluency and how this relationship might be reflected in L1 versus L2 oral assessment. While the relationship between production and perception of fluency in spontaneous speech has been studied, the information provided by reading has been overlooked. We argue that reading provides a direct and controlled way to assess language proficiency that might complement information gained from spontaneous speaking. This work analyzes the relationship between speech fluency production and perception in passages of L1 (Slovak) and L2 (English) read by 57 undergraduate Slovak students of English and rated for fluency by 15 English teachers who are native Slovak speakers. We compare acoustic production measures between L1 and L2 and analyze how their effect on perceived fluency differs for the two languages. Our main finding is that the articulation rate, the overall number of pauses, and the number of between-clause and mid-clause pauses predict ratings differently in L1 Slovak versus L2 English. The speech rate and durations of pauses predict ratings similarly in both languages. We discuss what our results contribute to understanding fluency in spontaneous and read speech, the relationship between L1 and L2, the relationship between production and perception, and the teaching of L2 English.
Title: Speech Fluency Production and Perception in L1 (Slovak) and L2 (English) Read Speech (Language and Speech, pp. 36-62)
Pub Date: 2025-03-01 | Epub Date: 2024-07-25 | DOI: 10.1177/00238309241260062
Feier Gao, Chien-Jer Charles Lin
Mandarin tone 3 sandhi refers to the phenomenon whereby a tone 3 syllable changes to tone 2 when followed by another tone 3. This phonological process creates a deviation between the tonal forms realized at the morphemic (/tone3-tone3/) and word ([tone2-tone3]) levels, raising the question of how disyllabic tone 3 sandhi words are represented and accessed. The current study conducted three cross-modal lexical decision priming experiments to investigate this issue. Experiment 1 manipulated the frequencies of the initial morpheme and the whole word, showing that a higher initial-character frequency relative to the whole-word frequency gives stronger activation to the underlying representation, while a lower initial-character frequency leads to stronger activation of the surface tone. Experiments 2 and 3 operationalized the relative frequency with which the initial tone 3 morpheme is realized as a sandhi tone, finding that the competition between the two tonal realizations also influences how T3 sandhi words are accessed. Specifically, the more frequently the T3 morpheme surfaces as a T2 allomorph, the less activated the underlying representation becomes in the mental lexicon. Our results indicate a complex interplay between morpheme, word, and the associated tonal representations in the mental lexicon, and that these factors co-determine the lexical access of tone 3 sandhi.
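The rule described above can be sketched as a simple right-to-left rewrite over underlying tone sequences. This covers the disyllabic case the study focuses on; in longer phrases the actual output also depends on prosodic structure, which this sketch ignores:

```python
def apply_t3_sandhi(tones):
    """Map an underlying tone sequence to its surface form:
    a tone 3 becomes tone 2 when the next syllable is also tone 3.
    Applied right to left; prosodic conditioning in longer phrases
    is deliberately ignored in this sketch."""
    surface = list(tones)
    for i in range(len(surface) - 2, -1, -1):
        if surface[i] == 3 and surface[i + 1] == 3:
            surface[i] = 2
    return surface

print(apply_t3_sandhi([3, 3]))  # /T3-T3/ -> [2, 3], e.g., the [tone2-tone3] word form
print(apply_t3_sandhi([1, 3]))  # no trigger -> unchanged
```

The mismatch the experiments probe is exactly this: the lexicon stores /3, 3/ for such a word, but listeners and speakers encounter [2, 3].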
Title: Incorporating Frequency Effects in the Lexical Access of Mandarin Tone 3 Sandhi (Language and Speech, pp. 204-228)
Pub Date: 2025-03-01 | Epub Date: 2024-03-28 | DOI: 10.1177/00238309241234565
Xiaoyi Tian, Amanda E Griffith, Zane Price, Kristy Elizabeth Boyer, Kevin Tang
Linguistic alignment, the tendency of speakers to share common linguistic features during conversations, has emerged as a key area of research in computer-supported collaborative learning. While previous studies have shown that linguistic alignment can have a significant impact on collaborative outcomes, there is limited research exploring its role in K-12 learning contexts. This study investigates syntactic and lexical alignment in a collaborative computer-science-learning corpus from 24 pairs (48 individuals) of middle school students (aged 11-13). The results show stronger effects of self-alignment than of partner alignment at both the syntactic and lexical levels, with students often diverging from their partners on task-relevant words. Furthermore, students' self-alignment at the syntactic level is negatively correlated with partner satisfaction ratings, while self-alignment at the lexical level is positively correlated with their partner's satisfaction.
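Lexical alignment is often operationalized as overlap with the partner's preceding turn. A minimal sketch of one such overlap measure — the metric and the example turns are illustrative, not the study's corpus or its actual measure:

```python
def lexical_alignment(prev_turn, turn):
    """Fraction of word types in `turn` that repeat word types from
    the partner's preceding turn (a simple type-overlap measure;
    illustrative, not necessarily the study's metric)."""
    prev_types = set(prev_turn.lower().split())
    types = set(turn.lower().split())
    if not types:
        return 0.0
    return len(types & prev_types) / len(types)

# Hypothetical pair-programming exchange:
score = lexical_alignment("move the sprite to the left",
                          "the sprite goes left")
print(score)  # 3 of 4 word types repeated -> 0.75
```

Self-alignment is computed the same way against the speaker's own previous turn, which is how the self- versus partner-alignment contrast in the abstract can be quantified.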
{"title":"Investigating Linguistic Alignment in Collaborative Dialogue: A Study of Syntactic and Lexical Patterns in Middle School Students.","authors":"Xiaoyi Tian, Amanda E Griffith, Zane Price, Kristy Elizabeth Boyer, Kevin Tang","doi":"10.1177/00238309241234565","DOIUrl":"10.1177/00238309241234565","url":null,"abstract":"<p><p>Linguistic alignment, the tendency of speakers to share common linguistic features during conversations, has emerged as a key area of research in computer-supported collaborative learning. While previous studies have shown that linguistic alignment can have a significant impact on collaborative outcomes, there is limited research exploring its role in K-12 learning contexts. This study investigates syntactic and lexical linguistic alignments in a collaborative computer science-learning corpus from 24 pairs (48 individuals) of middle school students (aged 11-13). The results show stronger effects of self-alignment than partner alignment on both syntactic and lexical levels, with students often diverging from their partners on task-relevant words. Furthermore, student self-alignment on the syntactic level is negatively correlated with partner satisfaction ratings, while self-alignment on the lexical level is positively correlated with their partner's satisfaction.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"63-86"},"PeriodicalIF":1.1,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11831868/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140307734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
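A minimal way to operationalize repetition-based lexical alignment, as studied in the record above, is the overlap between a speaker's word types and those of a prime utterance (a hypothetical formulation for illustration; the study's actual measures may differ):

```python
def lexical_alignment(prime_words, target_words):
    """Fraction of word types in the target utterance that also occurred
    in the prime utterance: use the partner's prior turn as the prime for
    partner alignment, or the speaker's own prior turn for self-alignment."""
    prime_types = set(prime_words)
    target_types = set(target_words)
    if not target_types:
        return 0.0
    # Shared word types, normalized by the size of the target's vocabulary.
    return len(prime_types & target_types) / len(target_types)
```

Restricting both word lists to task-relevant vocabulary would give the task-specific variant that the divergence finding concerns.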
Pub Date: 2025-03-01 | Epub Date: 2024-05-23 | DOI: 10.1177/00238309241252983
Emily W Wang, Maria I Grigos
The relationship between speaking rate and speech motor variability was examined in three groups of neurotypical adults (n = 40): 15 young adults (18-30 years), 13 adults (31-40 years), and 12 middle-aged adults (41-50 years). Participants completed a connected speech task at three speaking rates (habitual, fast, and slow) where kinematic (lower lip movement) and acoustic data were obtained. Duration and variability were measured at each speaking rate. Findings revealed a complex relationship between speaking rate and variability. Adults from the middle age range (31-40 years) demonstrated shorter acoustic and kinematic durations compared with the oldest age group (41-50 years) during the habitual speaking rate condition. All adults demonstrated the greatest variability in the slow speaking rate condition, with no significant differences in variability between habitual and fast speaking rates. Interestingly, lip aperture variability was significantly lower in the youngest age group (18-30 years) compared with the two older groups during the fast speaking rate condition. Differences in measures of acoustic variability were not observed across the age levels. Strong negative correlations between kinematic/acoustic duration and lip aperture/acoustic variability in the youngest age group were revealed. Therefore, while a slow speaking rate does result in greater variability compared with habitual and fast speaking rates, longer durations of productions by the different age groups were not linked to higher spatiotemporal index (STI) values, suggesting that timing influences speech motor variability, but is not the sole contributor.
{"title":"Effects of Speaking Rate Changes on Speech Motor Variability in Adults.","authors":"Emily W Wang, Maria I Grigos","doi":"10.1177/00238309241252983","DOIUrl":"10.1177/00238309241252983","url":null,"abstract":"<p><p>The relationship between speaking rate and speech motor variability was examined in three groups of neurotypical adults (<i>n</i> = 40): 15 young adults (18-30 years), 13 adults (31-40 years), and 12 middle-aged adults (41-50 years). Participants completed a connected speech task at three speaking rates (habitual, fast, and slow) where kinematic (lower lip movement) and acoustic data were obtained. Duration and variability were measured at each speaking rate. Findings revealed a complex relationship between speaking rate and variability. Adults from the middle age range (31-40 years) demonstrated shorter acoustic and kinematic durations compared with the oldest age group (41-50 years) during the habitual speaking rate condition. All adults demonstrated the greatest variability in the slow speaking rate condition, with no significant differences in variability between habitual and fast speaking rates. Interestingly, lip aperture variability was significantly lower in the youngest age group (18-30 years) compared with the two older groups during the fast speaking rate condition. Differences in measures of acoustic variability were not observed across the age levels. Strong negative correlations between kinematic/acoustic duration and lip aperture/acoustic variability in the youngest age group were revealed. Therefore, while a slow speaking rate does result in greater variability compared with habitual and fast speaking rates, longer durations of productions by the different age groups were not linked to higher spatiotemporal index (STI) values, suggesting that timing influences speech motor variability, but is not the sole contributor.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"141-161"},"PeriodicalIF":1.1,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141086564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
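The spatiotemporal index (STI) referenced in the record above is conventionally computed by time- and amplitude-normalizing repeated movement trajectories and summing the pointwise standard deviations across repetitions. A rough sketch, assuming linear resampling and z-score normalization (simplifications of the published procedure):

```python
import numpy as np

def spatiotemporal_index(trajectories, n_points=50):
    """STI over repeated productions: normalize each trajectory in time
    (resample to n_points) and amplitude (z-score), then sum the pointwise
    standard deviations across repetitions."""
    normed = []
    for traj in trajectories:
        traj = np.asarray(traj, dtype=float)
        # Time-normalize by resampling onto a common 0-1 axis.
        t_old = np.linspace(0.0, 1.0, len(traj))
        t_new = np.linspace(0.0, 1.0, n_points)
        resampled = np.interp(t_new, t_old, traj)
        # Amplitude-normalize with a z-score.
        resampled = (resampled - resampled.mean()) / resampled.std()
        normed.append(resampled)
    # Higher values indicate less consistent movement patterning.
    return float(np.sum(np.std(np.vstack(normed), axis=0)))
```

Because both axes are normalized away, a high STI reflects instability of the movement pattern itself rather than differences in overall duration or amplitude, which is why longer productions need not yield higher STI values.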