Pub Date: 2024-09-01 | Epub Date: 2023-07-04 | DOI: 10.1177/00238309231176760
Margarethe McDonald, Margarita Kaushanskaya
Bilingual Children Shift and Relax Second-Language Phoneme Categorization in Response to Accented L2 and Native L1 Speech Exposure. Language and Speech, pp. 617-638. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11367803/pdf/
Listeners adjust their perception to the speech they hear by shifting and relaxing categorical boundaries. This allows listeners to process speech variation, but may be detrimental to processing efficiency. Bilingual children are exposed to many types of speech in their linguistic environment, including native and non-native speech. This study examined how first-language (L1) Spanish/second-language (L2) English bilingual children shifted and relaxed phoneme categorization along the cue of voice onset time (VOT) during English speech processing after three types of language exposure: native English exposure, native Spanish exposure, and Spanish-accented English exposure. After exposure to Spanish-accented English speech, bilingual children shifted categorical boundaries in the direction of native English speech boundaries. After exposure to native Spanish speech, children shifted to a smaller extent in the same direction and relaxed boundaries, leading to weaker differentiation between categories. These results suggest that prior exposure can affect processing of a second language in bilingual children, but that different mechanisms are used when adapting to different types of speech variation.
{"title":"Bilingual Children Shift and Relax Second-Language Phoneme Categorization in Response to Accented L2 and Native L1 Speech Exposure.","authors":"Margarethe McDonald, Margarita Kaushanskaya","doi":"10.1177/00238309231176760","DOIUrl":"10.1177/00238309231176760","url":null,"abstract":"<p><p>Listeners adjust their perception to match that of presented speech through shifting and relaxation of categorical boundaries. This allows for processing of speech variation, but may be detrimental to processing efficiency. Bilingual children are exposed to many types of speech in their linguistic environment, including native and non-native speech. This study examined how first language (L1) Spanish/second language (L2) English bilingual children shifted and relaxed phoneme categorization along the cue of voice onset time (VOT) during English speech processing after three types of language exposure: native English exposure, native Spanish exposure, and Spanish-accented English exposure. After exposure to Spanish-accented English speech, bilingual children shifted categorical boundaries in the direction of native English speech boundaries. After exposure to native Spanish speech, children shifted to a smaller extent in the same direction and relaxed boundaries leading to weaker differentiation between categories. These results suggest that prior exposure can affect processing of a second language in bilingual children, but different mechanisms are used when adapting to different types of speech variation.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"617-638"},"PeriodicalIF":1.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11367803/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9738176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-07-26 | DOI: 10.1177/00238309231182363
Shuxiao Gong, Jie Zhang, Robert Fiorentino
Phonological Well-Formedness Constraints in Mandarin Phonotactics: Evidence From Lexical Decision. Language and Speech, pp. 676-691.
This article investigates the role of phonological well-formedness constraints in Mandarin speakers' phonotactic grammar and how these constraints affect online speech processing. Mandarin non-words can be categorized into systematic gaps and accidental gaps, depending on whether they violate principled phonotactic constraints based on the Obligatory Contour Principle (OCP). Non-word acceptability judgment experiments have shown that systematic gaps receive lower wordlikeness ratings than accidental gaps. Using a lexical decision task, this study found that systematic gaps were rejected significantly faster than accidental gaps, even after lexical statistics were taken into account. These findings provide converging evidence for the essential status of OCP-based phonotactic constraints in Mandarin speakers' phonological knowledge.
{"title":"Phonological Well-Formedness Constraints in Mandarin Phonotactics: Evidence From Lexical Decision.","authors":"Shuxiao Gong, Jie Zhang, Robert Fiorentino","doi":"10.1177/00238309231182363","DOIUrl":"10.1177/00238309231182363","url":null,"abstract":"<p><p>This article investigates the role of phonological well-formedness constraints in Mandarin speakers' phonotactic grammar and how they affect online speech processing. Mandarin non-words can be categorized into systematic gaps and accidental gaps, depending on whether they violate principled phonotactic constraints based on the Obligatory Contour Principle (OCP). Non-word acceptability judgment experiments have shown that systematic gaps received lower wordlikeness ratings than accidental gaps. Using a lexical decision task, this study found that systematic gaps were rejected significantly faster than accidental gaps, even after lexical statistics were taken into account. These findings thus provide converging evidence for the essential status of the OCP-based phonotactic constraints in Mandarin speakers' phonological knowledge.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"676-691"},"PeriodicalIF":1.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9873747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-08-09 | DOI: 10.1177/00238309231188078
Connie Ting, Yoonjung Kang
The Effect of Habitual Speech Rate on Speaker-Specific Processing in English Stop Voicing Perception. Language and Speech, pp. 692-701. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11367799/pdf/
This study investigates listeners' ability to track individual speakers' habitual speech rates in a dialogue and to adjust their perception of durational contrasts accordingly. Previous studies that found such adjustments are inconclusive, because the adjustments can be attributed to exemplars of target structures in the dialogue rather than to perceptual calibration to habitual speech rates. In this study, English listeners were presented with a dialogue between a fast and a slow speaker that contained no stressed syllable-initial voiceless stops. Listeners then categorized /pi/-/bi/ syllables differing along a voice onset time continuum. Results did not show conclusive evidence that listeners' responses differed systematically depending on the speakers' habitual speech rates.
{"title":"The Effect of Habitual Speech Rate on Speaker-Specific Processing in English Stop Voicing Perception.","authors":"Connie Ting, Yoonjung Kang","doi":"10.1177/00238309231188078","DOIUrl":"10.1177/00238309231188078","url":null,"abstract":"<p><p>This study investigates listeners' ability to track individual speakers' habitual speech rate in a dialogue and adjust their perception of durational contrasts. Previous studies that found such adjustments are inconclusive as adjustments can be attributed to exemplars of target structures in the dialogue rather than perceptual calibration of habitual speech rates. In this study, English listeners were presented with a dialogue between a fast and slow speaker, containing no stressed syllable-initial voiceless stops. Listeners then categorized /pi/-/bi/ syllables differing along a voice onset time continuum. Results did not show conclusive evidence that listeners' response differed systematically depending on speakers' habitual speech rate.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"692-701"},"PeriodicalIF":1.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11367799/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10316847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-09-29 | DOI: 10.1177/00238309231199994
Sergei Monakhov
How Complex Verbs Acquire Their Idiosyncratic Meanings. Language and Speech, pp. 793-820. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11385436/pdf/
Complex verbs with the same preverb/prefix/particle that is both linguistically productive and analyzable can be compositional as well as non-compositional in meaning. For example, the English particle on has compositional spatial uses (put a hat on) but also a non-spatial "continuative" use, where its semantic contribution is consistent with multiple verbs (we played / worked / talked on despite the interruption). Comparable examples can be given with German preverbs or Russian prefixes, which provide the main data analyzed in the present paper. The preverbs/prefixes/particles that encode non-compositional, construction-specific senses have been extensively studied; however, it is still far from clear how their semantic idiosyncrasies arise. Even when one can identify the contribution of the base, it is counterintuitive to assign the remaining sememes to the preverb/prefix/particle part. Therefore, on the one hand, there seems to be an element without meaning, and on the other, a word sense that apparently comes from nowhere. In this article, I suggest analyzing compositional and non-compositional complex verbs as instantiations of two different types of constructions: one with an open slot for the preverb/prefix/particle and a fixed base verb, and another with a fixed preverb/prefix/particle and an open slot for the base verb. Experimental and corpus evidence supporting this analysis is provided for Russian. I argue that each construction implies its own meaning-processing model and that the actual choice between the two can be predicted by taking into account the discrepancy in probabilities of transition from preverb/prefix/particle to base and from base to preverb/prefix/particle.
{"title":"How Complex Verbs Acquire Their Idiosyncratic Meanings.","authors":"Sergei Monakhov","doi":"10.1177/00238309231199994","DOIUrl":"10.1177/00238309231199994","url":null,"abstract":"<p><p>Complex verbs with the same preverb/prefix/particle that is both linguistically productive and analyzable can be compositional as well as non-compositional in meaning. For example, the English <i>on</i> has compositional spatial uses (<i>put a hat on</i>) but also a non-spatial \"continuative\" use, where its semantic contribution is consistent with multiple verbs (<i>we played / worked / talked on despite the interruption</i>). Comparable examples can be given with German preverbs or Russian prefixes, which are the main data analyzed in the present paper. The preverbs/prefixes/particles that encode non-compositional, construction-specific senses have been extensively studied; however, it is still far from clear how their semantic idiosyncrasies arise. Even when one can identify the contribution of the base, it is counterintuitive to assign the remaining sememes to the preverb/prefix/particle part. Therefore, on one hand, there seems to be an element without meaning, and on the other, there is a word sense that apparently comes from nowhere. In this article, I suggest analyzing compositional and non-compositional complex verbs as instantiations of two different types of constructions: one with an open slot for the preverb/prefix/particle and a fixed base verb and another with a fixed preverb/prefix/particle and an open slot for the base verb. Both experimental and corpus evidence supporting this decision is provided for Russian data. I argue that each construction implies its own meaning-processing model and that the actual choice between the two can be predicted by taking into account the discrepancy in probabilities of transition from preverb/prefix/particle to base and from base to preverb/prefix/particle.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"793-820"},"PeriodicalIF":1.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11385436/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41177448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-06-15 | DOI: 10.1177/00238309231176768
Ying Tian, Siyun Liu, Jianying Wang
A Corpus Study on the Difference of Turn-Taking in Online Audio, Online Video, and Face-to-Face Conversation. Language and Speech, pp. 593-616.
Daily conversation is usually face-to-face and characterized by a rapid, fluent exchange of turns between interlocutors. With the need to communicate across long distances and advances in communication media, online audio and online video communication have become convenient alternatives for an increasing number of people. However, the fluency of turn-taking may be affected when people communicate through these different modes. In this study, we conducted a corpus analysis of face-to-face, online audio, and online video conversations collected from the internet. The fluency of turn-taking in face-to-face conversations differed from that of online audio and video conversations: turn transitions were faster and overlapped more often in face-to-face conversations than in online audio and video conversations. This can be explained by the limited ability of online communication modes to transmit non-verbal cues, as well as by network latency. In addition, our study could not completely exclude the effect of the formality of the conversations. The present findings have implications for the rules of turn-taking in human online conversation: the traditional no-gap-no-overlap rule may not be fully applicable to online conversations.
{"title":"A Corpus Study on the Difference of Turn-Taking in Online Audio, Online Video, and Face-to-Face Conversation.","authors":"Ying Tian, Siyun Liu, Jianying Wang","doi":"10.1177/00238309231176768","DOIUrl":"10.1177/00238309231176768","url":null,"abstract":"<p><p>Daily conversation is usually face-to-face and characterized by rapid and fluent exchange of turns between interlocutors. With the need to communicate across long distances, advances in communication media, online audio communication, and online video communication have become convenient alternatives for an increasing number of people. However, the fluency of turn-taking may be influenced when people communicate using these different modes. In this study, we conducted a corpus analysis of face-to-face, online audio, and online video conversations collected from the internet. The fluency of turn-taking in face-to-face conversations differed from that of online audio and video conversations. Namely, the timing of turn-taking was shorter and with more overlaps in face-to-face conversations compared with online audio and video conversations. This can be explained by the limited ability of online communication modes to transmit non-verbal cues and network latency. In addition, our study could not completely exclude the effect of formality of conversation. The present findings have implications for the rules of turn-taking in human online conversations, in that the traditional rule of no-gap-no-overlap may not be fully applicable to online conversations.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"593-616"},"PeriodicalIF":1.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9686811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | DOI: 10.1177/00238309231202503
Xiao Cai, Mingkun Ouyang, Yulong Yin, Qingfang Zhang
Sensorimotor Adaptation to Formant-Shifted Auditory Feedback Is Predicted by Language-Specific Factors in L1 and L2 Speech Production. Language and Speech, pp. 846-869.
Auditory feedback plays an important role in the long-term updating and maintenance of speech motor control; thus, the current study explored the unresolved question of how sensorimotor adaptation is predicted by language-specific and domain-general factors in first-language (L1) and second-language (L2) production. Eighteen English-L1 speakers and 22 English-L2 speakers performed the same sensorimotor adaptation experiments, along with tasks measuring language-specific and domain-general abilities. The experiment manipulated language group (English-L1 vs. English-L2) and experimental condition (baseline, early adaptation, late adaptation, and end). Linear mixed-effects model analyses indicated that auditory acuity was significantly associated with sensorimotor adaptation in both L1 and L2 speakers. Analysis of vocal responses showed that L1 speakers exhibited significant sensorimotor adaptation in the early adaptation, late adaptation, and end conditions, whereas L2 speakers exhibited significant sensorimotor adaptation only in the late adaptation condition. Furthermore, the domain-general factors of working memory and executive control were not associated with adaptation or aftereffects in either L1 or L2 production, except for a role of working memory in aftereffects in L2 production. Overall, the study empirically supports the hypothesis that sensorimotor adaptation is predicted by language-specific factors such as auditory acuity and language experience, whereas general cognitive abilities do not play a major role in this process.
{"title":"Sensorimotor Adaptation to Formant-Shifted Auditory Feedback Is Predicted by Language-Specific Factors in L1 and L2 Speech Production.","authors":"Xiao Cai, Mingkun Ouyang, Yulong Yin, Qingfang Zhang","doi":"10.1177/00238309231202503","DOIUrl":"10.1177/00238309231202503","url":null,"abstract":"<p><p>Auditory feedback plays an important role in the long-term updating and maintenance of speech motor control; thus, the current study explored the unresolved question of how sensorimotor adaptation is predicted by language-specific and domain-general factors in first-language (L1) and second-language (L2) production. Eighteen English-L1 speakers and 22 English-L2 speakers performed the same sensorimotor adaptation experiments and tasks, which measured language-specific and domain-general abilities. The experiment manipulated the language groups (English-L1 and English-L2) and experimental conditions (baseline, early adaptation, late adaptation, and end). Linear mixed-effects model analyses indicated that auditory acuity was significantly associated with sensorimotor adaptation in L1 and L2 speakers. Analysis of vocal responses showed that L1 speakers exhibited significant sensorimotor adaptation under the early adaptation, late adaptation, and end conditions, whereas L2 speakers exhibited significant sensorimotor adaptation only under the late adaptation condition. Furthermore, the domain-general factors of working memory and executive control were not associated with adaptation/aftereffects in either L1 or L2 production, except for the role of working memory in aftereffects in L2 production. Overall, the study empirically supported the hypothesis that sensorimotor adaptation is predicted by language-specific factors such as auditory acuity and language experience, whereas general cognitive abilities do not play a major role in this process.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"846-869"},"PeriodicalIF":1.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41219702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-07-31 | DOI: 10.1177/00238309231185308
Olcay Türk, Sasha Calhoun
Phrasal Synchronization of Gesture With Prosody and Information Structure. Language and Speech, pp. 702-743.
This study investigates the synchronization of manual gestures with prosody and information structure using Turkish natural speech data. Prosody has long been linked to gesture as a key driver of gesture-speech synchronization, and gesture has a hierarchical phrasal structure similar to that of prosody. At the lowest level, gesture has been shown to be synchronized with prosody (e.g., apexes with pitch accents). However, less is known about higher levels, and even less about timing relationships with information structure, though information structure is signaled by prosody and linked to gesture. The present study analyzed phrase synchronization in 3 hr of Turkish narrations annotated for gesture, prosody, and information structure (topics and foci). The analysis of 515 gesture phrases showed no one-to-one synchronization with intermediate phrases, but their onsets and offsets were synchronized. Moreover, the information-structural units, topics and foci, were closely synchronized with gesture-phrase-medial stroke + post-hold combinations (i.e., apical areas). In addition, iconic and metaphoric gestures were more likely to be paired with foci, and deictics with topics. Overall, the results confirm synchronization of gesture and prosody at the phrasal level and provide evidence that gesture is directly sensitive to information structure. These findings show that speech and gesture production are more tightly connected than existing production models assume.
{"title":"Phrasal Synchronization of Gesture With Prosody and Information Structure.","authors":"Olcay Türk, Sasha Calhoun","doi":"10.1177/00238309231185308","DOIUrl":"10.1177/00238309231185308","url":null,"abstract":"<p><p>This study investigates the synchronization of manual gestures with prosody and information structure using Turkish natural speech data. Prosody has long been linked to gesture as a key driver of gesture-speech synchronization. Gesture has a hierarchical phrasal structure similar to prosody. At the lowest level, gesture has been shown to be synchronized with prosody (e.g., apexes and pitch accents). However, less is known about higher levels. Even less is known about timing relationships with information structure, though this is signaled by prosody and linked to gesture. The present study analyzed phrase synchronization in 3 hr of narrations in Turkish annotated for gesture, prosody, and information structure-topics and foci. The analysis of 515 gesture phrases showed that there was no one-to-one synchronization with intermediate phrases, but their onsets and offsets were synchronized. Moreover, information structural units, topics, and foci were closely synchronized with gesture phrase medial stroke + post-hold combinations (i.e., apical areas). In addition, iconic and metaphoric gestures were more likely to be paired with foci, and deictics with topics. Overall, the results confirm synchronization of gesture and prosody at the phrasal level and provide evidence that gesture shows a direct sensitivity to information structure. These show that speech and gesture production are more connected than assumed in existing production models.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"702-743"},"PeriodicalIF":1.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9898329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-09-15 | DOI: 10.1177/00238309231195263
Jiwon Hwang, Yu-An Lu
The Effect of Distributional Restrictions in Speech Perception: A Case Study From Korean and Taiwanese Southern Min. Language and Speech, pp. 744-771.
In Korean, voiced oral stops can occur intervocalically as allophones of their voiceless lenis counterparts; they can also occur word-initially as variants of nasal stops as a result of initial denasalization (e.g., /motu/→[bodu] "all"). However, neither [ŋ] nor [ɡ] (the denasalized variant of the velar nasal) is allowed in initial position, due to the phonotactic restriction against initial [ŋ] in Korean. Given this distribution of nasal and voiced stops, the study draws on the idea of cue informativeness, exploring (a) whether Korean listeners' attention to nasality and voicing cues reflects the distributional characteristics of nasal and voiced stops, and (b) whether such attention generalizes across places of articulation in the absence of relevant linguistic experience. In a forced-choice identification experiment, Korean listeners were more likely than Taiwanese listeners to perceive items on voiced-oral-to-nasal stop continua as nasal when they occurred in initial position than in intervocalic position, with the exception of velar stops. The results demonstrate that Korean listeners attended to the nasality cue more reliably in medial position than in initial position, where the cue is less informative due to initial denasalization. Two additional forced-choice identification experiments suggested that, upon hearing initial velar nasal [ŋ], Korean listeners variably employed different perceptual strategies (i.e., vowel insertion and place change) to repair the phonotactic illegality. These findings support exemplar models of speech perception in which cue attention is specific to position within a word, and to segments rather than to features.
{"title":"The Effect of Distributional Restrictions in Speech Perception: A Case Study From Korean and Taiwanese Southern Min.","authors":"Jiwon Hwang, Yu-An Lu","doi":"10.1177/00238309231195263","DOIUrl":"10.1177/00238309231195263","url":null,"abstract":"<p><p>In Korean, voiced oral stops can occur intervocalically as allophones of their voiceless lenis counterparts; they can also occur initially as variants of nasal stops as a result of initial denasalization (e.g., /motu/→[<b>b</b>o<b>d</b>u] \"all\"). However, neither [ŋ] nor [ɡ] (the denasalized variant of the velar nasal) is allowed in the initial position due to the phonotactic restriction against initial [ŋ] in Korean. Given the distribution of nasal and voiced stops in Korean, this study draws on the idea of cue informativeness, exploring (a) whether Korean listeners' attention to nasality and voicing cues is based on the distributional characteristics of nasal and voiced stops, and (b) whether their attention can be generalized across different places of articulation without such linguistic experience. In a forced-choice identification experiment, Korean listeners were more likely than Taiwanese listeners to perceive items on the voiced oral-to-nasal stop continua as nasal when they occurred in the initial position than in the intervocalic position, with the exception of velar stops. The results demonstrate that the Korean listeners attended to the nasality cue more reliably in the medial position than in the initial position, since the nasality cue in this position is less informative due to initial denasalization. Two additional forced-choice identification experiments suggested that upon hearing initial velar nasal [ŋ], Korean listeners variably employed different perceptual strategies (i.e., vowel insertion and place change) to repair the phonotactic illegality. These findings provide support for exemplar models of speech perception in which cue attention is specific to the position of a word, and to segments rather than to features.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"744-771"},"PeriodicalIF":1.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10591760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-30 | DOI: 10.1177/00238309241267876
Albandary Aldossari, Ryan Andrew Stevenson, Yasaman Rafat
An Investigation of Language-Specific and Orthographic Effects in L2 Arabic Geminate Production by Advanced Japanese- and English-Speaking Learners. Language and Speech, advance online publication.
Research has indicated that second-language learners have difficulty producing geminates accurately. Previous studies have also shown an effect of orthography on second-language speech production. We tested whether the existence of a length contrast in the first-language phonology aids second-language production of the same contrast. Furthermore, we examined the effect of exposure to orthographic input on geminate consonant production in a cross-script context. We tested the production of the Arabic geminate-singleton stop pairs /bː/-/b/, /tː/-/t/, /dː/-/d/, and /kː/-/k/, the nasal stop pair /mː/-/m/, and the emphatic stop pair /tˤː/-/tˤ/, as well as the effect of the diacritic used in Arabic to mark gemination, in a delayed imitation task and two reading tasks (orthographic with diacritics and orthographic without diacritics). A comparison of the productions of advanced Japanese-speaking learners, advanced English-speaking learners, and an Arabic-speaking control group showed that both learner groups were able to produce Arabic geminate stops; however, the Japanese-speaking learners exhibited an advantage over the English-speaking learners in the auditory-only task and in the presence of diacritics, highlighting the fact that orthographic effects may occur in some cross-script contexts.
{"title":"An Investigation of Language-Specific and Orthographic Effects in L2 Arabic geminate production by Advanced Japanese- and English-speaking learners.","authors":"Albandary Aldossari, Ryan Andrew Stevenson, Yasaman Rafat","doi":"10.1177/00238309241267876","DOIUrl":"https://doi.org/10.1177/00238309241267876","url":null,"abstract":"<p><p>Research has indicated that second-language learners have difficulty producing geminates accurately. Previous studies have also shown an effect of orthography on second-language speech production. We tested whether the existence of a contrast in the first language phonology for length aids the second-language production of the same contrast. Furthermore, we examined the effect of exposure to orthographic input on geminate consonant production in a cross-script context. We tested the production of Arabic geminate-singleton stop consonants [/bː/-/b/, /tː/-/t/, /dː/-/d/, and /kː/-/k/], a nasal stop consonant /mː/-/m/, and an emphatic stop consonant /tˤː/-/tˤ/, as well as the effect of the diacritic used in Arabic to mark gemination in a delayed imitation task and two reading tasks (ortho-with diacritics and ortho-without diacritics). A comparison of the productions of advanced Japanese-speaking learners, English-speaking learners, and an Arabic control group showed that both learner groups were able to produce Arabic geminate stops; however, the Japanese-speaking learners exhibited an advantage over the English-speaking learners in the auditory-only task and in the presence of diacritics, highlighting the fact that orthographic effects may occur in some cross-script contexts.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"238309241267876"},"PeriodicalIF":1.1,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142114557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-27 | DOI: 10.1177/00238309241270741
Violeta Gómez-Vicente, Gema Esquiva, Carmen Lancho, Kawthar Benzerdjeb, Antonia Angulo Jerez, Eva Ausó
Importance of Visual Support Through Lipreading in the Identification of Words in Spanish Language. Language and Speech, advance online publication.
We sought to examine the contribution of visual cues, such as lipreading, to the identification of familiar stimuli (words) and unfamiliar stimuli (phonemes), measured as percent accuracy. For that purpose, in this retrospective study, we presented lists of words and phonemes (spoken in a healthy adult female voice) in auditory-only (A) and audiovisual (AV) modalities to 65 normal-hearing Spanish male and female listeners classified into four age groups. Our results showed a remarkable benefit of AV information for word and phoneme recognition. Regarding gender, women performed better than men in both the A and AV modalities, although the differences were significant only for words, not for phonemes. Concerning age, significant differences in word recognition were detected in the A modality only between the youngest (18-29 years old) and oldest (⩾50 years old) groups. We conclude that visual information enhances word and phoneme recognition and that women are more influenced by visual signals than men in AV speech perception. By contrast, it seems that, overall, age is not a limiting factor for word recognition, with no significant differences observed in the AV modality.