Bridging Inferences and Reference Management: Evidence from an Experimental Investigation in Catalan and Russian
Pub Date: 2024-03-01 | Epub Date: 2023-06-05 | DOI: 10.1177/00238309231173337 | Language and Speech, pp. 203-227
Daria Seres, Joan Borràs-Comes, M Teresa Espinal
This article focuses on the choice of nominal forms in a language with articles (Catalan) in comparison to a language without articles (Russian). An experimental study (consisting of various naturalness judgment tasks) was run with speakers of these two languages, showing that in bridging contexts native speakers' preferences vary depending on whether reference is made to a single individual or to two disjoint referents. In the former case, Catalan speakers chose (in)definite NPs depending on their accessibility to contextual information that guarantees a unique interpretation (or the lack of it) for the entity referred to, whereas Russian speakers chose bare nominals as a default form. When reference is made to two disjoint referents (as encoded by the presence of an additional altre/drugoj "other" NP), speakers prefer an optimal combination of two indefinite NPs (i.e., un NP followed by un altre NP in Catalan; odin "some/a" NP followed by drugoj NP in Russian). This study shows how speakers of the two languages manage to combine grammatical knowledge (related to the meaning of the definite and indefinite articles and altre in Catalan, and the meaning of bare nominals, odin, and drugoj in Russian) with world knowledge activation and accessibility to discourse information.
{"title":"Bridging Inferences and Reference Management: Evidence from an Experimental Investigation in Catalan and Russian.","authors":"Daria Seres, Joan Borràs-Comes, M Teresa Espinal","doi":"10.1177/00238309231173337","DOIUrl":"10.1177/00238309231173337","url":null,"abstract":"<p><p>This article focuses on the choice of nominal forms in a language with articles (Catalan) in comparison to a language without articles (Russian). An experimental study (consisting of various naturalness judgment tasks) was run with speakers of these two languages which allowed to show that in bridging contexts native speakers' preferences vary when reference is made to one single individual or to two disjoint referents. In the former case, Catalan speakers chose (in)definite NPs depending on their accessibility to contextual information that guarantees a unique interpretation (or the lack of it) for the entity referred to. Russian speakers chose bare nominals as a default form. When reference is made to two disjoint referents (as encoded by the presence of an additional <i>altre/drugoj</i> \"other\" NP), speakers prefer an optimal combination of two indefinite NPs (i.e., <i>un</i> NP followed by <i>un altre</i> NP in Catalan; <i>odin</i> \"some/a\" NP followed by <i>drugoj</i> NP in Russian). This study shows how speakers of the two languages manage to combine grammatical knowledge (related to the meaning of the definite and the indefinite articles and <i>altre</i> in Catalan; and the meaning of bare nominals, <i>odin</i> and <i>drugoj</i> in Russian) with world knowledge activation and accessibility to discourse information.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"203-227"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9935008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Elliptical Responses to Direct and Indirect Requests for Information
Pub Date: 2024-03-01 | Epub Date: 2023-06-10 | DOI: 10.1177/00238309231176526 | Language and Speech, pp. 228-254
Katherine Chia, Michael P Kaschak
We present two studies examining the factors that lead speakers to produce elliptical responses to requests for information. Following the paradigm of Clark and of Levelt and Kelter, experimenters called businesses and asked about their closing time (e.g., Can you tell me what time you close?). Participants provided the requested information in full-sentence responses (We close at 9) or elliptical responses (At 9). A reanalysis of data from previous experiments using this paradigm shows that participants are more likely to produce an elliptical response when the question is a direct request for information (What time do you close?) than when it is an indirect request for information (Can you tell me what time you close?). Participants were less likely to produce an elliptical response when they began their answer with a yes/no response (e.g., Sure . . . we close at 9). A new experiment replicated these findings and further showed that elliptical responses were less likely when (1) irrelevant linguistic content was inserted between the question and the participant's response, and (2) participants verbalized signs of difficulty retrieving the requested information. The latter effect is most prominent in response to questions that are seen as very polite (May I ask you what time you close?). We discuss the roles that the recoverability of the intended meaning of the ellipsis, the accessibility of potential antecedents for the ellipsis, pragmatic factors, and memory retrieval play in shaping the production of ellipsis.
{"title":"Elliptical Responses to Direct and Indirect Requests for Information.","authors":"Katherine Chia, Michael P Kaschak","doi":"10.1177/00238309231176526","DOIUrl":"10.1177/00238309231176526","url":null,"abstract":"<p><p>We present two studies examining the factors that lead speakers to produce elliptical responses to requests for information. Following Clark and Levelt and Kelter, experimenters called businesses and asked about their closing time (e.g., <i>Can you tell me what time you close?</i>). Participants provided the requested information in full sentence responses (<i>We close at 9</i>) or elliptical responses (<i>At 9</i>). A reanalysis of data from previous experiments using this paradigm shows that participants are more likely to produce an elliptical response when the question is a direct request for information (<i>What time do you close?</i>) than when the question is an indirect request for information (<i>Can you tell me what time you close?</i>). Participants were less likely to produce an elliptical response when they began their answer by providing a yes/no response (e.g., <i>Sure . . . we close at 9</i>). A new experiment replicated these findings, and further showed that elliptical responses were less likely when (1) irrelevant linguistic content was inserted between the question and the participant's response, and (2) participants verbalized signs of difficulty retrieving the requested information. This latter effect is most prominent in response to questions that are seen as very polite (<i>May I ask you what time you close?</i>). We discuss the role that the recoverability of the intended meaning of the ellipsis, the accessibility of potential antecedents for the ellipsis, pragmatic factors, and memory retrieval play in shaping the production of ellipsis.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"228-254"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9655351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Playing With Fire Compounds: The Tonal Accents of Compounds in (North) Norwegian Preschoolers' Role-Play Register
Pub Date: 2024-03-01 | Epub Date: 2023-04-27 | DOI: 10.1177/00238309231161289 | Language and Speech, pp. 113-139 | Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11420583/pdf/
Bror-Magnus S Strand
Prosodic features are among the most salient features of dialect variation in Norway. It is therefore no wonder that the switch in prosodic systems is what caretakers and scholars first recognize when Norwegian children code-switch to something resembling the dialect of the capital (henceforth Urban East Norwegian, UEN) in role-play. With a focus on the system of lexical tonal accents, this paper investigates the spontaneous speech of North Norwegian children engaging in peer social role-play. By examining F0 contours extracted from a corpus of spontaneous peer play and comparing them with elicited baseline reference contours, this paper makes the case that children fail to apply the UEN-target tonal accent in compounds in role-play, although their production of tonal accents otherwise seems phonetically target-like for UEN. In other words, they perform in accordance with UEN phonetics, but not UEN morpho-phonology.
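For readers who want to try the contour-extraction step, below is a minimal sketch of how F0 contours can be pulled from recordings and time-normalized for comparison against baseline contours. This is not the author's pipeline: the praat-parselmouth library, the file paths, and the 30-point normalization are all assumptions made for illustration.

```python
# Sketch only: extract and time-normalize an F0 contour with the
# praat-parselmouth library (assumed dependency, not the author's code).
import numpy as np
import parselmouth  # pip install praat-parselmouth

def f0_contour(wav_path, n_points=30):
    """Return an n_points-long F0 contour over the voiced stretch of a file."""
    pitch = parselmouth.Sound(wav_path).to_pitch()
    f0 = pitch.selected_array['frequency']  # 0.0 marks unvoiced frames
    t = pitch.xs()
    voiced = f0 > 0
    # Resample the voiced stretch onto a fixed grid so tokens of different
    # durations can be averaged and compared point by point.
    grid = np.linspace(t[voiced][0], t[voiced][-1], n_points)
    return np.interp(grid, t[voiced], f0[voiced])

# Hypothetical usage: mean role-play contour vs. elicited baseline contour
# roleplay = np.mean([f0_contour(p) for p in roleplay_wavs], axis=0)
# baseline = np.mean([f0_contour(p) for p in baseline_wavs], axis=0)
```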
{"title":"Playing With Fire Compounds: The Tonal Accents of Compounds in (North) Norwegian Preschoolers' Role-Play Register.","authors":"Bror-Magnus S Strand","doi":"10.1177/00238309231161289","DOIUrl":"10.1177/00238309231161289","url":null,"abstract":"<p><p>Prosodic features are some of the most salient features of dialect variation in Norway. It is therefore no wonder that the switch in prosodic systems is what is first recognized by caretakers and scholars when Norwegian children code-switch to something resembling the dialect of the capital (henceforth Urban East Norwegian, UEN) in role-play. With a focus on the system of lexical tonal accents, this paper investigates the spontaneous speech of North Norwegian children engaging in peer social role-play. By investigating F0 contours extracted from a corpus of spontaneous peer play, and comparing them with elicited baseline reference contours, this paper makes the case that children fail to apply the target tonal accent consistent with UEN in compounds in role-play, although the production of tonal accents otherwise seems to be phonetically target like UEN. Put in other words, they perform in accordance with UEN phonetics, but not UEN morpho-phonology.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"113-139"},"PeriodicalIF":1.1,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11420583/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9349828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictability Associated With Reduction in Phonetic Signals Without Semantics-The Case of Glossolalia
Pub Date: 2024-03-01 | Epub Date: 2023-04-17 | DOI: 10.1177/00238309231163170 | Language and Speech, pp. 72-94 | Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10916350/pdf/
Samantha Link, Fabian Tomaschek
Glossolalia can be regarded as an instance of speech production in which practitioners produce syllables in seemingly random sequences. However, a closer inspection of glossolalia's statistical properties reveals that sequences show a Zipfian pattern similar to natural languages, with some syllables being more probable than others. It is well established that statistical properties of sequences are implicitly learned, and that these statistical properties correlate with changes in kinematic and speech behavior. For speech, this means that more predictable items are phonetically shorter. Accordingly, we hypothesized that if practitioners have learned a serial pattern in glossolalia in the same manner as in natural languages, its statistical properties should correlate with its phonetic characteristics. Our hypothesis was supported: we find significantly shorter syllables associated with higher syllable probabilities in glossolalia. We discuss this finding in relation to theories about the sources of probability-related changes in the speech signal.
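The core analysis lends itself to a compact illustration. The sketch below (toy data, not the authors' corpus or code) estimates syllable probabilities from a transcribed sequence, checks the Zipfian rank-frequency pattern, and correlates log probability with duration; a negative correlation corresponds to the reported shortening of more probable syllables.

```python
# Sketch only: syllable probabilities, Zipf check, and the
# probability-duration correlation, on invented toy data.
from collections import Counter
import numpy as np

# (syllable, duration in ms) pairs, as might come from a forced alignment
tokens = [("ba", 150), ("la", 165), ("ba", 145), ("ko", 210),
          ("ba", 140), ("la", 170), ("ri", 220), ("ba", 155)]

counts = Counter(syl for syl, _ in tokens)
total = sum(counts.values())
log_prob = {syl: np.log(n / total) for syl, n in counts.items()}

# Zipfian check: rank-frequency should be roughly linear in log-log space
ranks = np.arange(1, len(counts) + 1)
freqs = np.array(sorted(counts.values(), reverse=True))
zipf_slope = np.polyfit(np.log(ranks), np.log(freqs), 1)[0]

# More probable syllables should be shorter: expect a negative correlation
x = np.array([log_prob[syl] for syl, _ in tokens])
y = np.array([dur for _, dur in tokens], dtype=float)
r = np.corrcoef(x, y)[0, 1]
print(f"Zipf slope: {zipf_slope:.2f}, log-probability/duration r: {r:.2f}")
```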
{"title":"Predictability Associated With Reduction in Phonetic Signals Without Semantics-The Case of Glossolalia.","authors":"Samantha Link, Fabian Tomaschek","doi":"10.1177/00238309231163170","DOIUrl":"10.1177/00238309231163170","url":null,"abstract":"<p><p>Glossolalia can be regarded as an instance of speech production in which practitioners produce syllables in seemingly random sequences. However, a closer inspection of glossalalia's statistical properties reveals that sequences show a Zipfian pattern similar to natural languages, with some syllables being more probable than others. It is well established that statistical properties of sequences are implicitly learned, and that these statistical properties correlate with changes in kinematic and speech behavior. For speech, this means that more predictable items are phonetically shorter. Accordingly, we hypothesized for glossolalia that if practitioners have learned a serial pattern in glossolalia in the same manner as in natural languages, its statistical properties should correlate with its phonetic characteristics. Our hypothesis was supported. We find significantly shorter syllables associated with higher syllable probabilities in glossolalia. We discuss this finding in relation to theories about the sources of probability-related changes in the speech signal.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"72-94"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10916350/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9319010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptual Sensitivity to Tonal Alignment in Nuer
Pub Date: 2024-03-01 | Epub Date: 2023-04-25 | DOI: 10.1177/00238309231162299 | Language and Speech, pp. 95-112 | Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10916342/pdf/
Siri Gjersøe, Bert Remijsen
This paper examines the perceptual threshold in patterns of tonal timing (alignment) of Falling versus Low tones. The results indicate a remarkable sensitivity among listeners. In a perception experiment with 30 participants, we tested how native speakers of the West Nilotic language Nuer responded to stimuli in which the timing of the F0 fall that distinguishes Low versus Fall following a High target was manipulated. We measured the alignment difference required for responses to shift tone perception from 25% to 75%. The results show that listeners needed an average of only 19 ms to differentiate between the melodic shapes, and as little as 13 ms for one item. Perceptual sensitivity this fine-grained is not expected based on what is known about the Just Noticeable Difference (JND) from previous studies: results from non-tonal languages report a sensitivity threshold for tonal timing of at least 50 ms at category boundaries. This difference suggests that whether or not subjects speak a tone language may be a determining factor in their JND.
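The 25%-to-75% measure can be made concrete with a psychometric-function fit. The following sketch (invented response proportions, not the study's data or analysis code) fits a logistic curve to the proportion of "Fall" responses across alignment shifts and reads off the 25%-75% threshold.

```python
# Sketch only: fit a logistic psychometric function and compute the
# alignment difference spanning 25%-75% 'Fall' responses (toy data).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, mu, s):
    """P('Fall' response) as a function of F0-fall timing x (ms)."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / s))

shift_ms = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
p_fall = np.array([0.05, 0.12, 0.35, 0.55, 0.80, 0.93, 0.97])

(mu, s), _ = curve_fit(logistic, shift_ms, p_fall, p0=[30.0, 10.0])

# The curve crosses probability p at mu + s * logit(p), so the 25%-75%
# span is s * (logit(0.75) - logit(0.25)) = 2 * s * ln(3).
logit = lambda p: np.log(p / (1.0 - p))
threshold = s * (logit(0.75) - logit(0.25))
print(f"PSE = {mu:.1f} ms, 25%-75% threshold = {threshold:.1f} ms")
```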
{"title":"Perceptual Sensitivity to Tonal Alignment in Nuer.","authors":"Siri Gjersøe, Bert Remijsen","doi":"10.1177/00238309231162299","DOIUrl":"10.1177/00238309231162299","url":null,"abstract":"<p><p>This paper examines the perceptual threshold in patterns of tonal timing (alignment) of Falling versus Low tones. The results indicate a remarkable sensitivity among the listeners. In a perception experiment with 30 participants, we tested how native speakers of the West Nilotic language Nuer responded to stimuli in which the timing of the F0 fall that distinguishes Low versus Fall following a High target is manipulated. We measured the threshold for the responses to shift tone perception from 25% to 75%. The results show that listeners only needed an average of 19 ms to differentiate between the melodic shapes and as little as 13 ms for one item. Perceptual sensitivity this fine-grained is not expected based on what is known about the Just Noticeable Difference (JND) from previous studies. Results from non-tonal languages report a sensitivity threshold for tonal timing of at least 50 ms at category boundaries. This difference suggests that whether or not subjects speak a tone language may be a determining factor in their JND.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"95-112"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10916342/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9403203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptation at the Syntax-Semantics Interface: Evidence From a Vernacular Structure
Pub Date: 2024-03-01 | Epub Date: 2023-05-09 | DOI: 10.1177/00238309231164972 | Language and Speech, pp. 140-165 | Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10916346/pdf/
Frances Blanchette, Erin Flannery, Carrie Jackson, Paul Reed
This study expands on psycholinguistic research on linguistic adaptation, the phenomenon whereby speakers change how they comprehend or produce structures as a result of cumulative exposure to less frequent or unfamiliar linguistic structures. It asked whether speakers can learn semantic and syntactic properties of the American English vernacular negative auxiliary inversion (NAI) structure (e.g., didn't everybody eat, meaning "not everybody ate") during the course of an experiment. Formal theoretical analyses of NAI informed the design of a task in which American English-speaking participants unfamiliar with this structure were exposed to NAI sentences in either semantically ambiguous or unambiguous contexts. Participants rapidly adapted to the interpretive properties of NAI, selecting responses similar to what would be expected of a native speaker after only limited exposure to semantically ambiguous input. On a separate ratings task, participants displayed knowledge of syntactic restrictions on NAI subject type, despite having had no previous exposure. We discuss the results in the context of other experimental studies of adaptation and suggest the implementation of top-down strategies via analogy to other familiar structure types as a possible explanation for the behaviors observed in this study. The study illustrates the value of integrating insights from formal theoretical research and psycholinguistic methods in research on adaptation, and highlights the need for more interdisciplinary and cross-disciplinary work in both experimental and naturalistic contexts to understand this phenomenon.
{"title":"Adaptation at the Syntax-Semantics Interface: Evidence From a Vernacular Structure.","authors":"Frances Blanchette, Erin Flannery, Carrie Jackson, Paul Reed","doi":"10.1177/00238309231164972","DOIUrl":"10.1177/00238309231164972","url":null,"abstract":"<p><p>Expanding on psycholinguistic research on linguistic adaptation, the phenomenon whereby speakers change how they comprehend or produce structures as a result of cumulative exposure to less frequent or unfamiliar linguistic structures, this study asked whether speakers can learn semantic and syntactic properties of the American English vernacular negative auxiliary inversion (NAI) structure (e.g., <i>didn't everybody eat</i>, meaning \"not everybody ate\") during the course of an experiment. Formal theoretical analyses of NAI informed the design of a task in which American English-speaking participants unfamiliar with this structure were exposed to NAI sentences in either semantically ambiguous or unambiguous contexts. Participants rapidly adapted to the interpretive properties of NAI, selecting responses similar to what would be expected of a native speaker after only limited exposure to semantically ambiguous input. On a separate ratings task, participants displayed knowledge of syntactic restrictions on NAI subject type, despite having no previous exposure. We discuss the results in the context of other experimental studies of adaptation and suggest the implementation of top-down strategies via analogy to other familiar structure types as possible explanations for the behaviors observed in this study. The study illustrates the value of integrating insights from formal theoretical research and psycholinguistic methods in research on adaptation and highlights the need for more interdisciplinary and cross-disciplinary work in both experimental and naturalistic contexts to understand this phenomenon.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"140-165"},"PeriodicalIF":1.1,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10916346/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9444192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Interaction Effect of Pronunciation and Lexicogrammar on Comprehensibility: A Case of Mandarin-Accented English
Pub Date: 2024-03-01 | Epub Date: 2023-03-06 | DOI: 10.1177/00238309231156918 | Language and Speech, pp. 3-18
Yongzhi Miao, Heath Rose, Sepideh Hosseini
Scholars have argued that comprehensibility (i.e., ease of understanding), not nativelike performance, should be prioritized in second language learning, which has inspired numerous studies exploring factors that affect comprehensibility. However, most of these studies did not consider potential interaction effects among these factors, resulting in a limited understanding of comprehensibility and less precise implications. This study investigates how pronunciation and lexicogrammar influence the comprehensibility of Mandarin-accented English. A total of 687 listeners were randomly allocated into six groups and rated (a) one baseline and (b) one of six experimental recordings for comprehensibility on a 9-point scale. The baseline recording, a 60-s spontaneous speech sample by an L1 English speaker with an American accent, was the same across groups. The six 75-s experimental recordings were the same in content but differed in (a) the speaker's degree of foreign accent (American, moderate Mandarin, and heavy Mandarin) and (b) lexicogrammar (with errors vs. without errors). The study found that pronunciation and lexicogrammar interacted to influence comprehensibility: whether pronunciation affected comprehensibility depended on the speaker's lexicogrammar, and vice versa. The results have implications for theory-building to refine comprehensibility, as well as for pedagogy and testing priorities.
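As a rough illustration of what "interaction" means here, the sketch below simulates ratings for the 3 (accent) x 2 (lexicogrammar) design and tests the interaction term with a two-way ANOVA. The data, cell sizes, and effect sizes are invented, and the study's own statistical analysis may well differ.

```python
# Sketch only: a 3 x 2 interaction test on simulated comprehensibility
# ratings (invented data; not the study's analysis or effect sizes).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(7)
rows = []
for a_i, accent in enumerate(["american", "moderate_mandarin", "heavy_mandarin"]):
    for grammar in ["no_errors", "errors"]:
        # Invented pattern: errors hurt ratings more as accent strengthens,
        # which is what a pronunciation x lexicogrammar interaction looks like.
        penalty = (0.5 + 0.6 * a_i) if grammar == "errors" else 0.0
        mean = 8.0 - 1.2 * a_i - penalty
        for rating in rng.normal(mean, 1.0, size=30):
            rows.append({"accent": accent, "grammar": grammar,
                         "rating": float(np.clip(rating, 1, 9))})

df = pd.DataFrame(rows)
model = ols("rating ~ C(accent) * C(grammar)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # see the C(accent):C(grammar) row
```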
{"title":"The Interaction Effect of Pronunciation and Lexicogrammar on Comprehensibility: A Case of Mandarin-Accented English.","authors":"Yongzhi Miao, Heath Rose, Sepideh Hosseini","doi":"10.1177/00238309231156918","DOIUrl":"10.1177/00238309231156918","url":null,"abstract":"<p><p>Scholars have argued that <i>comprehensibility</i> (i.e., ease of understanding), not nativelike performance, should be prioritized in second language learning, which inspired numerous studies to explore factors affecting comprehensibility. However, most of these studies did not consider potential interaction effects of these factors, resulting in a limited understanding of comprehensibility and less precise implications. This study investigates how pronunciation and lexicogrammar influences the comprehensibility of Mandarin-accented English. A total of 687 listeners were randomly allocated into six groups and rated (a) one baseline and (b) one of six experimental recordings for comprehensibility on a 9-point scale. The baseline recording, a 60 s spontaneous speech by an L1 English speaker with an American accent, was the same across groups. The six 75-s experimental recordings were the same in content but differed in (a) speakers' degree of foreign accent (American, moderate Mandarin, and heavy Mandarin) and (b) lexicogrammar (with errors vs. without errors). The study found that pronunciation and lexicogrammar interacted to influence comprehensibility. That is, whether pronunciation affected comprehensibility depended on speakers' lexicogrammar, and vice versa. The results have implications for theory-building to refine comprehensibility, as well as for pedagogy and testing priorities.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"3-18"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10831757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Disentangling the Role of Biphone Probability From Neighborhood Density in the Perception of Nonwords
Pub Date: 2024-03-01 | Epub Date: 2023-05-09 | DOI: 10.1177/00238309231164982 | Language and Speech, pp. 166-202
Jeremy Steffman, Megha Sundara
In six experiments we explored how biphone probability and lexical neighborhood density influence listeners' categorization of vowels embedded in nonword sequences. We found independent effects of each. Listeners shifted categorization of a phonetic continuum to create a higher probability sequence, even when neighborhood density was controlled. Similarly, listeners shifted categorization to create a nonword from a denser neighborhood, even when biphone probability was controlled. Next, using a visual world eye-tracking task, we determined that biphone probability information is used rapidly by listeners in perception. In contrast, task complexity and irrelevant variability in the stimuli interfere with neighborhood density effects. These results support a model in which both biphone probability and neighborhood density independently affect word recognition, but only biphone probability effects are observed early in processing.
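To make the two measures concrete, here is a minimal sketch with a toy orthographic lexicon standing in for phonemic transcriptions (invented data; not the study's materials or stimuli): biphone probability as the mean log probability of adjacent segment pairs, and neighborhood density as the count of real words one segment edit away.

```python
# Sketch only: biphone probability and neighborhood density for a nonword,
# computed over a toy lexicon (orthography stands in for phone strings).
from collections import Counter
import math

lexicon = ["bat", "cat", "bit", "but", "cut", "can", "ban", "bad"]

biphone_counts = Counter(w[i:i + 2] for w in lexicon for i in range(len(w) - 1))
total = sum(biphone_counts.values())

def mean_biphone_logprob(item, alpha=1.0):
    """Mean log probability of adjacent segment pairs, add-alpha smoothed."""
    pairs = [item[i:i + 2] for i in range(len(item) - 1)]
    denom = total + alpha * len(biphone_counts)
    return sum(math.log((biphone_counts[p] + alpha) / denom) for p in pairs) / len(pairs)

def neighborhood_density(item):
    """Number of lexicon words one substitution, deletion, or addition away."""
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    edits = {item[:i] + c + item[i + 1:] for i in range(len(item)) for c in alphabet}
    edits |= {item[:i] + item[i + 1:] for i in range(len(item))}
    edits |= {item[:i] + c + item[i:] for i in range(len(item) + 1) for c in alphabet}
    edits.discard(item)
    return sum(w in edits for w in lexicon)

# "bas" is a nonword here: its neighbors are bat, ban, bad (density 3)
print(mean_biphone_logprob("bas"), neighborhood_density("bas"))
```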
{"title":"Disentangling the Role of Biphone Probability From Neighborhood Density in the Perception of Nonwords.","authors":"Jeremy Steffman, Megha Sundara","doi":"10.1177/00238309231164982","DOIUrl":"10.1177/00238309231164982","url":null,"abstract":"<p><p>In six experiments we explored how biphone probability and lexical neighborhood density influence listeners' categorization of vowels embedded in nonword sequences. We found independent effects of each. Listeners shifted categorization of a phonetic continuum to create a higher probability sequence, even when neighborhood density was controlled. Similarly, listeners shifted categorization to create a nonword from a denser neighborhood, even when biphone probability was controlled. Next, using a visual world eye-tracking task, we determined that biphone probability information is used rapidly by listeners in perception. In contrast, task complexity and irrelevant variability in the stimuli interfere with neighborhood density effects. These results support a model in which both biphone probability and neighborhood density independently affect word recognition, but only biphone probability effects are observed early in processing.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"166-202"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9444199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kinect-ing the Dots: Using Motion-Capture Technology to Distinguish Sign Language Linguistic From Gestural Expressions
Pub Date: 2024-03-01 | Epub Date: 2023-06-14 | DOI: 10.1177/00238309231169502 | Language and Speech, pp. 255-276
Rose Stamp, David Cohn, Hagit Hel-Or, Wendy Sandler
Just as vocalization proceeds in a continuous stream in speech, so too do movements of the hands, face, and body in sign languages. Here, we use motion-capture technology to distinguish lexical signs in sign language from other common types of expression in the signing stream. One type of expression is constructed action, the enactment of (aspects of) referents and events by (parts of) the body. Another is classifier constructions, the manual representation of analogue and gradient motions and locations simultaneously with specified referent morphemes. The term signing is commonly used for all of these, but we show that not all visual signals in sign languages are of the same type. In this study of Israeli Sign Language, we use motion capture to show that the motion of lexical signs differs significantly along several kinematic parameters from that of the two other modes of expression: constructed action and the classifier forms. In so doing, we show how motion-capture technology can help to define the universal linguistic category "word," and to distinguish it from the expressive gestural elements that are commonly found across sign languages.
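The kinematic comparison rests on parameters that can be derived directly from sampled trajectories. Below is a minimal sketch (an invented wrist trajectory and sampling rate, not the study's Kinect data) of computing speed, acceleration, and path length, the kind of parameters along which sign types can be compared.

```python
# Sketch only: basic kinematic parameters from a sampled 3D trajectory
# (toy sinusoidal wrist path; the study's actual parameters may differ).
import numpy as np

fs = 30.0                              # assumed sampling rate in Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
pos = np.stack([np.sin(2 * np.pi * 0.5 * t),        # x: slow arc
                0.5 * np.cos(2 * np.pi * 0.5 * t),  # y: slow arc
                0.05 * t], axis=1)                  # z: slight drift

vel = np.gradient(pos, 1.0 / fs, axis=0)            # per-axis velocity
speed = np.linalg.norm(vel, axis=1)                 # scalar speed (units/s)
accel = np.gradient(speed, 1.0 / fs)                # tangential acceleration
path_len = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))

print(f"peak speed {speed.max():.2f}, mean |accel| {np.abs(accel).mean():.2f}, "
      f"path length {path_len:.2f}")
```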
{"title":"Kinect-ing the Dots: Using Motion-Capture Technology to Distinguish Sign Language Linguistic From Gestural Expressions.","authors":"Rose Stamp, David Cohn, Hagit Hel-Or, Wendy Sandler","doi":"10.1177/00238309231169502","DOIUrl":"10.1177/00238309231169502","url":null,"abstract":"<p><p>Just as vocalization proceeds in a continuous stream in speech, so too do movements of the hands, face, and body in sign languages. Here, we use motion-capture technology to distinguish lexical signs in sign language from other common types of expression in the signing stream. One type of expression is <i>constructed action</i>, the enactment of (aspects of) referents and events by (parts of) the body. Another is <i>classifier constructions</i>, the manual representation of analogue and gradient motions and locations simultaneously with specified referent morphemes. The term <i>signing</i> is commonly used for all of these, but we show that not all visual signals in sign languages are of the same type. In this study of Israeli Sign Language, we use motion capture to show that the motion of lexical signs differs significantly along several kinematic parameters from that of the two other modes of expression: constructed action and the classifier forms. In so doing, we show how motion-capture technology can help to define the universal linguistic category \"word,\" and to distinguish it from the expressive gestural elements that are commonly found across sign languages.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"255-276"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9776155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Effects of Acoustic and Semantic Enhancements on Perception of Native and Non-Native Speech
Pub Date: 2024-03-01 | Epub Date: 2023-03-26 | DOI: 10.1177/00238309231156615 | Language and Speech, pp. 40-71
Misaki Kato, Melissa M Baese-Berk
Previous research has shown that native listeners benefit from clearly produced speech, as well as from predictable semantic context, when these enhancements are delivered in native speech. However, it is unclear whether native listeners benefit from acoustic and semantic enhancements differently when listening to other varieties of speech, including non-native speech. The current study examines to what extent native English listeners benefit from acoustic and semantic cues present in native and non-native English speech. Native English listeners transcribed sentence-final words of different levels of semantic predictability, produced in plain- or clear-speaking styles by native English talkers and by native Mandarin talkers of higher and lower proficiency in English. The perception results demonstrated that listeners benefited from semantic cues in higher- and lower-proficiency talkers' speech (i.e., transcribed speech more accurately), but not from acoustic cues, even though higher-proficiency talkers did make substantial acoustic enhancements from plain to clear speech. The current results suggest that native listeners benefit more robustly from semantic cues than from acoustic cues when those cues are embedded in non-native speech.
{"title":"The Effects of Acoustic and Semantic Enhancements on Perception of Native and Non-Native Speech.","authors":"Misaki Kato, Melissa M Baese-Berk","doi":"10.1177/00238309231156615","DOIUrl":"10.1177/00238309231156615","url":null,"abstract":"<p><p>Previous research has shown that native listeners benefit from clearly produced speech, as well as from predictable semantic context when these enhancements are delivered in native speech. However, it is unclear whether native listeners benefit from acoustic and semantic enhancements differently when listening to other varieties of speech, including non-native speech. The current study examines to what extent native English listeners benefit from acoustic and semantic cues present in native and non-native English speech. Native English listeners transcribed sentence final words that were of different levels of semantic predictability, produced in plain- or clear-speaking styles by Native English talkers and by native Mandarin talkers of higher- and lower-proficiency in English. The perception results demonstrated that listeners benefited from semantic cues in higher- and lower-proficiency talkers' speech (i.e., transcribed speech more accurately), but not from acoustic cues, even though higher-proficiency talkers did make substantial acoustic enhancements from plain to clear speech. The current results suggest that native listeners benefit more robustly from semantic cues than from acoustic cues when those cues are embedded in non-native speech.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":" ","pages":"40-71"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9177266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}