Pub Date: 2023-09-01 | DOI: 10.1177/00238309221119185
Anna Tendera, Matthew Rispoli, Ambikaipakan Sethilselvan, Heecheong Chon, Torrey M Loucks
A phenomenon called "repetition reduction" can increase articulation rate in adults by facilitating phonetic and motor processes, indicating flexibility in the control of articulation rate. Young children, who speak much more slowly, may not have the same speech motor flexibility, which would result in the absence of the repetition reduction effect. In this study, we tested whether young children's spontaneous repetitions are produced with a faster articulation rate than their original utterances. Twelve monolingual English-speaking children were observed at four time points between 2;0 and 3;0 years of age. Using multilevel models, we found a significant increase in articulation rate and syllable count for all utterances over the 1-year period. At each time point, however, the repeated utterances were produced significantly faster than the original utterances, even though their content and syllable count differed minimally. Our findings conform to the pattern of adult studies, suggesting that a "naturistic" form of repetition reduction is already present in the speech of children at 2;0 years. Although certain aspects of speech motor control are still undergoing rapid development, existing motor capability at 2;0 already supports flexible changes in articulation rate, including repetition reduction.
"It's Mine, . . . It's Mine: Unsolicited Repetitions Are Reduced in Toddlers." Language and Speech, 66(3), 734-755. Open access: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/1c/e4/10.1177_00238309221119185.PMC10394958.pdf
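The core measure in this study, articulation rate, is conventionally computed as syllables per second of speaking time, with pauses excluded. A minimal sketch of that computation, with invented utterance values (this is illustrative only, not the authors' analysis pipeline):

```python
def articulation_rate(n_syllables, duration_s, pause_s=0.0):
    """Syllables per second of speaking time, with pause time excluded."""
    speaking_time = duration_s - pause_s
    if speaking_time <= 0:
        raise ValueError("pause time must be shorter than the utterance")
    return n_syllables / speaking_time

# Hypothetical utterance pair: same content spoken twice, with the
# repeated token shorter in duration, hence faster in rate.
original = articulation_rate(n_syllables=4, duration_s=2.0, pause_s=0.4)  # 2.5 syll/s
repeated = articulation_rate(n_syllables=4, duration_s=1.5, pause_s=0.2)  # ~3.1 syll/s
assert repeated > original  # the "repetition reduction" pattern
```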
Pub Date: 2023-09-01 | DOI: 10.1177/00238309221111752
Filip Nenadić, Benjamin V Tucker, Louis Ten Bosch
We present an implementation of DIANA, a computational model of spoken word recognition, to model responses collected in the Massive Auditory Lexical Decision (MALD) project. DIANA is an end-to-end model, including an activation and decision component that takes the acoustic signal as input, activates internal word representations, and outputs lexicality judgments and estimated response latencies. Simulation 1 presents the process of creating acoustic models required by DIANA to analyze novel speech input. Simulation 2 investigates DIANA's performance in determining whether the input signal is a word present in the lexicon or a pseudoword. In Simulation 3, we generate estimates of response latency and correlate them with general tendencies in participant responses in MALD data. We find that DIANA performs fairly well in free word recognition and lexical decision. However, the current approach for estimating response latency provides estimates opposite to those found in behavioral data. We discuss these findings and offer suggestions as to what a contemporary model of spoken word recognition should be able to do.
"Computational Modeling of an Auditory Lexical Decision Experiment Using DIANA." Language and Speech, 66(3), 564-605. Open access: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/36/4f/10.1177_00238309221111752.PMC10394956.pdf
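The abstract describes DIANA as an activation-and-decision model: candidates accumulate activation as input arrives, and the decision (word vs. pseudoword) plus its timing fall out of the dynamics. A much-reduced toy sketch of that idea, with an invented lexicon, threshold, and matching rule (this is not the DIANA implementation, which operates on the acoustic signal):

```python
# Toy activation-and-decision sketch, loosely inspired by the description
# above. Input segments arrive one at a time; each lexical candidate's
# activation is the proportion of its segments matched so far, and the
# first threshold crossing yields a "word" decision whose time step stands
# in for a response latency.

LEXICON = ["cat", "dog", "cap"]  # invented mini-lexicon

def decide(segments, lexicon=LEXICON, threshold=0.99):
    """Return ('word' or 'pseudoword', decision time in segments)."""
    for t in range(1, len(segments) + 1):
        prefix = segments[:t]
        best = max(
            sum(a == b for a, b in zip(prefix, w)) / len(w) for w in lexicon
        )
        if best >= threshold:
            return "word", t
    return "pseudoword", len(segments)

print(decide("cat"))  # ('word', 3): a candidate fully matches at segment 3
print(decide("cag"))  # ('pseudoword', 3): no candidate ever fully matches
```

Even this caricature makes the paper's point concrete: the latency estimate is a by-product of the activation dynamics, so whether it tracks behavioral latencies depends entirely on how activation is computed.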
Pub Date: 2023-09-01 | DOI: 10.1177/00238309221133100
James Turner
This study analyzes the production of native (L1) and foreign (L2) vowels by 42 L1 English learners of French (ELoF) at the start and end of a 6-month residence abroad (RA) in a French-speaking country. Data are also reported from a delayed post-test, which takes place 10 months after a subsection of participants (n = 27) return to the L1 English environment. Results reveal systemic phonetic drift in ELoF's L1 English vowels over the RA, and this accompanies the phonetic development occurring in the participants' L2 French vowel system, a phenomenon we label "tandem drift." This L1-L2 link is also supported by interspeaker variation: the individuals whose L2 French vowels shift the most are also the participants who exhibit the most substantial L1 phonetic drift in the same direction. Results for the L1 re-immersion time point suggest a partial, but not complete, reversal of phonetic drift, whereas no reversal of the L2 gains made over the RA is apparent. Nevertheless, at the individual level, the learners whose L2 gains reverse the most upon L1 re-immersion are also most likely to exhibit reverse phonetic drift in their L1. Overall, these findings indicate a relationship between L2 speech learning and L1 phonetic drift, which we argue is driven by the global phonetic properties of both L2 and L1 becoming linked at a representational level. Although these representations appear malleable, it is clear that recent changes are not guaranteed to reverse despite substantial re-exposure to L1 input. Implications for the distinction between drift and attrition are discussed.
"Phonetic Development of an L2 Vowel System and Tandem Drift in the L1: A Residence Abroad and L1 Re-Immersion Study." Language and Speech, 66(3), 756-785. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10394973/pdf/
Pub Date: 2023-07-31 | DOI: 10.31219/osf.io/jhsfc
Khia A. Johnson, Molly Babel
A recent model of sound change posits that the direction of change is determined, at least in part, by the distribution of variation within speech communities. We explore this model in the context of bilingual speech, asking whether the less variable language constrains phonetic variation in the more variable language, using a corpus of spontaneous speech from early Cantonese-English bilinguals. As predicted, given the phonetic distributions of stop obstruents in Cantonese compared with English, intervocalic English /b d g/ were produced with less voicing for Cantonese-English bilinguals and word-final English /t k/ were more likely to be unreleased compared with spontaneous speech from two monolingual English control corpora. Whereas voicing initial obstruents can be gradient in Cantonese, the release of final obstruents is prohibited. Neither Cantonese-English bilingual initial voicing nor word-final stop release patterns were significantly impacted by language mode. These results provide evidence that the phonetic variation in crosslinguistically linked categories in bilingual speech is shaped by the distribution of phonetic variation within each language, thus suggesting a mechanistic account for why some segments are more susceptible to cross-language influence than others.
"Language Contact Within the Speaker: Phonetic Variation and Crosslinguistic Influence." Language and Speech, advance online publication (article 238309231182592).
Pub Date: 2023-06-01 | DOI: 10.1177/00238309221101560
Natalie Braber, Harriet Smith, David Wright, Alexander Hardy, Jeremy Robson
Historically, less research has been carried out on earwitness than on eyewitness testimony. In some cases, however, earwitness evidence might play an important role in securing a conviction. This paper focuses on accent, a central characteristic of voices in a forensic linguistic context. It reports two experiments (Experiment 1, n = 41; Experiment 2, n = 57) in which participants from a wide range of locations around the United Kingdom judged voices from England, Scotland, Wales, Northern Ireland, and Ireland; we examine the accuracy and confidence of their accent judgments, the specificity of the answers given, and how this varies across these regions. Our findings show that accuracy is variable and that participants are more likely to be accurate when using vaguer descriptions (such as "Scottish") than more specific ones. Furthermore, although participants lack the meta-linguistic ability to describe the features of accents, they are able to name particular words and pronunciations that helped them make their decisions.
"Assessing the Specificity and Accuracy of Accent Judgments by Lay Listeners." Language and Speech, 66(2), 267-290. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10230595/pdf/
Pub Date: 2023-06-01 | DOI: 10.1177/00238309221097978
Josiane Riverin-Coutlée, Johanna-Pascale Roy, Michele Gubian
Second dialect acquisition (SDA) can be defined as the process through which geographically mobile individuals adapt to new dialect features of their first language. Two common methodological approaches in SDA studies could lead to underestimating the phonetic changes that mobile speakers may experience: only large phonetic differences between dialects are considered, and external sources are used to infer what should have been the speakers' original dialect. By contrast, in this study, we carry out a longitudinal analysis to empirically assess the speakers' baseline and shift away from it with no priors as to which features should change or not. Furthermore, we focus on Quebec French, a variety with a relatively crowded vowel space. Using Mahalanobis distances, we measure how acoustic characteristics of vowels produced by 15 mobile speakers change relative to those of a control group of 8 sedentary speakers, with the mobile participants recorded right after they moved to Quebec City, then a year later. Overall, the results show a reduction of Mahalanobis distances over time, indicating convergence toward the control system. Convergence also tends to be greater in denser areas of the vowel space. These results suggest that phonetic changes during SDA could be finer than previously thought. This study calls for the use of methodological approaches that can reveal such trends, and contributes to uncovering the extent of phonetic flexibility during adulthood.
"Using Mahalanobis Distances to Investigate Second Dialect Acquisition: A Study on Quebec French." Language and Speech, 66(2), 291-321. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10230596/pdf/
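The core measure here is the Mahalanobis distance of a speaker's vowel tokens from the control group's distribution, which accounts for the variance and correlation of the acoustic dimensions rather than treating them as independent. A minimal sketch of that computation in F1/F2 space, with invented formant values (this illustrates the measure only, not the authors' pipeline):

```python
import numpy as np

# Hypothetical control group: 200 tokens of one vowel, [F1, F2] in Hz.
rng = np.random.default_rng(seed=1)
control = rng.normal(loc=[550.0, 1800.0], scale=[40.0, 120.0], size=(200, 2))

# The distance is computed against the control mean and inverse covariance,
# so a deviation along a high-variance dimension counts for less.
mu = control.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(control, rowvar=False))

def mahalanobis(token):
    """Distance of one [F1, F2] token from the control distribution."""
    d = np.asarray(token, dtype=float) - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Recorded on arrival vs. a year later: convergence toward the control
# system appears as a smaller distance at the second time point.
d_arrival = mahalanobis([660.0, 2100.0])
d_later = mahalanobis([575.0, 1870.0])
assert d_later < d_arrival
```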
Pub Date: 2023-06-01 | DOI: 10.1177/00238309221107002
Heidi Proctor, Kearsy Cormier
Mouth activity forms a key component of all sign languages. This can be divided into mouthings, which originate from words in the ambient spoken language, and mouth gestures, which do not. This study examines the relationship between the distribution of mouthings co-occurring with verb signs in British Sign Language (BSL) and various linguistic and social factors, using the BSL Corpus. We find considerable variation between participants and a lack of homogeneity in mouth actions with particular signs. This accords with previous theories that mouthings constitute code-blending between spoken and signed languages (similar to code-switching or code-mixing in spoken languages) rather than being a phonologically or lexically compulsory part of the sign. We also find a strong association between production of plain verbs (which are body-anchored and cannot be modified spatially) and increased mouthing. In addition, we observe significant effects of region (signers from the south of the United Kingdom mouth more than those from the north), gender (women mouth more than men), and age (signers aged 16-35 years produce fewer mouthings than older participants). We find no significant effect of language background (deaf vs. hearing family). Based on these findings, we argue that the multimodal, multilingual, and simultaneous nature of code-blending in sign languages fits well within the paradigm of translanguaging. We discuss implications of this for concepts of translanguaging, code-switching, code-mixing, and related phenomena, highlighting the need to consider not just modality and linguistic codes but also sequential versus simultaneous patterning.
"Sociolinguistic Variation in Mouthings in British Sign Language: A Corpus-Based Study." Language and Speech, 66(2), 412-441. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10230597/pdf/
Pub Date: 2023-06-01 | DOI: 10.1177/00238309221116130
Seunghun J Lee, Julián Villegas, Mira Oh
This study examines articulatory and acoustic data in order to investigate the non-coalescence of /h/ in South Jeolla. Seoul Korean speakers produce /pap/ "rice" followed by /hana/ "one" as [pa.pha.na] with the coalescence of /p/ and /h/; this is called an aspiration merger. In South Jeolla Korean, this merger may be blocked, as in cases where speakers produce /pap+hana/ as [pa.ba.na]. Electroglottographic (EGG) data indicate the existence of two groups of South Jeolla speakers: one that merges the plosive and /h/ (the merger group), and the other with the canonical South Jeolla Korean pronunciation that does not merge the two consonants (the non-merger group). The production of non-coalesced lenis stops in the non-merger group is phonetically comparable with an underlying lenis stop produced by both of the groups. However, in the non-merger group, the open quotient (OQ) of a vowel following a non-coalesced lenis stop is higher (breathier) than that of an underlying lenis stop. Spectral tilt results display a similarly increased breathiness when the vowel follows a non-coalesced lenis stop. As for the non-merger group of South Jeolla, we argue that speakers display incomplete neutralization such that the non-merger group produces two types of voiced lenis stops differing in the phonation of the following vowel. These findings suggest that previous phonological analyses that posit the /h/-deletion in the non-merger group of South Jeolla Korean need to be revisited.
"The Non-Coalescence of /h/ and Incomplete Neutralization in South Jeolla Korean." Language and Speech, 66(2), 442-473.
Pub Date: 2023-06-01 | DOI: 10.1177/00238309221114865
Chih-Chao Chang, Hui-Chun Yang
Using a cross-modal picture-word interference (PWI) task, we examined phonological representations and encoding in Mandarin-speaking children and adults. Pictures of monosyllabic words were presented visually, with auditory primes presented before, concurrent with, or after the picture's appearance (SOAs of -200, -100, 0, and +150 ms). Primes were related to the targets in terms of Onset, Rhyme, Tone, Onset and Tone, or Rhyme and Tone, or were unrelated. The rhymes of target words were counterbalanced between simple and complex structures to examine effects of rhyme complexity. Twenty Mandarin-speaking adults (aged 20;3 to 23;10), 20 school-age children (aged 9;1 to 10;11), and 20 preschoolers (aged 5;0 to 5;11) were asked to name the pictures as quickly as possible while ignoring the primes played over a headset. The results showed that adults exhibited consistent Onset and Onset-Tone priming effects across later SOAs, while the older children (9- to 10-year-olds) exhibited Onset, Rhyme, Onset-Tone, and Rhyme-Tone priming effects across later SOAs. The younger children (5-year-olds), in contrast, exhibited Rhyme and Rhyme-Tone priming effects at the earliest SOA. For both groups of children, Rhyme and Rhyme-Tone priming effects were complexity-dependent. Our findings suggest that the phonological representations of Mandarin speakers develop from holistic units into those with an onset-based structure. Moreover, an incremental processing pattern at the sub-syllabic level gradually develops around the age of 9 or 10, though susceptibility to holistic phonological similarity is retained to some degree.
{"title":"Investigation of Mandarin Word Production in Children and Adults: Evidence from Phonological Priming with Non-words.","authors":"Chih-Chao Chang, Hui-Chun Yang","doi":"10.1177/00238309221114865","DOIUrl":"https://doi.org/10.1177/00238309221114865","url":null,"abstract":"<p><p>Using a cross-modal picture-word interference (PWI) task, we examined phonological representations and encoding in Mandarin-speaking children and adults. Pictures of monosyllabic words were presented visually, with auditory primes presented before, concurrent with, or after the picture's appearance (SOA -200, -100, 0, +150). Primes were related to the targets in terms of Onset, Rhyme, Tone, Onset and Tone, Rhyme and Tone, or were unrelated. The rhymes of target words were counterbalanced between simple and complex structures to examine effects of rhyme complexity. Twenty Mandarin-speaking adults (aged 20;3 to 23;10), 20 school-age children (aged 9;1 to 10;11), and 20 preschoolers (aged 5;0 to 5;11) were asked to name the pictures as quickly as possible while ignoring the primes played over a headset. The results showed that adults exhibited consistent Onset and Onset-Tone priming effects across later SOAs, while the older children (9- to 10-year-olds) exhibited Onset, Rhyme, Onset-Tone, and Rhyme-Tone priming effects across later SOAs. The younger children (5-year-olds), in contrast, exhibited Rhyme and Rhyme-Tone priming effects at the earliest SOA. For both groups of children, Rhyme and Rhyme-Tone priming effects were complexity-dependent. Our findings suggest that the phonological representations of Mandarin speakers develop from holistic units into those with an onset-based structure. Moreover, an incremental processing pattern at the sub-syllabic level gradually develops around the age of 9 or 10, though susceptibility to holistic phonological similarity is retained to some degree.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":"66 2","pages":"500-529"},"PeriodicalIF":1.8,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9578510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-06-01DOI: 10.1177/00238309221108647
Eun Jong Kong, Soyoung Kang
This study investigated individual differences in Korean adult learners' categorical perception of L2 English stops, with the aim of exploring the relationship of gradient categorization to perceptual sensitivity to acoustic cues and L2 proficiency. Korean young adult L2 learners of English (N = 49) participated in two speech perception tasks (visual analog scaling and forced-choice identification) in which they listened to English voiced and voiceless stops and Korean lax and aspirated stops with Voice Onset Time (VOT) and F0 manipulated to form a continuum. It was found that in both L1 and L2 stop perception, listeners' gradient category judgment was associated with greater reliance on language-specific redundant cues (i.e., F0 in L2 English and VOT in L1 Korean), and that in the perception of L2 stops, categorical listeners who tended to be less sensitive to F0 were the ones with a higher level of L2 English proficiency. The results suggest that the categorical manner of judging L2 stops reflects learners' better knowledge of L2-specific acoustic cue-weightings, based on which less relevant acoustic information is effectively suppressed.
{"title":"Individual Differences in Categorical Judgment of L2 Stops: A Link to Proficiency and Acoustic Cue-Weighting.","authors":"Eun Jong Kong, Soyoung Kang","doi":"10.1177/00238309221108647","DOIUrl":"https://doi.org/10.1177/00238309221108647","url":null,"abstract":"<p><p>This study investigated individual differences in Korean adult learners' categorical perception of L2 English stops, with the aim of exploring the relationship of gradient categorization to perceptual sensitivity to acoustic cues and L2 proficiency. Korean young adult L2 learners of English (<i>N</i> = 49) participated in two speech perception tasks (visual analog scaling and forced-choice identification) in which they listened to English voiced and voiceless stops and Korean lax and aspirated stops with Voice Onset Time (VOT) and F0 manipulated to form a continuum. It was found that in both L1 and L2 stop perception, listeners' gradient category judgment was associated with greater reliance on language-specific redundant cues (i.e., F0 in L2 English and VOT in L1 Korean), and that in the perception of L2 stops, categorical listeners who tended to be less sensitive to F0 were the ones with a higher level of L2 English proficiency. The results suggest that the categorical manner of judging L2 stops reflects learners' better knowledge of L2-specific acoustic cue-weightings, based on which less relevant acoustic information is effectively suppressed.</p>","PeriodicalId":51255,"journal":{"name":"Language and Speech","volume":"66 2","pages":"354-380"},"PeriodicalIF":1.8,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9534302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}