Austronesian languages such as Sasak and Javanese have a pattern of morphological nasal substitution, where nasals alternate with homorganic oral obstruents—except that [s] is described as alternating with [ɲ], not with [n]. This appears to be an abstract morphophonological relation between [s] and [ɲ], where other parts of the paradigm show a concrete homorganic relation. Articulatory ultrasound data on productions of [t, n, ʨ, ɲ], along with [s] and its nasal counterpart, were collected from 10 Sasak and 8 Javanese speakers. Comparisons of lingual contours using a root-mean-square analysis were evaluated with linear mixed-effects regression models, a method that proves reliable for testing questions of phonological neutralization. In both languages, [t, n, s] exhibited a high degree of articulatory similarity, whereas postalveolar [ʨ] and its nasal counterpart [ɲ] exhibited less similarity. The nasal counterpart of [s] was identical in articulation to [ɲ]. This indicates an abstract, rather than concrete, relationship between [s] and its morphophonological nasal counterpart, with the two sounds not sharing articulatory place in either Sasak or Javanese.
{"title":"Phonological and phonetic properties of nasal substitution in Sasak and Javanese","authors":"D. Archangeli, J. Yip, Lang Qin, Albert Lee","doi":"10.5334/LABPHON.46","DOIUrl":"https://doi.org/10.5334/LABPHON.46","url":null,"abstract":"Austronesian languages such as Sasak and Javanese have a pattern of morphological nasal substitution, where nasals alternate with homorganic oral obstruents—except that [s] is described as alternating with [ɲ], not with [n]. This appears to be an abstract morphophonological relation between [s] and [ɲ] where other parts of the paradigm have a concrete homorganic relation. Articulatory ultrasound data were collected of productions of [t, n, ʨ, ɲ], along with [s] and its nasal counterpart from two languages, from 10 Sasak and 8 Javanese speakers. Comparisons of lingual contours using a root mean square analysis were evaluated with linear mixed-effects regression models, a method that proves reliable for testing questions of phonological neutralization. In both languages, [t, n, s] exhibit a high degree of articulatory similarity, whereas postalveolar [ʨ] and its nasal counterpart [ɲ] exhibited less similarity. The nasal counterpart of [s] was identical in articulation to [ɲ]. This indicates an abstract, rather than concrete, relationship between [s] and its morphophonological nasal counterpart, with the two sounds not sharing articulatory place in either Sasak or Javanese.","PeriodicalId":45128,"journal":{"name":"Laboratory Phonology","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42581596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Long-distance (or ‘transparent’) vowel harmony systems have frequently been considered ‘unnatural’ and analyzed as ‘crazy rules’ (Bach & Harms, 1972) because they violate the principle of strict locality. Articulatory explanations for the phonetic grounding of vowel harmony are unable to extend to non-local processes, and attempts to re-analyze cases of transparent harmony as strictly local have been largely unsuccessful. In this paper, I present experimental evidence suggesting that vowel harmony may be perceptually (as well as articulatorily) grounded, and that this source of phonetic grounding does in fact extend to long-distance as well as local harmony. In a series of four experiments, subjects were presented with a nonsense word followed by an isolated vowel, and asked to report whether the isolated vowel had occurred in the preceding word. Subjects were consistently faster and more accurate in nonsense words which exhibited vowel harmony along the relevant feature dimension, regardless of locality. A fourth experiment included a task requiring subjects to identify whether the vowel occurred in a specific syllable, and here too they showed better performance on items with vowel harmony along the relevant feature dimension. I argue that strict locality is not a necessary component of a phonetically grounded theory of vowel harmony, suggesting that long-distance harmony can be analyzed as an explicitly non-local process without abandoning phonetic grounding.
{"title":"Not crazy after all these years? Perceptual grounding for long-distance vowel harmony","authors":"Wendell A. Kimper","doi":"10.5334/LABPHON.47","DOIUrl":"https://doi.org/10.5334/LABPHON.47","url":null,"abstract":"Long-distance (or ‘transparent’) vowel harmony systems have frequently been considered ‘unnatural’ and analyzed as ‘crazy rules’ (Bach & Harms, 1972) because they violate the principle of strict locality. Articulatory explanations for the phonetic grounding of vowel harmony are unable to extend to non-local processes, and attempts to re-analyze cases of transparent harmony as strictly local have been largely unsuccessful. In this paper, I present experimental evidence suggesting that vowel harmony may be perceptually (as well as articulatorily) grounded, and that this source of phonetic grounding does in fact extend to long-distance as well as local harmony. In a series of four experiments, subjects were presented with a nonsense word followed by an isolated vowel, and asked to report whether the isolated vowel had occurred in the preceding word. Subjects were consistently faster and more accurate in nonsense words which exhibited vowel harmony along the relevant feature dimension, regardless of locality. A fourth experiment included a task requiring subjects to identify whether the vowel occurred in a specific syllable, and here too they showed better performance on items with vowel harmony along the relevant feature dimension. I argue that strict locality is not a necessary component of a phonetically grounded theory of vowel harmony, suggesting that long-distance harmony can be analyzed as an explicitly non-local process without abandoning phonetic grounding.","PeriodicalId":45128,"journal":{"name":"Laboratory Phonology","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2017-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47850231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an empirical analysis of /l/-darkening in English, using ultrasound tongue imaging data from five varieties spoken in the UK. The analysis of nearly 500 tokens from five participants provides hitherto absent instrumental evidence demonstrating that speakers may display both categorical allophony of light and dark variants and gradient phonetic effects coexisting in the same grammar. Results are interpreted through the modular architecture of the life cycle of phonological processes, whereby a phonological rule starts its life as a phonetically driven gradient process, over time stabilizing into a phonological process at the phrase level and advancing through the grammar. Not only does the life cycle make predictions about application at different levels of the grammar, it also predicts that stabilized phonological rules do not replace the phonetic processes from which they emerge, but typically coexist with them, a pattern which is supported in the data. Overall, this paper demonstrates that variation in English /l/ realization has been underestimated in the existing literature, and that we can observe phonetic, phonological, and morphosyntactic conditioning when accounting for a representative range of phonological environments across varieties.
{"title":"Categorical or gradient? An ultrasound investigation of /l/-darkening and vocalization in varieties of English","authors":"Danielle Turton","doi":"10.5334/LABPHON.35","DOIUrl":"https://doi.org/10.5334/LABPHON.35","url":null,"abstract":"This paper presents an empirical analysis of /l/-darkening in English, using ultrasound tongue imaging data from five varieties spoken in the UK. The analysis of near 500 tokens from five participants provides hitherto absent instrumental evidence demonstrating that speakers may display both categorical allophony of light and dark variants, and gradient phonetic effects coexisting in the same grammar. Results are interpreted through the modular architecture of the life cycle of phonological processes, whereby a phonological rule starts its life as a phonetically driven gradient process, over time stabilizing into a phonological process at the phrase level, and advancing through the grammar. Not only does the life cycle make predictions about application at different levels of the grammar, it also predicts that stabilized phonological rules do not replace the phonetic processes from which they emerge, but typically coexist with them, a pattern which is supported in the data. Overall, this paper demonstrates that variation in English /l/ realization has been underestimated in the existing literature, and that we can observe phonetic, phonological, and morphosyntactic conditioning when accounting for a representative range of phonological environments across varieties.","PeriodicalId":45128,"journal":{"name":"Laboratory Phonology","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2017-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42591031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Knowledge of phonotactics is commonly assumed to derive from the lexicon. However, computational studies have suggested that phonotactic constraints might arise before the lexicon is in place, in particular from co-occurrences in continuous speech. The current study presents two artificial language learning experiments aimed at testing whether phonotactic learning can take place in the absence of words. Dutch participants were presented with novel consonant constraints embedded in continuous artificial languages. Vowels occurred at random, which resulted in an absence of recurring word forms in the speech stream. In Experiment 1 participants with different training languages showed significantly different preferences on a set of novel test items. However, only one of the two languages resulted in preferences that were above chance-level performance. In Experiment 2 participants were exposed to a control language without novel statistical cues. Participants did not develop a preference for either phonotactic structure in the test items. An analysis of Dutch phonotactics indicated that the failure to induce novel phonotactics in one condition might have been due to interference from the native language. Our findings suggest that novel phonotactics can be learned from continuous speech, but participants have difficulty learning novel patterns that go against the native language.
{"title":"Learning novel phonotactics from exposure to continuous speech","authors":"Frans Adriaans, R. Kager","doi":"10.5334/LABPHON.20","DOIUrl":"https://doi.org/10.5334/LABPHON.20","url":null,"abstract":"Knowledge of phonotactics is commonly assumed to derive from the lexicon. However, computational studies have suggested that phonotactic constraints might arise before the lexicon is in place, in particular from co-occurrences in continuous speech. The current study presents two artificial language learning experiments aimed at testing whether phonotactic learning can take place in the absence of words. Dutch participants were presented with novel consonant constraints embedded in continuous artificial languages. Vowels occurred at random, which resulted in an absence of recurring word forms in the speech stream. In Experiment 1 participants with different training languages showed significantly different preferences on a set of novel test items. However, only one of the two languages resulted in preferences that were above chance-level performance. In Experiment 2 participants were exposed to a control language without novel statistical cues. Participants did not develop a preference for either phonotactic structure in the test items. An analysis of Dutch phonotactics indicated that the failure to induce novel phonotactics in one condition might have been due to interference from the native language. Our findings suggest that novel phonotactics can be learned from continuous speech, but participants have difficulty learning novel patterns that go against the native language.","PeriodicalId":45128,"journal":{"name":"Laboratory Phonology","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2017-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48469543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Production data have shown that one of the features distinguishing uptalk rises from question rises in New Zealand English (NZE) is the alignment point of the rise start, which is earlier in question utterances realized by younger speakers. Previous research has indicated that listeners are sensitive to this distinction in making a forced-choice decision as to whether an utterance is a statement or a question. NZE is also characterized by an ongoing merger of the NEAR and SQUARE diphthongs, with younger speakers more likely to realize the vowel in a word such as care with a closer starting point (as in [iɘ], overlapping with their realization of the NEAR vowel), whereas older speakers would have a more open starting point (as in [eɘ]). The current study uses the mouse-tracking paradigm to provide evidence that the realization of SQUARE with an innovative vs. a conservative variant in a word early in an utterance affects NZE listeners’ sensitivity firstly to a rise as a potential signal of an uptalked statement and secondly to the early alignment of the rise as a signal of a question. This finding indicates that the interpretation of prosodic variability can depend on speaker characteristics imputed from other sociophonetic cues.
{"title":"The interpretation of prosodic variability in the context of accompanying sociophonetic cues","authors":"P. Warren","doi":"10.5334/LABPHON.92","DOIUrl":"https://doi.org/10.5334/LABPHON.92","url":null,"abstract":"Production data have shown that one of the features distinguishing uptalk rises from question rises in New Zealand English (NZE) is the alignment point of the rise start, which is earlier in question utterances realized by younger speakers. Previous research has indicated that listeners are sensitive to this distinction in making a forced-choice decision as to whether an utterance is a statement or a question. NZE is also characterized by an ongoing merger of the NEAR and SQUARE diphthongs, with younger speakers more likely to realize the vowel in a word such as care with a closer starting point (as in [iɘ], overlapping with their realization of the NEAR vowel), whereas older speakers would have more open starting point (as in [eɘ]). The current study uses the mouse-tracking paradigm to provide evidence that the realization of SQUARE with an innovative vs. a conservative variant in a word early in an utterance affects NZE listeners’ sensitivity firstly to a rise as a potential signal of an uptalked statement and secondly to the early alignment of the rise as a signal of a question. This finding indicates that the interpretation of prosodic variability can depend on speaker characteristics imputed from other sociophonetic cues.","PeriodicalId":45128,"journal":{"name":"Laboratory Phonology","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2017-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45998882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article investigates the articulation of the thumb in flat handshapes (B handshapes) in Sign Language of the Netherlands. On the basis of phonological models of handshape, the hypothesis was generated that the thumb state is variable and will undergo coarticulatory influences of neighboring signs. This hypothesis was tested by investigating thumb articulation in signs with B handshapes that occur frequently in the Corpus NGT. Manual transcriptions were made of the thumb state in two dimensions and of the spreading of the fingers in a total of 728 tokens of 14 sign types, and likewise for the signs on the left and right of these targets, as produced by 61 signers. Linear mixed-effects regression (LME4) analyses showed a significant prediction of the thumb state in the target sign based on the thumb state in the preceding as well as following neighboring sign. Moreover, the degree of spreading of the other fingers in the target sign also influenced the position of the thumb. We conclude that there is evidence for phonological models of handshapes in sign languages that argue that not all fingers are relevant in all signs. Phonological feature specifications can single out specific fingers as the articulators, leaving other fingers unspecified. We thus argue that the standard term ‘handshape’ is in fact a misnomer, as it is typically not the shape of the whole hand that is specified in the lexicon.
{"title":"Coarticulation of handshape in Sign Language of the Netherlands: a corpus study","authors":"E. Ormel, O. Crasborn, G. Kootstra, A. Meijer","doi":"10.5334/LABPHON.45","DOIUrl":"https://doi.org/10.5334/LABPHON.45","url":null,"abstract":"This article investigates the articulation of the thumb in flat handshapes (B handshapes) in Sign Language of the Netherlands. On the basis of phonological models of handshape, the hypothesis was generated that the thumb state is variable and will undergo coarticulatory influences of neighboring signs. This hypothesis was tested by investigating thumb articulation in signs with B handshapes that occur frequently in the Corpus NGT. Manual transcriptions were made of the thumb state in two dimensions and of the spreading of the fingers in a total of 728 tokens of 14 sign types, and likewise for the signs on the left and right of these targets, as produced by 61 signers. Linear mixed-effects regression (LME4) analyses showed a significant prediction of the thumb state in the target sign based on the thumb state in the preceding as well as following neighboring sign. Moreover, the degree of spreading of the other fingers in the target sign also influenced the position of the thumb. We conclude that there is evidence for phonological models of handshapes in sign languages that argue that not all fingers are relevant in all signs. Phonological feature specifications can single out specific fingers as the articulators, leaving other fingers unspecified. We thus argue that the standard term ‘handshape’ is in fact a misnomer, as it is typically not the shape of the whole hand that is specified in the lexicon.","PeriodicalId":45128,"journal":{"name":"Laboratory Phonology","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2017-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45990415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Language and music share many rhythmic properties, such as variations in intensity and duration leading to repeating patterns. Perception of rhythmic properties may rely on cognitive networks that are shared between the two domains. If so, then variability in speech rhythm perception may relate to individual differences in musicality. To examine this possibility, the present study focuses on rhythmic grouping, which is assumed to be guided by a domain-general principle, the Iambic/Trochaic law, stating that sounds alternating in intensity are grouped as strong-weak, and sounds alternating in duration are grouped as weak-strong. German listeners completed a grouping task: They heard streams of syllables alternating in intensity, duration, or neither, and had to indicate whether they perceived a strong-weak or weak-strong pattern. Moreover, their music perception abilities were measured, and they filled out a questionnaire reporting their productive musical experience. Results showed that better musical rhythm perception ability was associated with more consistent rhythmic grouping of speech, while melody perception ability and productive musical experience were not. This suggests shared cognitive procedures in the perception of rhythm in music and speech. Also, the results highlight the relevance of considering individual differences in musicality when aiming to explain variability in prosody perception.
{"title":"Effects of Musicality on the Perception of Rhythmic Structure in Speech","authors":"Natalie Boll-Avetisyan, A. Bhatara, B. Höhle","doi":"10.5334/LABPHON.91","DOIUrl":"https://doi.org/10.5334/LABPHON.91","url":null,"abstract":"Language and music share many rhythmic properties, such as variations in intensity and duration leading to repeating patterns. Perception of rhythmic properties may rely on cognitive networks that are shared between the two domains. If so, then variability in speech rhythm perception may relate to individual differences in musicality. To examine this possibility, the present study focuses on rhythmic grouping, which is assumed to be guided by a domain-general principle, the Iambic/Trochaic law, stating that sounds alternating in intensity are grouped as strong-weak, and sounds alternating in duration are grouped as weak-strong. German listeners completed a grouping task: They heard streams of syllables alternating in intensity, duration, or neither, and had to indicate whether they perceived a strong-weak or weak-strong pattern. Moreover, their music perception abilities were measured, and they filled out a questionnaire reporting their productive musical experience. Results showed that better musical rhythm perception ability was associated with more consistent rhythmic grouping of speech, while melody perception ability and productive musical experience were not. This suggests shared cognitive procedures in the perception of rhythm in music and speech. Also, the results highlight the relevance of considering individual differences in musicality when aiming to explain variability in prosody perception.","PeriodicalId":45128,"journal":{"name":"Laboratory Phonology","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2017-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42171325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When hearing speech, listeners begin recognizing words before reaching the end of the word. Therefore, early sounds impact spoken word recognition before sounds later in the word. In languages like English, most morphophonological alternations affect the ends of words, but in some languages, morphophonology can alter the early sounds of a word. Scottish Gaelic, an endangered language, has a pattern of ‘initial consonant mutation’ that changes initial consonants: pòg ‘kiss’ begins with [pʰ], but phòg ‘kissed’ begins with [f]. This raises questions both of how listeners process words that might begin with a mutated consonant during spoken word recognition, and how listeners relate the mutated and unmutated forms to each other in the lexicon. We present three experiments to investigate these questions. A priming experiment shows that native speakers link the mutated and unmutated forms in the lexicon. A gating experiment shows that Gaelic listeners usually do not consider mutated forms as candidates during lexical recognition until there is enough evidence to force that interpretation. However, a phonetic identification experiment confirms that listeners can identify the mutated sounds correctly. Together, these experiments contribute to our understanding of how speakers represent and process a language with morphophonological alternations at word onset.
{"title":"Lexical representation and processing of word-initial morphological alternations: Scottish Gaelic mutation","authors":"Adam Ussishkin, N. Warner, I. Clayton, Dan Brenner, A. Carnie, Michael Hammond, Muriel Fisher","doi":"10.5334/LABPHON.22","DOIUrl":"https://doi.org/10.5334/LABPHON.22","url":null,"abstract":"When hearing speech, listeners begin recognizing words before reaching the end of the word. Therefore, early sounds impact spoken word recognition before sounds later in the word. In languages like English, most morphophonological alternations affect the ends of words, but in some languages, morphophonology can alter the early sounds of a word. Scottish Gaelic, an endangered language, has a pattern of ‘initial consonant mutation’ that changes initial consonants: P og ‘kiss’ begins with [p h ], but phog ‘kissed’ begins with [f]. This raises questions both of how listeners process words that might begin with a mutated consonant during spoken word recognition, and how listeners relate the mutated and unmutated forms to each other in the lexicon. We present three experiments to investigate these questions. A priming experiment shows that native speakers link the mutated and unmutated forms in the lexicon. A gating experiment shows that Gaelic listeners usually do not consider mutated forms as candidates during lexical recognition until there is enough evidence to force that interpretation. However, a phonetic identification experiment confirms that listeners can identify the mutated sounds correctly. Together, these experiments contribute to our understanding of how speakers represent and process a language with morphophonological alternations at word onset.","PeriodicalId":45128,"journal":{"name":"Laboratory Phonology","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2017-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42089887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The fronting of the high back vowels /uː/ and /ʊ/, as currently seen in Southern British English, is a rare opportunity to study two similar sound changes at different stages of their phonetic development: /uː/-fronting is a more advanced change than /ʊ/-fronting. Since the fronting of both vowels is restricted from applying before a following final /l/, e.g. in words like fool or pull, we can exploit the difference in the phonetic advancement of /uː/- and /ʊ/-fronting to illuminate the nature of ‘fuzzy contrasts’ affecting vowel+/l/ sequences in morphologically complex words. As recent results show that /uː/-fronting is partially limited in fool-ing (but not in monomorphemes like hula), we ask whether similar morphological constraints affect /ʊ/ followed by /l/ (e.g., bully vs. pull-ing). Simultaneously, we consider the question of what phonological generalisation best captures the interaction between vowel fronting, /l/-darkening, and morphological structure. We present ultrasound data from 20 speakers of SBE representing two age groups. The data show that morphologically conditioned contrasts are consistent for /uː/+/l/, but variable and limited in size for /ʊ/+/l/. We relate these findings to the debate on morphology-phonetics interactions and the emergence of phonological abstraction.
{"title":"Whence the fuzziness? Morphological effects in interacting sound changes in Southern British English","authors":"Patrycja Strycharczuk, J. Scobbie","doi":"10.5334/LABPHON.24","DOIUrl":"https://doi.org/10.5334/LABPHON.24","url":null,"abstract":"The fronting of the high-back, /u:/ and /U/, as currently seen in Southern British \u0000English, is a rare opportunity to study two similar sound changes at different stages of \u0000their phonetic development: /u:/-fronting is a more advanced change than /U/-fronting. \u0000Since the fronting in both vowels is restricted from applying before a following final /l/, \u0000e.g. in words like fool or pull, we can exploit the difference in the phonetic advance- \u0000ment of /u:/ and /U/-fronting to illuminate the nature of `fuzzy contrasts', affecting \u0000vowel+/l/ sequences in morphologically complex words. As recent results show that \u0000/u:/-fronting is partially limited in fool-ing (but not in monomorphemes like hula), we \u0000ask whether similar morphological constraints affect /U/ followed by /l/ (e.g. bully vs. \u0000pull-ing). Simultaneously, we consider the question of what phonological generalisation \u0000best captures the interaction between vowel fronting, /l/-darkening, and morphological \u0000structure. We present ultrasound data from 20 speakers of SBE representing two age \u0000groups. The data show that morphologically conditioned contrasts are consistent for \u0000/u:/+/l/, but variable and limited in size for /U/+/l/. We relate these findings to \u0000the debate on morphology-phonetics interactions and the emergence of phonological \u0000abstraction.","PeriodicalId":45128,"journal":{"name":"Laboratory Phonology","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2017-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43405638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A prominent pitch accent is known to trigger immediate contrastive interpretation of the accented referential expression. Previous experimental demonstrations of this effect, where [L+H* unaccented] contours led to earlier responses than [H* !H*] contours in contrastive contexts, may have benefited from the use of laboratory speech with stylized, homogeneous pitch contours, as well as from data collected from a uniform participant group (college students). The present study tested visitors to a science museum, who better represent the general public, comparing lab and spontaneous speech to replicate the contrast-evoking effect of prominent pitch accent. Across two eye-tracking experiments in which participants followed spoken instructions to decorate Christmas trees, spontaneous two-word [L+H* unaccented] contours led to faster eye-movements to contrastive ornament sets than [H* !H*] contours, with no delay relative to lab speech. The differences in the fixation functions were overall smaller than those in a previous study that used clear lab speech in richer contexts. Detailed acoustic analyses indicated that the lab speech tune types were distinguishable by any of several independent F0 measures on the adjective and by F0 slope. In contrast, no single phonetic measure on the spontaneous speech adjective distinguished between tune types, which were best classified according to independent noun-based measures. However, a non-linear combination of the adjective measures was shown to be equal to the noun measures in distinguishing between the [H* !H*] and [L+H* unaccented] tunes. The eye-movement data suggest that naive listeners were comparably sensitive to both lab and spontaneous prosodic cues on the adjective and made anticipatory eye-movements accordingly.
{"title":"Allophonic tunes of contrast: Lab and spontaneous speech lead to equivalent fixation responses in museum visitors","authors":"Kiwako Ito, Rory Turnbull, S. Speer","doi":"10.5334/LABPHON.86","DOIUrl":"https://doi.org/10.5334/LABPHON.86","url":null,"abstract":"A prominent pitch accent is known to trigger immediate contrastive interpretation of the accented referential expression. Previous experimental demonstrations of this effect, where [L+H* unaccented] contours led to an increase in earlier responses than [H* !H*] contours in contrastive context, may have benefited from the use of laboratory speech with stylized, homogenous pitch contours as well as data collected from a uniform participant group—college students. The present study tested visitors to a science museum, who better represent the general public, comparing lab and spontaneous speech to replicate the contrast-evoking effect of prominent pitch accent. Across two eye-tracking experiments where participants followed spoken instructions to decorate Christmas trees, spontaneous two-word [L+H* unaccented] contours led to faster eye-movements to contrastive ornament sets than [H* !H*] contours with no delay as compared to lab speech. The differences in the fixation functions were overall smaller than those in a previous study that used clear lab speech in richer contexts. Detailed acoustic analyses indicated that the lab speech tune types were distinguishable by any of several independent F0 measures on the adjective and by F0 slope. In contrast, no single phonetic measure on the spontaneous speech adjective distinguished between tune types, which were best classified according to independent noun-based measures. However, a non-linear combination of the adjective measures was shown to be equal to the noun measures in distinguishing between the [H* !H*] and [L+H* unaccented] tunes. The eye-movement data suggest that naive listeners were comparably sensitive to both lab and spontaneous prosodic cues on the adjective and made anticipatory eye-movements accordingly.","PeriodicalId":45128,"journal":{"name":"Laboratory Phonology","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2017-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46491197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}