This article explores deaf college students' knowledge of English wh-question formation in the context of government-binding theory and an associated learnability theory. The parameters of universal grammar (UG) that are relevant to wh-question formation are identified, and predictions are made regarding the learning of the English values of these parameters in accordance with the subset principle, which, it has been proposed, guides the acquisition of UG parameter values that define languages ordered as proper subsets. The results of two learnability tasks revealed that, despite years of exposure to English language input, many deaf learners have not internalized the positive evidence required to set the marked values of the wh-question parameters. This finding provides strong empirical support for the subset principle. Theoretical and educational implications are discussed.
{"title":"Learnability constraints on deaf learners' acquisition of English wh-questions.","authors":"G P Berent","doi":"10.1044/jshr.3903.625","DOIUrl":"https://doi.org/10.1044/jshr.3903.625","url":null,"abstract":"<p><p>This article explores deaf college students' knowledge of English wh-question formation in the context of government-binding theory and an associated learnability theory. The parameters of universal grammar (UG) that are relevant to wh-question formation are identified, and predictions are made regarding the learning of the English values of these parameters in accordance with the subset principle, which, it has been proposed, guides the acquisition of UG parameter values that define languages ordered as proper subsets. The results of two learnability tasks revealed that, despite years of exposure to English language input, many deaf learners have not internalized the positive evidence required to set the marked values of the wh-question parameters. This finding provides strong empirical support for the subset principle. Theoretical and educational implications are discussed.</p>","PeriodicalId":76022,"journal":{"name":"Journal of speech and hearing research","volume":"39 3","pages":"625-42"},"PeriodicalIF":0.0,"publicationDate":"1996-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1044/jshr.3903.625","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"19755153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The main purpose of the present study was to differentiate between people who stutter and control speakers regarding their ability to assemble motor plans and to prepare (and execute) muscle commands. Adult males who stutter, matched for age, gender, and educational level with a group of control speakers, were tested on naming words and symbols. In addition, their ability to encode and retrieve memory representations of symbol-word combinations was tested in a recognition task, using manual reaction times and sensitivity scores, as defined in signal detection theory, as performance measures. Group differences in muscle command preparation were assessed from electromyographic recordings of the upper lip and lower lip. Results indicated neither an interaction between group and word-size effects in choice reaction times nor a group effect in the ability to recognize previously learned symbol-word combinations. However, the groups differed significantly in the timing of peak amplitudes in the integrated electromyographic signals of the upper lip and lower lip (IEMG peak latency). These findings question the claim that people who stutter have problems in creating abstract motor plans for speech. In addition, it is argued that the group differences in IEMG peak latency found in the present study might be better understood in terms of motor control strategies than in terms of motor control deficits.
{"title":"From planning to articulation in speech production: what differentiates a person who stutters from a person who does not stutter?","authors":"P H van Lieshout, W Hulstijn, H F Peters","doi":"10.1044/jshr.3903.546","DOIUrl":"https://doi.org/10.1044/jshr.3903.546","url":null,"abstract":"<p><p>The main purpose of the present study was to differentiate between people who stutter and control speakers regarding their ability to assemble motor plans and to prepare (and execute) muscle commands. Adult males who stutter, matched for age, gender, and educational level with a group of control speakers, were tested on naming words and symbols. In addition, their ability to encode and retrieve memory representations of combinations of a symbol and a word, was tested in a recognition task, using manual reaction times and sensitivity scores, as defined in signal detection theory, as performance measures. Group differences in muscle command preparation were assessed from electromyographic recordings of upper lip and lower lip. Results indicated no interaction between group and word size effects in choice reaction times or a group effect in the ability to recognize previously learned symbol-word combinations. However, they were significantly different in the timing of peak amplitudes in the integrated electromyographic signals of upper lip and lower lip (IEMG peak latency). Findings question the claim that people who stutter have problems in creating abstract motor plans for speech. In addition, it is argued that the group differences in IEMG peak latency that were found in the present study might be better understood in terms of motor control strategies than in terms of motor control deficits.</p>","PeriodicalId":76022,"journal":{"name":"Journal of speech and hearing research","volume":"39 3","pages":"546-64"},"PeriodicalIF":0.0,"publicationDate":"1996-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1044/jshr.3903.546","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"19755246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A variety of approaches has been used to classify the status of adult subjects in familial studies of developmental language disorders. In this report, we directly compare the results of four different methods that appear in the research literature. Two of the approaches rely on case history reports, and two are performance-based methods. Subjects included 24 parents (12 mothers, 12 fathers) of children with developmental language disorders and 24 unrelated adult control subjects (12 female, 12 male) who completed case history items and standardized language testing designed for classification purposes. All classification methods identified more parents than control subjects as "affected". However, classification by case history methods resulted in fewer affected adults than classification through standardized testing. This outcome suggests that the variability in classification rates in studies to date may be the result of method rather than subject sample differences.
{"title":"Classification of adults for family studies of developmental language disorders.","authors":"E Plante, K Shenkman, M M Clark","doi":"10.1044/jshr.3903.661","DOIUrl":"https://doi.org/10.1044/jshr.3903.661","url":null,"abstract":"<p><p>A variety of approaches has been used to classify the status of adult subjects in familial studies of developmental language disorders. In this report, we directly compare the results of four different methods that appear in the research literature. Two of the approaches rely on case history reports, and two are performance-based methods. Subjects included 24 parents (12 mothers, 12 fathers) of children with developmental language disorders and 24 unrelated adult control subjects (12 female, 12 male) who completed case history items and standardized language testing designed for classification purposes. All classification methods identified more parents than control subjects as \"affected\". However, classification by case history methods resulted in fewer affected adults than classification through standardized testing. This outcome suggests that the variability in classification rates in studies to date may be the result of method rather than subject sample differences.</p>","PeriodicalId":76022,"journal":{"name":"Journal of speech and hearing research","volume":"39 3","pages":"661-7"},"PeriodicalIF":0.0,"publicationDate":"1996-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"19754428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study investigated speaking rate and voice onset time (VOT) in speech produced during simultaneous communication (SC) by speakers with normal hearing. Stimulus words initiated with voiced and voiceless plosives were embedded in a sentence that was spoken and produced with SC. VOT measures were calculated from acoustic recordings and results indicated significant differences between speech-only and SC conditions, with speech produced during SC demonstrating both slower speaking rate and increased VOT of voiceless consonants. VOTs produced during both SC and speech-only conditions followed English voicing rules and varied appropriately with place of articulation. The somewhat enlarged voicing contrast during SC was consistent with previous findings regarding the influence of rate changes on the temporal fine structure of speech (Miller, 1987) and was similar to the voicing contrast results reported for clear speech by Picheny, Durlach, and Braida (1986).
{"title":"Voice onset time in speech produced during simultaneous communication.","authors":"N Schiavetti, R L Whitehead, D E Metz, B Whitehead, M Mignerey","doi":"10.1044/jshr.3903.565","DOIUrl":"https://doi.org/10.1044/jshr.3903.565","url":null,"abstract":"<p><p>This study investigated speaking rate and voice onset time (VOT) in speech produced during simultaneous communication (SC) by speakers with normal hearing. Stimulus words initiated with voiced and voiceless plosives were embedded in a sentence that was spoken and produced with SC. VOT measures were calculated from acoustic recordings and results indicated significant differences between speech-only and SC conditions, with speech produced during SC demonstrating both slower speaking rate and increased VOT of voiceless consonants. VOTs produced during both SC and speech-only conditions followed English voicing rules and varied appropriately with place of articulation. The somewhat enlarged voicing contrast during SC was consistent with previous findings regarding the influence of rate changes on the temporal fine structure of speech (Miller, 1987) and was similar to the voicing contrast results reported for clear speech by Picheny, Durlach, and Braida (1986).</p>","PeriodicalId":76022,"journal":{"name":"Journal of speech and hearing research","volume":"39 3","pages":"565-72"},"PeriodicalIF":0.0,"publicationDate":"1996-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1044/jshr.3903.565","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"19755147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In an earlier study, we evaluated the effectiveness of several acoustic measures in predicting breathiness ratings for sustained vowels spoken by nonpathological talkers who were asked to produce nonbreathy, moderately breathy, and very breathy phonation (Hillenbrand, Cleveland, & Erickson, 1994). The purpose of the present study was to extend these results to speakers with laryngeal pathologies and to conduct tests using connected speech in addition to sustained vowels. Breathiness ratings were obtained from a sustained vowel and a 12-word sentence spoken by 20 pathological and 5 nonpathological talkers. Acoustic measures were made of (a) signal periodicity, (b) first harmonic amplitude, and (c) spectral tilt. For the sustained vowels, a frequency domain measure of periodicity provided the most accurate predictions of perceived breathiness, accounting for 92% of the variance in breathiness ratings. The relative amplitude of the first harmonic and two measures of spectral tilt correlated moderately with breathiness ratings. For the sentences, both signal periodicity and spectral tilt provided accurate predictions of breathiness ratings, accounting for 70%-85% of the variance.
{"title":"Acoustic correlates of breathy vocal quality: dysphonic voices and continuous speech.","authors":"J Hillenbrand, R A Houde","doi":"10.1044/jshr.3902.311","DOIUrl":"https://doi.org/10.1044/jshr.3902.311","url":null,"abstract":"<p><p>In an earlier study, we evaluated the effectiveness of several acoustic measures in predicting breathiness ratings for sustained vowels spoken by nonpathological talkers who were asked to produce nonbreathy, moderately breathy, and very breathy phonation (Hillenbrand, Cleveland, & Erickson, 1994). The purpose of the present study was to extend these results to speakers with laryngeal pathologies and to conduct tests using connected speech in addition to sustained vowels. Breathiness ratings were obtained from a sustained vowel and a 12-word sentence spoken by 20 pathological and 5 nonpathological talkers. Acoustic measures were made of (a) signal periodicity, (b) first harmonic amplitude, and (c) spectral tilt. For the sustained vowels, a frequency domain measure of periodicity provided the most accurate predictions of perceived breathiness, accounting for 92% of the variance in breathiness ratings. The relative amplitude of the first harmonic and two measures of spectral tilt correlated moderately with breathiness ratings. For the sentences, both signal periodicity and spectral tilt provided accurate predictions of breathiness ratings, accounting for 70%-85% of the variance.</p>","PeriodicalId":76022,"journal":{"name":"Journal of speech and hearing research","volume":"39 2","pages":"311-21"},"PeriodicalIF":0.0,"publicationDate":"1996-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1044/jshr.3902.311","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"19703749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although noise may be innocuous in many vocational environments, there is growing concern in industry that it can reach hazardous levels when amplified by hearing aids. This study examined the daily noise exposures associated with hearing aid use in industry, using both laboratory and on-site measurements in which hearing aids were coupled to the microphone of an integrating sound level meter or dosimeter. The former method involved recorded railroad and manufacturing noise and a Bruel and Kjaer 4128 Head and Torso Simulator. In the latter procedure, a worker wore one of three hearing aids coupled to a dosimeter during 8-hour shifts in a manufacturing plant. Both methods demonstrated that even when amplified by mild-gain hearing aids, noise exposures rose from time-weighted averages near 80 dBA to well above the OSHA maximum of 90 dBA. The OSHA maximum was also exceeded when moderate- and high-gain instruments were worn in non-occupational listening environments. The results suggest that current OSHA regulations, which limit noise exposure in the sound field, are inappropriate for hearing aid users.
{"title":"Noise exposure associated with hearing aid use in industry.","authors":"T G Dolan, J F Maurer","doi":"10.1044/jshr.3902.251","DOIUrl":"https://doi.org/10.1044/jshr.3902.251","url":null,"abstract":"<p><p>Although noise may be innocuous in many vocational environments, there is a growing concern in industry that it can reach hazardous levels when amplified by hearing aids. This study examined the daily noise exposures associated with hearing aid use in industry. This was done by both laboratory and site measurements in which hearing aids were coupled to the microphone of an integrating sound level meter or dosimeter. The former method involved the use of recorded railroad and manufacturing noise and a Bruel and Kjaer 4128 Head and Torso simulator. In the latter procedure, a worker wore one of three hearing aids coupled to a dosimeter during 8-hour shifts in a manufacturing plant. Both methods demonstrated that even when amplified by mild-gain hearing aids, noise exposures rose from time-weighted averages near 80 dBA to well above the OSHA maximum of 90 dBA. The OSHA maximum was also exceeded when moderate and high gain instruments were worn in non-occupational listening environments. The results suggest that current OSHA regulations that limit noise exposure in sound field are inappropriate for hearing aid users.</p>","PeriodicalId":76022,"journal":{"name":"Journal of speech and hearing research","volume":"39 2","pages":"251-60"},"PeriodicalIF":0.0,"publicationDate":"1996-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1044/jshr.3902.251","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"19702548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The goal of the present experiment was to determine if stuttering is associated with unusually high levels of activity in laryngeal muscles. Qualitative and quantitative analyses of thyroarytenoid and cricothyroid recordings from 4 stuttering and 3 nonstuttering adults revealed the following: Compared to periods of fluent speech, intervals of disfluent speech are not typically characterized by higher levels of activity in these muscles; and when EMG levels during conversational speech are compared to maximal activation levels for these muscles (e.g., those observed during singing and the Valsalva maneuver), normally fluent adults show robust and sometimes near maximal recruitment during conversational speech. The adults who stutter had a lower operating range for these muscles during conversational speech, and their disfluencies did not produce relatively high activation levels. In summary, the present data require us to reject the claim that adults with a history of chronic stuttering routinely produce excessive levels of intrinsic laryngeal muscle activity. These results suggest that the use of botulinum toxin injections into the vocal folds to treat stuttering should be questioned.
{"title":"Activity of intrinsic laryngeal muscles in fluent and disfluent speech.","authors":"A Smith, M Denny, L A Shaffer, E M Kelly, M Hirano","doi":"10.1044/jshr.3902.329","DOIUrl":"https://doi.org/10.1044/jshr.3902.329","url":null,"abstract":"<p><p>The goal of the present experiment was to determine if stuttering is associated with unusually high levels of activity in laryngeal muscles. Qualitative and quantitative analyses of thyroarytenoid and cricothyroid recordings from 4 stuttering and 3 nonstuttering adults revealed the following: Compared to periods of fluent speech, intervals of disfluent speech are not typically characterized by higher levels of activity in these muscles; and when EMG levels during conversational speech are compared to maximal activation levels for these muscles (e.g., those observed during singing and the Valsalva maneuver), normally fluent adults show robust and sometimes near maximal recruitment during conversational speech. The adults who stutter had a lower operating range for these muscles during conversational speech, and their disfluencies did not produce relatively high activation levels. In summary, the present data require us to reject the claim that adults with a history of chronic stuttering routinely produce excessive levels of intrinsic laryngeal muscle activity. These results suggest that the use of botulinum toxin injections into the vocal folds to treat stuttering should be questioned.</p>","PeriodicalId":76022,"journal":{"name":"Journal of speech and hearing research","volume":"39 2","pages":"329-48"},"PeriodicalIF":0.0,"publicationDate":"1996-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1044/jshr.3902.329","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"19703031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study compares the Nucleus F0F1F2 and F0F1F2B3B4B5 (also known as "Multipeak" or "Mpeak") processing schemes in 17 patients wearing the Mini Speech Processor. All patients had at least 18 months of implant experience using the F0F1F2 processing strategy. For this study, they were switched to the F0F1F2B3B4B5 processing strategy for 3 months. They then returned to the F0F1F2 strategy for 3 months, used the F0F1F2B3B4B5 strategy again for 3 months, and finally used the F0F1F2 strategy for 3 months. Performance was evaluated with both schemes after each interval, using speech recognition tests and subjective ratings. Overall, differences between the results for the two processing schemes were not large. Average performance was somewhat better with the F0F1F2B3B4B5 strategy for word and sentence identification, but not for any of the other speech measures. Superior performance was observed in 8 patients with the F0F1F2B3B4B5 strategy. However, 6 of these 8 individuals were significantly better on only one of the six speech measures in the test battery; the other 2 performed better on two of the measures. Superior performance was also observed in 3 patients with the F0F1F2 strategy for consonant recognition. For the remaining patients, there was little difference in performance between the two strategies. Information transmission analyses indicated that the F0F1F2B3B4B5 strategy transmitted consonant duration and frication cues more efficiently than F0F1F2. Experience with one strategy appeared to benefit performance with the other.
{"title":"A within-subject comparison of adult patients using the Nucleus F0F1F2 and F0F1F2B3B4B5 speech processing strategies.","authors":"A J Parkinson, R S Tyler, G G Woodworth, M W Lowder, B J Gantz","doi":"10.1044/jshr.3902.261","DOIUrl":"https://doi.org/10.1044/jshr.3902.261","url":null,"abstract":"<p><p>This study compares the Nucleus F0F1F2 and F0F1F2B3B4B5 (also known as \"Multipeak\") of \"Mpeak\") processing schemes in 17 patients wearing the Mini Speech Processor. All patients had at least 18 months implant experience using the F0F1F2 processing strategy. For this study, they were switched to the F0F1F2B3B4B5 processing strategy for 3 months. They then returned to using the F0F1F2 strategy for 3 months, then used the F0F1F2B3B4B5 strategy again for 3 months, and lastly used the F0F1F2 strategy for 3 months. Performance' was evaluated with both schemes after each interval, using speech recognition tests and subjective ratings. Overall, differences between the results for the two processing schemes were not large. Average performance was somewhat better for the F0F1F2B3B4B5 strategy for word and sentence identification, but not for any of the other speech measures. Superior performance was observed in 8 patients with the F0F1F2B3B4B5 strategy. However, 6 of the 8 individuals were significantly better on only one of the six speech measures in the test battery. The other 2 patients performed better on two of the speech measures. Superior performance was also observed in 3 patients with F0F1F2 strategy for consonant recognition. For the remaining patients, there was little difference in their performance with the two strategies. Information transmission analyses indicated that the F0F1F2B3B4B5 strategy transmitted consonant duration and frication cues more efficiently than F0F1F2. Experience with one strategy appeared to benefit performance with the other strategy.</p>","PeriodicalId":76022,"journal":{"name":"Journal of speech and hearing research","volume":"39 2","pages":"261-77"},"PeriodicalIF":0.0,"publicationDate":"1996-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"19703744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Consonant-vowel productions at two distinct stages of language development were studied in a single female child. At 12 months, canonical babbling syllables (N = 144) identified by a panel of listeners as comprising [bV], [dV], and [gV] tokens were acoustically analyzed by measuring F2 transition onset and F2 midvowel frequencies and plotting their relationship as locus equations for each stop category. A regression analysis performed on these scatterplots revealed differential slopes and y-intercepts as a function of stop place. The same analysis was performed 9 months later on CV utterances (N = 243) produced as syllable-initial segments of real words by the same child. Whereas labial and velar locus equation parameters moved toward more adult-like values, alveolar slope and y-intercept moved away from adult values, in the direction of decreased coarticulation between vowel and consonant. There was greater scatter of data points around the regression line for the production of words than for babbling. These results are compared to locus equations obtained from 3- to 5-year-olds and adults. Locus equations appear to be useful as an empirical developmental probe to document how CV productions gradually approach adult categorical standards.
{"title":"Consonant-vowel interdependencies in babbling and early words: preliminary examination of a locus equation approach.","authors":"H M Sussman, F D Minifie, E H Buder, C Stoel-Gammon, J Smith","doi":"10.1044/jshr.3902.424","DOIUrl":"https://doi.org/10.1044/jshr.3902.424","url":null,"abstract":"<p><p>Consonant-vowel productions at two distinct stages of language development were studied in a single female child. At 12 months canonical babbling syllables (N = 144) identified by a panel of listeners as comprising [bV], [dV], and [gv] tokens were acoustically analyzed by measuring F2 transition onset and F2 midvowel frequencies and plotting their relationship as locus equations for each stop category. A regression analysis performed on these scatterplots revealed differential slopes and y-intercepts as a function of stop place. The same analysis was performed 9 months later on CV utterances (N = 243) produced as syllable-initial segments of real words by the same child. Whereas labial and velar locus equation parameters moved toward more adult-like values, alveolar slope and y-intercept moved away from adult values and more in the direction of decreased coarticulation between vowel and consonant. There was greater scatter of data points around the regression line for production of words compared to babbling. These results are compared to locus equations obtained from 3-5-year-olds and adults. Locus equations appear to be useful as an empirical developmental probe to document how CV productions gradually approach adult categorical standards.</p>","PeriodicalId":76022,"journal":{"name":"Journal of speech and hearing research","volume":"39 2","pages":"424-33"},"PeriodicalIF":0.0,"publicationDate":"1996-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1044/jshr.3902.424","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"19702961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prosodic speech cues for rhythm, stress, and intonation are related primarily to variations in intensity, duration, and fundamental frequency. Because these cues exploit temporal properties of the speech waveform, they are likely to be represented broadly across the speech spectrum. To determine the relative importance of different frequency regions for the recognition of prosodic cues, identification of four prosodic features (syllable number, syllabic stress, sentence intonation, and phrase boundary location) was evaluated under six filter conditions spanning the range from 200 to 6100 Hz. Each filter condition had an equal articulation index weight (AI = 0.01), corresponding to an isolated-word recognition probability p(C) of approximately 0.40. Results obtained with normally hearing subjects showed an interaction between filter condition and the identification of specific prosodic features. For example, information from high-frequency regions of speech was particularly useful in the identification of syllable number and stress, whereas information from low-frequency regions was helpful in identifying intonation patterns. In spite of these spectral differences, listeners overall performed remarkably well in identifying prosodic patterns, although individual differences were apparent; some subjects achieved equivalent levels of performance across all six filter conditions. These results are discussed in relation to auditory and auditory-visual speech recognition.
{"title":"Spectral distribution of prosodic information.","authors":"K W Grant, B E Walden","doi":"10.1044/jshr.3902.228","DOIUrl":"https://doi.org/10.1044/jshr.3902.228","url":null,"abstract":"<p><p>Prosodic speech cues for rhythm, stress, and intonation are related primarily to variations in intensity, duration, and fundamental frequency. Because these cues make use of temporal properties of the speech waveform they are likely to be represented broadly across the speech spectrum. In order to determine the relative importance of different frequency regions for the recognition of prosodic cues, identification of four prosodic features, syllable number, syllabic stress, sentence intonation, and phrase boundary location, was evaluated under six filter conditions spanning the range from 200-6100 Hz. Each filter condition had equal articulation index (AI) weights, AI = 0.01; p(C)isolated words approximately equal to 0.40. Results obtained with normally hearing subjects showed that there was an interaction between filter condition and the identification of specific prosodic features. For example, information from high-frequency regions of speech was particularly useful in the identification of syllable number and stress, whereas information from low-frequency regions was helpful in identifying intonation patterns. In spite of these spectral differences, overall listeners performed remarkably well in identifying prosodic patterns, although individual differences were apparent. For some subjects, equivalent levels of performance across the six filter conditions were achieved. These results are discussed in relation to auditory and auditory-visual speech recognition.</p>","PeriodicalId":76022,"journal":{"name":"Journal of speech and hearing research","volume":"39 2","pages":"228-38"},"PeriodicalIF":0.0,"publicationDate":"1996-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1044/jshr.3902.228","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"19702544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}