
Journal of Speech Language and Hearing Research: Latest Publications

The Roles of Language Ability and Language Dominance in Bilingual Parent-Child Language Alignment.
IF 2.2 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-05 | Epub Date: 2025-02-20 | DOI: 10.1044/2024_JSLHR-24-00240
Caitlyn Slawny, Emma Libersky, Margarita Kaushanskaya

Purpose: In the current study, we examined the alignment of language choice of bilingual parent-child dyads in play-based interactions.

Method: Forty-four bilingual Spanish-English parent-child dyads participated in a 10-min naturalistic free-play interaction to determine whether bilingual children and their parents respond to each other in the same language(s) across conversational turns and whether children's language ability and children's and parents' language dominance affect language alignment. Children's language ability was indexed by the Bilingual English-Spanish Assessment. Logistic regression was used to test the effects of children's language ability and children's and parents' language dominance on the alignment of language choice.
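The Method names logistic regression over turn-level alignment data. As a rough, hedged sketch only (not the authors' code; the file and the column names aligned, language_ability, child_dominance, and parent_dominance are hypothetical), a model of that form could be fit like this:

```python
# Minimal sketch: turn-level logistic regression of language alignment.
# The CSV and column names are hypothetical, not the study's variables.
import pandas as pd
import statsmodels.formula.api as smf

turns = pd.read_csv("alignment_turns.csv")  # one row per conversational turn (hypothetical file)
# aligned = 1 if the response is in the same language as the preceding turn
model = smf.logit(
    "aligned ~ language_ability + child_dominance + parent_dominance",
    data=turns,
).fit()
print(model.summary())
```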

Results: Results revealed that children and parents largely aligned their language choice and that children's and parents' language dominance, but not children's language ability, influenced alignment. Patterns of alignment differed between children and parents. Children aligned to their dominant language, and this was true for both English- and Spanish-dominant children. In contrast, English-dominant parents aligned equally to both languages, whereas Spanish-dominant parents aligned significantly more to Spanish.

Conclusion: Together, these findings suggest that bilinguals' alignment of language choice is deeply sensitive to language dominance effects in both children and adults but that parents may also choose their language strategically in conversations with their children.

Citations: 0
Methodological Stimulus Considerations for Auditory Emotion Recognition Test Design.
IF 2.2 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-05 | Epub Date: 2025-02-03 | DOI: 10.1044/2024_JSLHR-24-00189
Shae D Morgan, Bailey LaPaugh

Purpose: Many studies have investigated test design influences (e.g., number of stimuli, open- vs. closed-set tasks) on word recognition ability, but the impact that stimulus selection has on auditory emotion recognition has not been explored. This study assessed the impact of several stimulus parameters and test design methodologies on emotion recognition performance in order to optimize the stimuli used for auditory emotion recognition testing.

Method: Twenty-five young adult participants with normal or near-normal hearing completed four tasks evaluating methodological parameters that may affect emotion recognition performance. The four conditions assessed (a) word stimuli versus sentence stimuli, (b) the total number of stimuli and number of stimuli per emotion category, (c) the number of talkers, and (d) the number of emotion categories.

Results: Sentence stimuli yielded higher emotion recognition performance and greater performance variability than word stimuli. Recognition performance was independent of the number of stimuli per category, the number of talkers, and the number of emotion categories. As expected, task duration increased with the total number of stimuli. A test of auditory emotion recognition that combined these design methodologies yielded high performance with low variability for listeners with normal hearing.

Conclusions: Stimulus selection influences performance and test reliability for auditory emotion recognition. Researchers should consider these influences when designing future tests of auditory emotion recognition to ensure tests are able to accomplish the study's aims.

Supplemental material: https://doi.org/10.23641/asha.28270943.

Citations: 0
Talker Differences in Perceived Emotion in Clear and Conversational Speech.
IF 2.2 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-05 | Epub Date: 2025-02-18 | DOI: 10.1044/2024_JSLHR-24-00325
Elizabeth D Young, Shae D Morgan, Sarah Hargus Ferguson

Purpose: Previous work has shown that judgments of emotion differ between clear and conversational speech, particularly for perceived anger. The current study examines talker differences in perceived emotion for a database of talkers producing clear and conversational speech.

Method: A database of 41 talkers was used to assess talker differences in six emotion categories ("Anger," "Fear," "Disgust," "Happiness," "Sadness," and "Neutral"). Twenty-six healthy young adult listeners rated perceived emotion in 14 emotionally neutral sentences produced in clear and conversational styles by all talkers in the database. Generalized linear mixed-effects modeling was utilized to examine talker differences in all six emotion categories.

Results: There was a significant effect of speaking style for all emotion categories, and substantial talker differences existed after controlling for speaking style in all categories. Additionally, many emotion categories, including anger, had significant Talker × Style interactions. Perceived anger was significantly higher in clear speech compared to conversational speech for 85% of the talkers.

Conclusions: While there is a large speaking style effect for perceived anger, the magnitude of the effect varies between talkers. The perception of negatively valenced emotions in clear speech, including anger, may result in unintended interpersonal consequences for those utilizing clear speech as a communication facilitator. Further research is needed to examine potential acoustic sources of perceived anger in clear speech.

Supplemental material: https://doi.org/10.23641/asha.28304384.

Citations: 0
The Configuration of Hearing Loss Simulation Modulates Mismatch Responses and Discrimination to Mandarin Lexical Tone Contrasts.
IF 2.2 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-05 | Epub Date: 2025-02-18 | DOI: 10.1044/2024_JSLHR-23-00745
Ying-Ying Cheng, Chia-Ying Lee

Purpose: Objective measures of auditory capacity in the hearing loss population are crucial for cross-checking behavioral measures. Mismatch negativity (MMN) is an auditory event-related potential component indexing automatic change detection and reflecting speech discrimination performance. MMN can potentially serve as an objective measure of speech discrimination. This study examined whether the audibility of stimuli modulates MMN to Mandarin lexical tone contrasts by analyzing hearing loss simulation (HLS) in adults with normal hearing.

Method: HLS configuration was the between-subjects variable, with the sloping HLS simulating high-frequency hearing loss (more severe loss at frequencies > 1000 Hz) and the rising HLS simulating low-frequency hearing loss. An AX discrimination task was used to measure lexical tone discrimination by calculating d'. A multideviant oddball paradigm with large (high-level vs. low-dipping tones, T1-T3) and small (high-rising vs. low-dipping tones, T2-T3) deviant contrasts was employed to examine whether deviant size affects the sensitivity of MMN to stimulus audibility.
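For readers unfamiliar with d', in an AX (same-different) task it is the difference between the z-transformed hit rate and false-alarm rate. A generic sketch with made-up counts, not data from the study:

```python
# Generic signal-detection computation of d' for an AX discrimination task.
# Counts are illustrative only; the log-linear correction keeps rates off 0 and 1.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=46, misses=4, false_alarms=10, correct_rejections=40))
```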

Results: The results showed that the T1-T3 change elicited MMN in the sloping and rising HLS groups. The T2-T3 change elicited MMN in the sloping HLS group but a positive mismatch response in the rising HLS group. Furthermore, regression analysis indicates that more negative mismatch responses to T2-T3 predict better performance in discriminating T2-T3 contrasts.

Conclusions: MMN to the T2-T3 change is sensitive to reduced audibility at frequencies lower than 1000 Hz. This suggests that MMN has the potential to serve as an objective assessment for evaluating lexical tone discrimination in people with hearing loss.

Citations: 0
Variability of Preference-Based Adjustments on Hearing Aid Frequency-Gain Response.
IF 2.2 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-03-04 | DOI: 10.1044/2024_JSLHR-24-00215
Bertan Kursun, Chemay Shola, Isabella E Cunio, Lauren Langley, Yi Shen

Purpose: Although users can customize the frequency-gain response of hearing aids, the variability in their individual adjustments remains a concern. This study investigated the within-subject variability in the gain adjustments made within a single self-adjustment procedure.

Method: Two experiments were conducted with 20 older adults with mild-to-severe hearing loss. Participants used a two-dimensional touchscreen to adjust hearing aid amplification across six frequency bands (0.25-8 kHz) while listening to continuous speech in background noise. The two experiments tested two user interface designs that differed in their control-to-gain mapping. For each participant, the statistical properties of 30 repeated gain adjustments within a single self-adjustment procedure were analyzed.

Results: When participants made multiple gain adjustments, their preferred gain settings showed the highest variability in the 4- and 8-kHz frequency bands and the lowest variability in the 1- and 2-kHz bands, suggesting that midfrequency bands are weighted more heavily in their preferences than high frequencies. Additionally, significant correlations were observed for the preferred gains between the 0.25- and 0.5-kHz bands, between the 0.5- and 1-kHz bands, and between the 4- and 8-kHz bands. Lastly, the standard error of the preferred gain decreased as the number of trials increased, at a rate close to, though slightly shallower than, what would be expected for an invariant mean preference for most participants, suggesting convergent estimation of the underlying preference across trials.
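As a point of reference for the convergence claim above: if the underlying mean preference is invariant, the standard error of the cumulative mean preferred gain should fall roughly as 1/sqrt(n) with the number of adjustments. A small sketch with illustrative (not measured) gain values:

```python
# Standard error of the cumulative mean preferred gain after n adjustments,
# compared with the 1/sqrt(n) decay expected for an invariant mean preference.
# Gain values below are illustrative only.
import numpy as np

gains_db = np.array([12.0, 10.5, 13.2, 11.8, 12.4, 9.9, 12.9, 11.1, 12.2, 11.6])
for n in range(2, len(gains_db) + 1):
    sem = gains_db[:n].std(ddof=1) / np.sqrt(n)
    print(f"n={n:2d}  SEM={sem:5.2f} dB  1/sqrt(n)={1 / np.sqrt(n):.3f}")
```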

Conclusion: Self-adjustments of frequency-gain profiles are informative about the underlying preference; however, the contributions from various frequency bands are neither equal nor independent.

Supplemental material: https://doi.org/10.23641/asha.28405397.

Citations: 0
Laryngeal Aerodynamics, Acoustics, and Hypernasality in Children With Cleft Palate.
IF 2.2 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-02-28 | DOI: 10.1044/2024_JSLHR-24-00763
Robert Brinton Fujiki, John Munday, Rebecca Johnson, Susan L Thibeault

Objective: The objective of this study was to examine the relationship between laryngeal aerodynamics, acoustics, and hypernasality in children with cleft palate with or without cleft lip (CP ± L).

Method: This study used a prospectively performed cross-sectional design. Fifty-six children between the ages of 6 and 17 years with CP ± L participated (Mage = 11.7, SD = 3.4; male = 32, female = 24). Children were separated into four groups based on auditory-perceptual ratings of hypernasality made using the Cleft Audit Protocol for Speech-Augmented-Americleft Modification protocol. Laryngeal aerodynamic measures including subglottal pressure, transglottal airflow, laryngeal aerodynamic resistance (LAR), and phonation threshold pressure were collected. Acoustic measures of smoothed cepstral peak prominence (CPP) and low-to-high ratio on sustained vowels and connected speech were also considered. Analyses controlled for age, sex, auditory-perceptual ratings of voice quality, and speech intelligibility.
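As background on the aerodynamic measures listed above, laryngeal aerodynamic resistance is conventionally estimated as subglottal pressure divided by transglottal airflow. The snippet below shows that ratio with illustrative values, not data from this study:

```python
# Conventional pressure/flow estimate of laryngeal aerodynamic resistance.
# Values are illustrative only, not measurements from the study.
def laryngeal_resistance(pressure_cm_h2o: float, airflow_l_per_s: float) -> float:
    return pressure_cm_h2o / airflow_l_per_s

print(laryngeal_resistance(8.0, 0.15))  # about 53 cm H2O per L/s
```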

Results: Children with minimally or mildly hypernasal resonance demonstrated significantly increased subglottal pressure, reduced transglottal airflow, and increased LAR when compared with children with balanced or moderately hypernasal resonance. CPP on sustained vowels was significantly lower for children with moderate hypernasality than for all other groups, suggesting poorer voice quality. Other acoustic measures were within or near the normative pediatric range.

Conclusions: Children with CP ± L and minimally or mildly hypernasal resonance demonstrated aerodynamic voice measures indicative of vocal hyperfunction. These findings suggest that children with CP ± L may compensate for velopharyngeal dysfunction at the laryngeal level, thus increasing the risk of laryngeal pathology. Future studies should explore the relationship between laryngeal function and velopharyngeal port closure and consider how voice problems can be prevented or mitigated in children with CP ± L.

Citations: 0
Current Age and Language Use Impact Speech-in-Noise Differently for Monolingual and Bilingual Adults.
IF 2.2 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-02-28 | DOI: 10.1044/2024_JSLHR-24-00264
Rebecca E Bieber, Ian Phillips, Gregory M Ellis, Douglas S Brungart

Purpose: Some bilinguals may exhibit lower performance when recognizing speech in noise (SiN) in their second language (L2) compared to monolinguals in their first language. Poorer performance has been found mostly for late bilinguals (L2 acquired after childhood) listening to sentences containing linguistic context and less so for simultaneous/early bilinguals (L2 acquired during childhood) and when testing context-free stimuli. However, most previous studies tested younger participants, meaning little is known about interactions with age; the purpose of this study was to address this gap.

Method: Context-free SiN understanding was measured via the Modified Rhyme Test (MRT) in 3,803 young and middle-aged bilingual and monolingual adults (ages 18-57 years; 19.6% bilinguals, all L2 English) with normal to near-normal hearing. Bilingual adults included simultaneous (n = 462), early (n = 185), and late (n = 97) bilinguals. Performance on the MRT was measured with both accuracy and response time. A self-reported measure of current English use was also collected for bilinguals to evaluate its impact on MRT performance.

Results: Current age impacted MRT accuracy scores differently for each listener group. Relative to monolinguals, simultaneous and early bilinguals showed decreased performance with older age. Response times slowed with increasing current age at similar rates for all groups, despite faster overall response times for monolinguals. Among all bilingual listeners, greater current English language use predicted higher MRT accuracy. For simultaneous bilinguals, greater English use was associated with faster response times.

Conclusions: SiN outcomes in bilingual adults are impacted by age at time of testing and by fixed features of their language history (i.e., age of acquisition) as well as language practices, which can shift over time (i.e., current language use). Results support routine querying of language history and use in the audiology clinic.

Supplemental material: https://doi.org/10.23641/asha.28405430.

Citations: 0
Comparing the Effects of Sensory Tricks on Voice Symptoms in Patients With Laryngeal Dystonia and Essential Vocal Tremor.
IF 2.2 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-02-27 | DOI: 10.1044/2024_JSLHR-24-00476
Kaitlyn Dwenger, Nelson Roy, Skyler G Jennings, Marshall E Smith, Pamela Mathy, Kristina Simonyan, Julie M Barkmeier-Kraemer

Purpose: This pilot study systematically compared voice symptomatology across varied sensory trick conditions in those with laryngeal dystonia (LD), those with essential vocal tremor (EVT), and vocally normal controls (NCs). Sensory tricks are considered signature characteristics of dystonia and were hypothesized to reduce voice symptoms in those with LD compared to EVT and NC groups.

Method: Five participants from each group (LD, EVT, and NC) completed speech recordings under control and sensory trick conditions (delayed auditory feedback [DAF], vibrotactile stimulation [VTS], and nasoendoscopic recordings with and without topical anesthesia). Comparisons between groups and conditions were made using (a) listener ratings of voice quality in a paired-comparison paradigm (control vs. sensory condition), (b) participant-perceived vocal effort ratings, and (c) average smoothed cepstral peak prominence (CPPS).

Results: Participants with EVT displayed significantly worse listener ratings under most sensory trick conditions, whereas participants with LD were rated significantly worse for DAF and VTS conditions only. However, participant vocal effort ratings were similar across all sensory trick conditions. Average CPPS values generally supported listener ratings across conditions and speakers except during DAF, wherein CPPS values increased (i.e., measurably improved voice quality), whereas listener ratings indicated worsened voice quality for both voice disorder groups.

Conclusions: Outcomes of this study did not support the hypothesized influences of sensory trick conditions on LD voice symptoms, with both LD and EVT groups experiencing worsened symptoms under VTS and DAF conditions. These adverse effects on voice symptoms warrant further research to evaluate the neural pathways and associated sensorimotor response patterns that distinguish individuals with LD from those with EVT.

Supplemental material: https://doi.org/10.23641/asha.28462292.

Citations: 0
A Comparison of Item Acquisition and Response Generalization for Semantic Versus Phonological Treatment of Aphasia.
IF 2.2 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-02-26 | DOI: 10.1044/2024_JSLHR-24-00304
Deena Schwen Blackett, Sigfus Kristinsson, Grant Walker, Sara Sayers, Makayla Gibson, Janina Wilmskoetter, Dirk B den Ouden, Julius Fridriksson, Leonardo Bonilha

Purpose: The purpose of this work is to examine whether therapy-related improvements in trained versus untrained items (acquisition and response generalization, respectively) are differentially affected by phonological versus semantic language treatments and to investigate individual variables associated with treatment response.

Method: Sixty-three participants with chronic poststroke aphasia were included in this retrospective analysis of data from a large, multisite clinical trial with an unblinded cross-over design in which all participants underwent 3 weeks of semantic treatment and 3 weeks of phonological treatment. A linear mixed-effects model was used to examine treatment acquisition and generalization effects for the two treatment types. Multiple regression analyses were also conducted to examine individual participant factors associated with acquisition compared to generalization.
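The analysis described above crosses outcome type with treatment type in a linear mixed-effects model. A minimal sketch of that structure (the data file and column names are hypothetical, not the trial's variables):

```python
# Minimal sketch: naming change modeled with outcome type (acquisition vs.
# generalization) crossed with treatment type (semantic vs. phonological)
# and random intercepts per participant. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

scores = pd.read_csv("naming_change.csv")  # hypothetical file
model = smf.mixedlm(
    "naming_change ~ outcome_type * treatment_type",
    data=scores,
    groups=scores["participant"],
).fit()
print(model.summary())
```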

Results: There were main effects of outcome type (acquisition vs. response generalization) and treatment type (semantic vs. phonological) on posttreatment changes in naming, as well as an interaction between these factors: For acquisition, phonological treatment produced larger gains than semantic treatment, whereas for response generalization, semantic treatment produced slightly larger gains than phonological treatment. No individual factors were significantly associated with generalization gains. However, acquisition after phonological treatment was associated with less severe aphasia and higher nonverbal semantic processing abilities at baseline, whereas acquisition after semantic treatment was associated with apraxia of speech.

Conclusions: On average, phonological treatment may be more effective for acquiring trained items, whereas semantic treatment may be more effective for response generalization to untrained items. Moreover, acquisition gains are associated with individual baseline variables. These findings could have clinical implications for treatment planning.

Supplemental material: https://doi.org/10.23641/asha.28410212.

Citations: 0
Assessing Fundamental Frequency Variation in Speakers With Parkinson's Disease: Effects of Tracking Errors.
IF 2.2 | CAS Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2025-02-19 | DOI: 10.1044/2024_JSLHR-24-00381
Alena Portnova, Annalise Fletcher, Alan Wisler, Stephanie A Borrie

Purpose: Automatic measurements of fundamental frequency (F0) typically contain tracking errors that can be challenging to correct accurately. This study assessed the degree to which these errors change F0 summary statistics in speakers with Parkinson's disease (PD) and neurotypical adults. In addition, we include a case study examining how the removal of tracking errors influenced our ability to predict speech expressiveness, a perceptual outcome measure associated with dysarthria and PD. Several different statistical approaches for characterizing F0 variability were used to demonstrate the influence of tracking errors.

Method: Eight speakers with PD and eight neurotypical speakers were recorded reading The Caterpillar passage. F0 measurements were extracted in Praat and tracking errors were manually identified. The effect of tracking errors on F0 mean and standard deviation was statistically analyzed. Twenty listeners rated speech expressiveness across 80 sentences. The relationship between listener ratings and F0 variability was examined using different statistical approaches for characterizing F0 variability (with and without tracking errors).
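F0 extraction in Praat can also be scripted from Python through the praat-parselmouth package. The sketch below pulls an automatic pitch track and summarizes the voiced frames; the file name is hypothetical, and it does not reproduce the manual tracking-error correction described in the Method:

```python
# Automatic F0 extraction with praat-parselmouth (Praat's default pitch settings).
# The audio file is hypothetical; no manual tracking-error correction is applied.
import numpy as np
import parselmouth

snd = parselmouth.Sound("caterpillar_reading.wav")  # hypothetical recording
pitch = snd.to_pitch()                              # Praat defaults
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                     # drop unvoiced frames (F0 = 0)
print(f"mean F0 = {f0.mean():.1f} Hz, SD = {np.std(f0, ddof=1):.1f} Hz")
```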

Results: Measurements of F0 standard deviation, but not F0 mean, were significantly affected by tracking errors. Relationships between measurements of F0 variability and expressiveness were strengthened when tracking errors were removed from data analysis.

Conclusions: Tracking errors significantly alter F0 standard deviation values for both speakers with PD and neurotypical adults. Case study evidence also suggests that tracking errors can reduce the strength of relationships between F0 variability and perceptual outcome measures, such as speech expressiveness.

Citations: 0