{"title":"在模态内和模态间整合部分信息:对口语和书面句子识别的贡献。","authors":"Kimberly G Smith, Daniel Fogerty","doi":"10.1044/2015_JSLHR-H-14-0272","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>This study evaluated the extent to which partial spoken or written information facilitates sentence recognition under degraded unimodal and multimodal conditions.</p><p><strong>Method: </strong>Twenty young adults with typical hearing completed sentence recognition tasks in unimodal and multimodal conditions across 3 proportions of preservation. In the unimodal condition, performance was examined when only interrupted text or interrupted speech stimuli were available. In the multimodal condition, performance was examined when both interrupted text and interrupted speech stimuli were concurrently presented. Sentence recognition scores were obtained from simultaneous and delayed response conditions.</p><p><strong>Results: </strong>Significantly better performance was obtained for unimodal speech-only compared with text-only conditions across all proportions preserved. The multimodal condition revealed better performance when responses were delayed. During simultaneous responses, participants received equal benefit from speech information when the text was moderately and significantly degraded. The benefit from text in degraded auditory environments occurred only when speech was highly degraded.</p><p><strong>Conclusions: </strong>The speech signal, compared with text, is robust against degradation likely due to its continuous, versus discrete, features. Allowing time for offline linguistic processing is beneficial for the recognition of partial sensory information in unimodal and multimodal conditions. Despite the perceptual differences between the 2 modalities, the results highlight the utility of multimodal speech + text signals.</p>","PeriodicalId":85125,"journal":{"name":"Marriage and family living","volume":"8 1","pages":"1805-17"},"PeriodicalIF":0.0000,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4987035/pdf/","citationCount":"0","resultStr":"{\"title\":\"Integration of Partial Information Within and Across Modalities: Contributions to Spoken and Written Sentence Recognition.\",\"authors\":\"Kimberly G Smith, Daniel Fogerty\",\"doi\":\"10.1044/2015_JSLHR-H-14-0272\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>This study evaluated the extent to which partial spoken or written information facilitates sentence recognition under degraded unimodal and multimodal conditions.</p><p><strong>Method: </strong>Twenty young adults with typical hearing completed sentence recognition tasks in unimodal and multimodal conditions across 3 proportions of preservation. In the unimodal condition, performance was examined when only interrupted text or interrupted speech stimuli were available. In the multimodal condition, performance was examined when both interrupted text and interrupted speech stimuli were concurrently presented. Sentence recognition scores were obtained from simultaneous and delayed response conditions.</p><p><strong>Results: </strong>Significantly better performance was obtained for unimodal speech-only compared with text-only conditions across all proportions preserved. The multimodal condition revealed better performance when responses were delayed. 
During simultaneous responses, participants received equal benefit from speech information when the text was moderately and significantly degraded. The benefit from text in degraded auditory environments occurred only when speech was highly degraded.</p><p><strong>Conclusions: </strong>The speech signal, compared with text, is robust against degradation likely due to its continuous, versus discrete, features. Allowing time for offline linguistic processing is beneficial for the recognition of partial sensory information in unimodal and multimodal conditions. Despite the perceptual differences between the 2 modalities, the results highlight the utility of multimodal speech + text signals.</p>\",\"PeriodicalId\":85125,\"journal\":{\"name\":\"Marriage and family living\",\"volume\":\"8 1\",\"pages\":\"1805-17\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4987035/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Marriage and family living\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1044/2015_JSLHR-H-14-0272\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Marriage and family living","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1044/2015_JSLHR-H-14-0272","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Integration of Partial Information Within and Across Modalities: Contributions to Spoken and Written Sentence Recognition.
Purpose: This study evaluated the extent to which partial spoken or written information facilitates sentence recognition under degraded unimodal and multimodal conditions.
Method: Twenty young adults with typical hearing completed sentence recognition tasks in unimodal and multimodal conditions across 3 proportions of preservation. In the unimodal condition, performance was examined when only interrupted text or interrupted speech stimuli were available. In the multimodal condition, performance was examined when both interrupted text and interrupted speech stimuli were concurrently presented. Sentence recognition scores were obtained from simultaneous and delayed response conditions.
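The abstract does not spell out how the stimuli were interrupted, so the following is a minimal sketch of one plausible way to generate such materials: periodic gating of a speech waveform and random word deletion for text, each at a target proportion preserved. The function names, the 2 Hz interruption rate, and word-level deletion for text are illustrative assumptions, not the authors' procedure.

```python
# Hedged sketch of "proportion preserved" manipulations (assumed parameters, not the study's).
import numpy as np

def interrupt_speech(signal: np.ndarray, sample_rate: int,
                     proportion_preserved: float,
                     interruption_rate_hz: float = 2.0) -> np.ndarray:
    """Periodically gate a waveform, keeping `proportion_preserved` of each on/off cycle."""
    cycle_len = int(sample_rate / interruption_rate_hz)   # samples per gating cycle
    keep_len = int(cycle_len * proportion_preserved)      # samples left audible per cycle
    gated = signal.copy()
    for start in range(0, len(signal), cycle_len):
        # Silence the tail of each cycle (the "interrupted" portion).
        gated[start + keep_len:start + cycle_len] = 0.0
    return gated

def interrupt_text(sentence: str, proportion_preserved: float,
                   rng: np.random.Generator) -> str:
    """Randomly delete words so that roughly `proportion_preserved` of them remain."""
    words = sentence.split()
    keep = rng.random(len(words)) < proportion_preserved
    return " ".join(w if k else "___" for w, k in zip(words, keep))

# Example: degrade a sentence and a synthetic 1-s waveform at 50% preservation.
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # stand-in for a speech signal
print(interrupt_text("the boy ran quickly down the hill", 0.5, rng))
gated = interrupt_speech(tone, sample_rate=16000, proportion_preserved=0.5)
```

In this sketch, the same proportion-preserved parameter controls degradation in both modalities, which mirrors the study's design of testing unimodal and multimodal recognition across matched proportions of preservation.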
Results: Significantly better performance was obtained for unimodal speech-only compared with text-only conditions across all proportions preserved. The multimodal condition revealed better performance when responses were delayed. During simultaneous responses, participants received equal benefit from speech information whether the text was moderately or severely degraded. The benefit from text in degraded auditory environments occurred only when speech was highly degraded.
Conclusions: The speech signal, compared with text, is robust against degradation, likely due to its continuous (vs. discrete) features. Allowing time for offline linguistic processing is beneficial for the recognition of partial sensory information in unimodal and multimodal conditions. Despite the perceptual differences between the 2 modalities, the results highlight the utility of multimodal speech + text signals.