An acquired deficit of intermodal temporal processing for audiovisual speech: A case study
Daniel E. Drebing, Jared Medina, H. Coslett, Jeffrey T. Shenton, R. Hamilton
Seeing and Perceiving, 25(1), 186 (2012). doi:10.1163/187847612X648152
Abstract
Integrating sensory information across modalities is necessary for a cohesive experience of the world; disrupting the ability to bind the multisensory stimuli arising from an event leads to a disjointed and confusing percept. We previously reported (Hamilton et al., 2006) a patient, AWF, who suffered an acute neural incident after which he displayed a distinct inability to integrate auditory and visual speech information. While our prior experiments with AWF suggested a deficit in audiovisual speech processing, they did not explore the hypothesis that his audiovisual integration deficit is restricted to speech. To test this hypothesis, we conducted a series of experiments assessing AWF's ability to integrate cross-modal information from both speech and non-speech events. AWF made temporal order judgments (TOJs) for videos of object noises (such as hands clapping) or speech in which the onsets of the auditory and visual streams were manipulated. While AWF performed worse than controls at judging even the most salient onset differences in the speech videos, he did not differ significantly from controls in making TOJs for the object videos. These results illustrate the possibility of disrupted intermodal binding for audiovisual speech events with spared binding for real-world, non-speech events.
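
To make the temporal order judgment (TOJ) paradigm concrete, the sketch below shows one common way such data are analyzed: responses collected across a range of audio-visual stimulus onset asynchronies (SOAs) are fit with a logistic psychometric function to estimate the point of subjective simultaneity (PSS) and the just-noticeable difference (JND). This is an illustrative Python sketch only; the SOA values and response counts are invented for demonstration, and the abstract does not state that this particular fitting procedure was used in the study.

    # Minimal TOJ analysis sketch (illustrative; not the authors' code or data).
    import numpy as np
    from scipy.optimize import curve_fit

    # SOAs in ms; negative = auditory onset leads, positive = visual onset leads.
    soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)

    # Hypothetical counts of "visual-first" responses out of 20 trials per SOA.
    visual_first = np.array([1, 3, 6, 10, 15, 18, 19])
    n_trials = 20
    p_visual_first = visual_first / n_trials

    def logistic(soa, pss, scale):
        """Probability of a 'visual-first' response as a function of SOA."""
        return 1.0 / (1.0 + np.exp(-(soa - pss) / scale))

    # Fit the psychometric function to the observed response proportions.
    (pss, scale), _ = curve_fit(logistic, soas, p_visual_first, p0=[0.0, 50.0])

    # JND: half the SOA distance between the 25% and 75% points of the fitted curve.
    jnd = scale * np.log(3.0)

    print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")

On this kind of analysis, a selective audiovisual speech deficit would show up as a flat or unreliable psychometric function (large JND) for speech videos alongside a normal-looking function for object videos.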