An acquired deficit of intermodal temporal processing for audiovisual speech: A case study

Daniel E. Drebing, Jared Medina, H. Coslett, Jeffrey T. Shenton, R. Hamilton
{"title":"An acquired deficit of intermodal temporal processing for audiovisual speech: A case study","authors":"Daniel E. Drebing, Jared Medina, H. Coslett, Jeffrey T. Shenton, R. Hamilton","doi":"10.1163/187847612X648152","DOIUrl":null,"url":null,"abstract":"Integrating sensory information across modalities is necessary for a cohesive experience of the world; disrupting the ability to bind the multisensory stimuli arising from an event leads to a disjointed and confusing percept. We previously reported (Hamilton et al., 2006) a patient, AWF, who suffered an acute neural incident after which he displayed a distinct inability to integrate auditory and visual speech information. While our prior experiments involving AWF suggested that he had a deficit of audiovisual speech processing, they did not explore the hypothesis that his deficits in audiovisual integration are restricted to speech. In order to test this notion, we conducted a series of experiments aimed at testing AWF’s ability to integrate cross-modal information from both speech and non-speech events. AWF was tasked with making temporal order judgments (TOJs) for videos of object noises (such as hands clapping) or speech, wherein the onsets of auditory and visual information were manipulated. Results from the experiments show that while AWF performed worse than controls in his ability to accurately judge even the most salient onset differences for speech videos, he did not differ significantly from controls in his ability to make TOJs for the object videos. These results illustrate the possibility of disruption of intermodal binding for audiovisual speech events with spared binding for real-world, non-speech events.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"186-186"},"PeriodicalIF":0.0000,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648152","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Seeing and Perceiving","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1163/187847612X648152","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Integrating sensory information across modalities is necessary for a cohesive experience of the world; disrupting the ability to bind the multisensory stimuli arising from an event leads to a disjointed and confusing percept. We previously reported (Hamilton et al., 2006) a patient, AWF, who suffered an acute neural incident after which he displayed a distinct inability to integrate auditory and visual speech information. While our prior experiments involving AWF suggested that he had a deficit of audiovisual speech processing, they did not explore the hypothesis that his deficits in audiovisual integration are restricted to speech. In order to test this notion, we conducted a series of experiments aimed at testing AWF’s ability to integrate cross-modal information from both speech and non-speech events. AWF was tasked with making temporal order judgments (TOJs) for videos of object noises (such as hands clapping) or speech, wherein the onsets of auditory and visual information were manipulated. Results from the experiments show that while AWF performed worse than controls in his ability to accurately judge even the most salient onset differences for speech videos, he did not differ significantly from controls in his ability to make TOJs for the object videos. These results illustrate the possibility of disruption of intermodal binding for audiovisual speech events with spared binding for real-world, non-speech events.
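The abstract does not include analysis details, but the temporal order judgment (TOJ) paradigm it describes is conventionally analyzed by fitting a psychometric function to the proportion of "audio-first" responses across stimulus onset asynchronies (SOAs), yielding a point of subjective simultaneity (PSS) and a just-noticeable difference (JND). The sketch below illustrates that standard analysis; the SOA values, response proportions, and starting parameters are invented for illustration and are not taken from the study.

```python
# Minimal sketch of a standard TOJ analysis (not the authors' code): fit a
# cumulative Gaussian to the proportion of "audio-first" responses across
# SOAs to estimate the PSS and JND. All data below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical SOAs in ms (negative = audio leads, positive = video leads)
soa_ms = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])
# Hypothetical proportion of trials judged "audio first" at each SOA
p_audio_first = np.array([0.97, 0.90, 0.75, 0.60, 0.48, 0.35, 0.22, 0.08, 0.03])

def cum_gauss(soa, pss, sigma):
    """Cumulative Gaussian psychometric function, decreasing with SOA."""
    return 1.0 - norm.cdf(soa, loc=pss, scale=sigma)

# Fit the two free parameters: PSS (50% crossover) and slope (sigma)
(pss, sigma), _ = curve_fit(cum_gauss, soa_ms, p_audio_first, p0=(0.0, 100.0))
jnd = sigma * norm.ppf(0.75)  # SOA shift from 50% to 75% performance

print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

On this kind of analysis, a patient-specific deficit such as AWF's for speech stimuli would surface as a markedly larger JND (a shallower psychometric slope) for speech videos than for object videos, relative to controls.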