A model of audio-visual motion integration during active self-movement.

IF 2.3 · CAS Tier 4 (Psychology) · Q2 (Ophthalmology) · Journal of Vision · Pub Date: 2025-02-03 · DOI: 10.1167/jov.25.2.8
Maria Gallagher, Joshua D Haynes, John F Culling, Tom C A Freeman
{"title":"A model of audio-visual motion integration during active self-movement.","authors":"Maria Gallagher, Joshua D Haynes, John F Culling, Tom C A Freeman","doi":"10.1167/jov.25.2.8","DOIUrl":null,"url":null,"abstract":"<p><p>Despite good evidence for optimal audio-visual integration in stationary observers, few studies have considered the impact of self-movement on this process. When the head and/or eyes move, the integration of vision and hearing is complicated, as the sensory measurements begin in different coordinate frames. To successfully integrate these signals, they must first be transformed into the same coordinate frame. We propose that audio and visual motion cues are separately transformed using self-movement signals, before being integrated as body-centered cues to audio-visual motion. We tested this hypothesis using a psychophysical audio-visual integration task in which participants made left/right judgments of audio, visual, or audio-visual targets during self-generated yaw head rotations. Estimates of precision and bias from the audio and visual conditions were used to predict performance in the audio-visual conditions. We found that audio-visual performance was predicted well by models that suggested the transformation of cues into common coordinates but could not be explained by a model that did not rely on coordinate transformation before integration. We also found that precision specifically was better predicted by a model that accounted for shared noise arising from signals encoding head movement. Taken together, our findings suggest that motion perception in active observers is based on the integration of partially correlated body-centered signals.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 2","pages":"8"},"PeriodicalIF":2.3000,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11841688/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Vision","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1167/jov.25.2.8","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
引用次数: 0

Abstract

Despite good evidence for optimal audio-visual integration in stationary observers, few studies have considered the impact of self-movement on this process. When the head and/or eyes move, the integration of vision and hearing is complicated, as the sensory measurements begin in different coordinate frames. To successfully integrate these signals, they must first be transformed into the same coordinate frame. We propose that audio and visual motion cues are separately transformed using self-movement signals, before being integrated as body-centered cues to audio-visual motion. We tested this hypothesis using a psychophysical audio-visual integration task in which participants made left/right judgments of audio, visual, or audio-visual targets during self-generated yaw head rotations. Estimates of precision and bias from the audio and visual conditions were used to predict performance in the audio-visual conditions. We found that audio-visual performance was predicted well by models that suggested the transformation of cues into common coordinates but could not be explained by a model that did not rely on coordinate transformation before integration. We also found that precision specifically was better predicted by a model that accounted for shared noise arising from signals encoding head movement. Taken together, our findings suggest that motion perception in active observers is based on the integration of partially correlated body-centered signals.
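
To make the model's logic concrete, the following minimal Python sketch (not the authors' code; all numeric values are hypothetical) illustrates the two ingredients the abstract describes: a coordinate transformation that maps each head-centered cue into body-centered coordinates using a head-movement signal, followed by maximum-likelihood integration of the two resulting estimates. The correlated-noise weights and variance are the standard results for combining two Gaussian cues with noise correlation rho, which here stands in for the shared noise contributed by the common head-movement signal.

```python
import numpy as np

def to_body_centered(cue_head_centered, head_velocity_signal):
    """Map a head-centered motion cue into body-centered coordinates by
    adding the internal estimate of head velocity (both in deg/s)."""
    return cue_head_centered + head_velocity_signal

def integrate(mu_a, var_a, mu_v, var_v, rho=0.0):
    """Maximum-likelihood combination of audio and visual body-centered
    estimates. rho is the noise correlation between the two cues; rho > 0
    stands in for shared noise from the common head-movement signal."""
    cov = rho * np.sqrt(var_a * var_v)            # shared-noise covariance
    denom = var_a + var_v - 2.0 * cov
    w_a = (var_v - cov) / denom                   # weight on the audio cue
    w_v = (var_a - cov) / denom                   # weight on the visual cue
    mu_av = w_a * mu_a + w_v * mu_v               # predicted AV estimate
    var_av = (var_a * var_v - cov**2) / denom     # predicted AV variance
    return mu_av, var_av

# Hypothetical unimodal precisions (SDs, deg/s) from the audio and visual
# conditions, plus head-centered cue readings and a head-velocity signal.
sigma_a, sigma_v = 4.0, 2.0                       # audio is the noisier cue
head_vel = 10.0                                   # self-generated yaw velocity
mu_a = to_body_centered(-9.5, head_vel)           # audio target motion
mu_v = to_body_centered(-10.2, head_vel)          # visual target motion

mu_av, var_av = integrate(mu_a, sigma_a**2, mu_v, sigma_v**2, rho=0.3)
print(f"predicted AV estimate {mu_av:+.2f} deg/s, SD {np.sqrt(var_av):.2f}")
```

With rho = 0 the formulas reduce to the familiar inverse-variance weighting of independent cues; a positive rho raises the predicted audio-visual variance, reducing the benefit of integration. That reduced benefit is the signature consistent with the abstract's finding that precision was better predicted by a model accounting for shared head-movement noise.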

Source journal: Journal of Vision (Medicine – Ophthalmology)
CiteScore: 2.90 · Self-citation rate: 5.60% · Articles per year: 218 · Review time: 3-6 weeks
Journal description: Exploring all aspects of biological visual function, including spatial vision, perception, low vision, color vision and more, spanning the fields of neuroscience, psychology and psychophysics.