Latest Publications in Multisensory Research

Audio-Visual Interference During Motion Discrimination in Starlings.
IF 1.6 · CAS Zone 4, Psychology · Q3 BIOPHYSICS · Pub Date: 2023-01-17 · DOI: 10.1163/22134808-bja10092
Gesa Feenders, Georg M Klump

Motion discrimination is essential for animals to avoid collisions, to escape from predators, to catch prey or to communicate. Although most terrestrial vertebrates can benefit by combining concurrent stimuli from sound and vision to obtain a most salient percept of the moving object, there is little research on the mechanisms involved in such cross-modal motion discrimination. We used European starlings as a model with a well-studied visual and auditory system. In a behavioural motion discrimination task with visual and acoustic stimuli, we investigated the effects of cross-modal interference and attentional processes. Our results showed an impairment of motion discrimination when the visual and acoustic stimuli moved in opposite directions as compared to congruent motion direction. By presenting an acoustic stimulus of very short duration, thus lacking directional motion information, an additional alerting effect of the acoustic stimulus became evident. Finally, we show that a temporally leading acoustic stimulus did not improve the response behaviour compared to the synchronous presentation of the stimuli as would have been expected in case of major alerting effects. This further supports the importance of congruency and synchronicity in the current test paradigm with a minor role of attentional processes elicited by the acoustic stimulus. Together, our data clearly show cross-modal interference effects in an audio-visual motion discrimination paradigm when carefully selecting real-life stimuli under parameter conditions that meet the known criteria for cross-modal binding.

Citations: 0
Can We Train Multisensory Integration in Adults? A Systematic Review.
IF 1.6 · CAS Zone 4, Psychology · Q3 BIOPHYSICS · Pub Date: 2023-01-13 · DOI: 10.1163/22134808-bja10090
Jessica O'Brien, Amy Mason, Jason Chan, Annalisa Setti

The ability to efficiently combine information from different senses is an important perceptual process that underpins many of our daily activities. This process, known as multisensory integration, varies from individual to individual, and is affected by the ageing process, with impaired processing associated with age-related conditions, including balance difficulties, mild cognitive impairment and cognitive decline. Impaired multisensory perception has also been associated with a range of neurodevelopmental conditions where novel intervention approaches are actively sought, for example dyslexia and autism. However, it remains unclear to what extent and how multisensory perception can be modified by training. This systematic review aims to evaluate the evidence that multisensory perception can be trained in neurotypical adults. In all, 1521 studies were identified following a systematic search of the databases PubMed, Scopus, PsychInfo and Web of Science. Following screening against inclusion and exclusion criteria, 27 studies were chosen for inclusion. Study quality was assessed using the Methodological Index for Non-Randomised Studies (MINORS) tool and the Cochrane Risk of Bias tool 2.0 for Randomised Controlled Trials. We found considerable evidence that in-task feedback training using psychophysics protocols led to improved task performance. The generalisability of this training to other tasks of multisensory integration was inconclusive, with few studies and mixed findings reported. Promising findings from exercise-based training indicate that physical activity protocols warrant further investigation as potential training avenues for improving multisensory integration. Future research directions should include trialling training protocols with clinical populations and other groups who would benefit from targeted training to improve inefficient multisensory integration.

Citations: 2
Front matter
CAS Zone 4, Psychology · Q3 BIOPHYSICS · Pub Date: 2023-01-11 · DOI: 10.1163/22134808-00351p14
Citations: 0
'Tasting Imagination': What Role Chemosensory Mental Imagery in Multisensory Flavour Perception?
IF 1.6 · CAS Zone 4, Psychology · Q3 BIOPHYSICS · Pub Date: 2022-12-30 · DOI: 10.1163/22134808-bja10091
Charles Spence

A number of perplexing phenomena in the area of olfactory/flavour perception may fruitfully be explained by the suggestion that chemosensory mental imagery can be triggered automatically by perceptual inputs. In particular, the disconnect between the seemingly limited ability of participants in chemosensory psychophysics studies to distinguish more than two or three odorants in mixtures and the rich and detailed flavour descriptions that are sometimes reported by wine experts; the absence of awareness of chemosensory loss in many elderly individuals; and the insensitivity of the odour-induced taste enhancement (OITE) effect to the mode of presentation of olfactory stimuli (i.e., orthonasal or retronasal). The suggestion made here is that the theory of predictive coding, developed first in the visual modality, be extended to chemosensation. This may provide a fruitful way of thinking about the interaction between mental imagery and perception in the experience of aromas and flavours. Accepting such a suggestion also raises some important questions concerning the ecological validity/meaning of much of the chemosensory psychophysics literature that has been published to date.

Citations: 2
The Impact of Singing on Visual and Multisensory Speech Perception in Children on the Autism Spectrum.
IF 1.8 · CAS Zone 4, Psychology · Q3 BIOPHYSICS · Pub Date: 2022-12-30 · DOI: 10.1163/22134808-bja10087
Jacob I Feldman, Alexander Tu, Julie G Conrad, Wayne Kuang, Pooja Santapuram, Tiffany G Woynaroski

Autistic children show reduced multisensory integration of audiovisual speech stimuli in response to the McGurk illusion. Previously, it has been shown that adults can integrate sung McGurk tokens. These sung speech tokens offer more salient visual and auditory cues, in comparison to the spoken tokens, which may increase the identification and integration of visual speech cues in autistic children. Forty participants (20 autism, 20 non-autistic peers) aged 7-14 completed the study. Participants were presented with speech tokens in four modalities: auditory-only, visual-only, congruent audiovisual, and incongruent audiovisual (i.e., McGurk; auditory 'ba' and visual 'ga'). Tokens were also presented in two formats: spoken and sung. Participants indicated what they perceived via a four-button response box (i.e., 'ba', 'ga', 'da', or 'tha'). Accuracies and perception of the McGurk illusion were calculated for each modality and format. Analysis of visual-only identification indicated a significant main effect of format, whereby participants were more accurate in sung versus spoken trials, but no significant main effect of group or interaction effect. Analysis of the McGurk trials indicated no significant main effect of format or group and no significant interaction effect. Sung speech tokens improved identification of visual speech cues, but did not boost the integration of visual cues with heard speech across groups. Additional work is needed to determine what properties of spoken speech contributed to the observed improvement in visual accuracy and to evaluate whether more prolonged exposure to sung speech may yield effects on multisensory integration.

Citations: 0
Crossmodal Texture Perception Is Illumination-Dependent.
IF 1.6 · CAS Zone 4, Psychology · Q3 BIOPHYSICS · Pub Date: 2022-12-28 · DOI: 10.1163/22134808-bja10089
Karina Kangur, Martin Giesel, Julie M Harris, Constanze Hesse

Visually perceived roughness of 3D textures varies with illumination direction. Surfaces appear rougher when the illumination angle is lowered, resulting in a lack of roughness constancy. Here we aimed to investigate whether the visual system also relies on illumination-dependent features when judging roughness in a crossmodal matching task, or whether it can access illumination-invariant surface features that can also be evaluated by the tactile system. Participants (N = 32) explored an abrasive paper of medium physical roughness either tactually or visually under two different illumination conditions (top vs oblique angle). Subsequently, they had to judge if a comparison stimulus (varying in physical roughness) matched the previously explored standard. Matching was performed either using the same modality as during exploration (intramodal) or using a different modality (crossmodal). In the intramodal conditions, participants performed equally well regardless of the modality or illumination employed. In the crossmodal conditions, participants selected rougher tactile matches after exploring the standard visually under oblique illumination than under top illumination. Conversely, after tactile exploration, they selected smoother visual matches under oblique than under top illumination. These findings confirm that visual roughness perception depends on illumination direction and show, for the first time, that this failure of roughness constancy also transfers to judgements made crossmodally.
Citations: 0
Body Pitch Together With Translational Body Motion Biases the Subjective Haptic Vertical.
IF 1.6 · CAS Zone 4, Psychology · Q3 BIOPHYSICS · Pub Date: 2022-12-20 · DOI: 10.1163/22134808-bja10086
Chia-Huei Tseng, Hiu Mei Chow, Lothar Spillmann, Matt Oxner, Kenzo Sakurai

Accurate perception of verticality is critical for postural maintenance and successful physical interaction with the world. Although previous research has examined the independent influences of body orientation and self-motion under well-controlled laboratory conditions, these factors are constantly changing and interacting in the real world. In this study, we examine the subjective haptic vertical in a real-world scenario. Here, we report a bias of verticality perception in a field experiment on the Hong Kong Peak Tram as participants traveled on a slope ranging from 6° to 26°. Mean subjective haptic vertical (SHV) increased with slope by as much as 15°, regardless of whether the eyes were open (Experiment 1) or closed (Experiment 2). Shifting the body pitch by a fixed degree in an effort to compensate for the mountain slope failed to reduce the verticality bias (Experiment 3). These manipulations separately rule out visual and vestibular inputs about absolute body pitch as contributors to our observed bias. Observations collected on a tram traveling on level ground (Experiment 4A) or in a static dental chair with a range of inclinations similar to those encountered on the mountain tram (Experiment 4B) showed no significant deviation of the subjective vertical from gravity. We conclude that the SHV error is due to a combination of large, dynamic body pitch and translational motion. These observations made in a real-world scenario represent an incentive to neuroscientists and aviation experts alike for studying perceived verticality under field conditions and raising awareness of dangerous misperceptions of verticality when body pitch and translational self-motion come together.
Citations: 0
Dynamic Weighting of Time-Varying Visual and Auditory Evidence During Multisensory Decision Making.
IF 1.6 · CAS Zone 4, Psychology · Q3 BIOPHYSICS · Pub Date: 2022-12-01 · DOI: 10.1163/22134808-bja10088
Rosanne R M Tuip, Wessel van der Ham, Jeannette A M Lorteije, Filip Van Opstal

Perceptual decision-making in a dynamic environment requires two integration processes: integration of sensory evidence from multiple modalities to form a coherent representation of the environment, and integration of evidence across time to accurately make a decision. Only recently have studies started to unravel how evidence from two modalities is accumulated across time to form a perceptual decision. One important question is whether information from individual senses contributes equally to multisensory decisions. We designed a new psychophysical task that measures how visual and auditory evidence is weighted across time. Participants were asked to discriminate between two visual gratings, and/or two sounds presented to the right and left ears, based on contrast and loudness respectively. We varied the evidence, i.e., the contrast of the gratings and the amplitude of the sound, over time. Results showed a significant increase in performance accuracy on multisensory trials compared to unisensory trials, indicating that discriminating between two sources is improved when multisensory information is available. Furthermore, we found that early evidence contributed most to sensory decisions. Weighting of unisensory information during audiovisual decision-making changed dynamically over time. A first epoch was characterized by both visual and auditory weighting, during the second epoch vision dominated, and the third epoch finalized the weighting profile with auditory dominance. Our results suggest that during our task multisensory improvement is generated by a mechanism that requires cross-modal interactions but also dynamically evokes dominance switching.
引用次数: 0
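A standard way to estimate such temporal weighting profiles is psychophysical reverse correlation: regress the observer's binary choices on the moment-by-moment evidence and read off one weight per epoch and modality. The sketch below is illustrative only — the simulated observer, trial counts, and epoch structure are assumptions, not the authors' data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: on each trial the observer receives 3 epochs of visual
# evidence and 3 epochs of auditory evidence, then makes a binary choice.
n_trials, n_epochs = 2000, 3
vis = rng.normal(0.0, 1.0, (n_trials, n_epochs))
aud = rng.normal(0.0, 1.0, (n_trials, n_epochs))

# Simulated observer that weights early evidence most, as the study reports.
true_w = np.array([1.0, 0.6, 0.3])
drive = vis @ true_w + aud @ true_w + rng.normal(0.0, 1.0, n_trials)
choice = (drive > 0).astype(float)

# Recover one weight per epoch and modality with a plain logistic regression,
# fitted here by batch gradient ascent on the average log-likelihood.
X = np.hstack([vis, aud])          # 6 regressors: 3 visual + 3 auditory epochs
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 1.0 * (X.T @ (choice - p)) / n_trials

vis_w, aud_w = w[:3], w[3:]
print("visual weights: ", np.round(vis_w, 2))
print("auditory weights:", np.round(aud_w, 2))
```

With this generative model the recovered weights fall off across epochs, mirroring the early-evidence dominance described in the abstract; in a real experiment the regressors would be the actual contrast and amplitude fluctuations presented on each trial.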
Prior Exposure to Dynamic Visual Displays Reduces Vection Onset Latency.
IF 1.6 4区 心理学 Q3 BIOPHYSICS Pub Date : 2022-11-16 DOI: 10.1163/22134808-bja10084
Jing Ni, Hiroyuki Ito, Masaki Ogawa, Shoji Sunaga, Stephen Palmisano

While compelling illusions of self-motion (vection) can be induced purely by visual motion, they are rarely experienced immediately. This vection onset latency is thought to represent the time required to resolve sensory conflicts between the stationary observer's visual and nonvisual information about self-motion. In this study, we investigated whether manipulations designed to increase the weightings assigned to vision (compared to the nonvisual senses) might reduce vection onset latency. We presented two different types of visual priming displays directly before our main vection-inducing displays: (1) 'random motion' priming displays - designed to pre-activate general, as opposed to self-motion-specific, visual motion processing systems; and (2) 'dynamic no-motion' priming displays - designed to stimulate vision, but not generate conscious motion perceptions. Prior exposure to both types of priming displays was found to significantly shorten vection onset latencies for the main self-motion display. These experiments show that vection onset latencies can be reduced by pre-activating the visual system with both types of priming display. Importantly, these visual priming displays did not need to be capable of inducing vection or conscious motion perception in order to produce such benefits.

Multisensory Research, 35(7-8), 653-676.
Citations: 0
Can the Perceived Timing of Multisensory Events Predict Cybersickness?
IF 1.6 4区 心理学 Q3 BIOPHYSICS Pub Date : 2022-10-24 DOI: 10.1163/22134808-bja10083
Ogai Sadiq, Michael Barnett-Cowan

Humans are constantly presented with rich sensory information that the central nervous system (CNS) must process to form a coherent perception of the self and its relation to its surroundings. While the CNS is efficient in processing multisensory information in natural environments, virtual reality (VR) poses challenges of temporal discrepancies that the CNS must solve. These temporal discrepancies between information from different sensory modalities leads to inconsistencies in perception of the virtual environment which often causes cybersickness. Here, we investigate whether individual differences in the perceived relative timing of sensory events, specifically parameters of temporal-order judgement (TOJ), can predict cybersickness. Study 1 examined audiovisual (AV) TOJs while Study 2 examined audio-active head movement (AAHM) TOJs. We deduced metrics of the temporal binding window (TBW) and point of subjective simultaneity (PSS) for a total of 50 participants. Cybersickness was quantified using the Simulator Sickness Questionnaire (SSQ). Study 1 results (correlations and multiple regression) show that the oculomotor SSQ shares a significant yet positive correlation with AV PSS and TBW. While there is a positive correlation between the total SSQ scores and the TBW and PSS, these correlations are not significant. Therefore, although these results are promising, we did not find the same effect for AAHM TBW and PSS. We conclude that AV TOJ may serve as a potential tool to predict cybersickness in VR. Such findings will generate a better understanding of cybersickness which can be used for development of VR to help mitigate discomfort and maximize adoption.

Multisensory Research, 35(7-8), 623-652.
Citations: 0
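The PSS and TBW metrics in the abstract above are conventionally obtained by fitting a psychometric function (e.g., a cumulative Gaussian) to the proportion of "visual first" responses as a function of stimulus-onset asynchrony (SOA). The sketch below is a minimal, dependency-free illustration; the data values, the grid-search fit, and the 25%-75% definition of the TBW are assumptions for the example, not taken from the study.

```python
import math

def cum_gauss(soa, pss, sigma):
    """P('visual first') at a given audiovisual SOA (ms)."""
    return 0.5 * (1.0 + math.erf((soa - pss) / (sigma * math.sqrt(2.0))))

# Hypothetical TOJ data: SOA in ms (negative = sound first), and the
# proportion of 'visual first' responses observed at each SOA.
soas = [-240, -120, -60, -30, 0, 30, 60, 120, 240]
p_vis_first = [0.05, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95, 0.98]

# Crude grid search for the best-fitting PSS and slope (least squares).
best_err, pss_hat, sigma_hat = float("inf"), 0.0, 1.0
for pss in range(-100, 101):            # candidate PSS, 1-ms steps
    for sigma in range(10, 301):        # candidate sigma, 1-ms steps
        err = sum((cum_gauss(s, pss, sigma) - p) ** 2
                  for s, p in zip(soas, p_vis_first))
        if err < best_err:
            best_err, pss_hat, sigma_hat = err, float(pss), float(sigma)

# PSS: the SOA at which both orders are reported equally often (50% point).
# TBW summarized here as the 25%-75% interval, which for a cumulative
# Gaussian equals 2 * 0.6745 * sigma.
tbw = 2 * 0.6745 * sigma_hat
print(f"PSS = {pss_hat:.0f} ms, TBW = {tbw:.0f} ms")
```

In practice one would fit by maximum likelihood on trial-level binomial data (e.g., with scipy or psignifit) rather than a grid search, but the extracted quantities — the 50% point and the width of the psychometric function — are the same ones the study correlates with SSQ scores.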