Statistical context learning in tactile search: Crossmodally redundant, visuo-tactile contexts fail to enhance contextual cueing

Siyi Chen, Zhuanghua Shi, Gizem Vural, H. Müller, T. Geyer
{"title":"Statistical context learning in tactile search: Crossmodally redundant, visuo-tactile contexts fail to enhance contextual cueing","authors":"Siyi Chen, Zhuanghua Shi, Gizem Vural, H. Müller, T. Geyer","doi":"10.3389/fcogn.2023.1124286","DOIUrl":null,"url":null,"abstract":"In search tasks, reaction times become faster when the target is repeatedly encountered at a fixed position within a consistent spatial arrangement of distractor items, compared to random arrangements. Such “contextual cueing” is also obtained when the predictive distractor context is provided by a non-target modality. Thus, in tactile search, finding a target defined by a deviant vibro-tactile pattern (delivered to one fingertip) from the patterns at other, distractor (fingertip) locations is facilitated not only when the configuration of tactile distractors is predictive of the target location, but also when a configuration of (collocated) visual distractors is predictive—where intramodal-tactile cueing is mediated by a somatotopic and crossmodal-visuotactile cueing by a spatiotopic reference frame. This raises the question of whether redundant multisensory, tactile-plus-visual contexts would enhance contextual cueing of tactile search over and above the level attained by unisensory contexts alone. To address this, we implemented a tactile search task in which, in 50% of the trials in a “multisensory” phase, the tactile target location was predicted by both the tactile and the visual distractor context; in the other 50%, as well as a “unisensory” phase, the target location was solely predicted by the tactile context. We observed no redundancy gains by multisensory-visuotactile contexts, compared to unisensory-tactile contexts. This argues that the reference frame for contextual learning is determined by the task-critical modality (somatotopic coordinates for tactile search). And whether redundant predictive contexts from another modality (vision) can enhance contextual cueing depends on the availability of the corresponding spatial (spatiotopic-visual to somatotopic-tactile) remapping routines.","PeriodicalId":94013,"journal":{"name":"Frontiers in Cognition","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Cognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fcogn.2023.1124286","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In search tasks, reaction times become faster when the target is repeatedly encountered at a fixed position within a consistent spatial arrangement of distractor items, compared to random arrangements. Such “contextual cueing” is also obtained when the predictive distractor context is provided by a non-target modality. Thus, in tactile search, finding a target defined by a deviant vibro-tactile pattern (delivered to one fingertip) among the patterns at other, distractor (fingertip) locations is facilitated not only when the configuration of tactile distractors is predictive of the target location, but also when a configuration of (collocated) visual distractors is predictive—where intramodal-tactile cueing is mediated by a somatotopic, and crossmodal-visuotactile cueing by a spatiotopic, reference frame. This raises the question of whether redundant multisensory, tactile-plus-visual contexts would enhance contextual cueing of tactile search over and above the level attained by unisensory contexts alone. To address this, we implemented a tactile search task in which, in 50% of the trials in a “multisensory” phase, the tactile target location was predicted by both the tactile and the visual distractor context; in the other 50%, as well as in a “unisensory” phase, the target location was predicted solely by the tactile context. We observed no redundancy gains from multisensory, visuo-tactile contexts compared to unisensory, tactile contexts. This argues that the reference frame for contextual learning is determined by the task-critical modality (somatotopic coordinates for tactile search), and that whether redundant predictive contexts from another modality (vision) can enhance contextual cueing depends on the availability of the corresponding spatial (spatiotopic-visual to somatotopic-tactile) remapping routines.
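To make the trial structure concrete, here is a minimal Python sketch of the design described in the abstract. It is an illustrative reconstruction, not the authors' materials: the trial counts, the block ordering, and all function and variable names are hypothetical.

```python
import random

# Hypothetical trial count per phase; the paper specifies the actual numbers.
N_TRIALS_PER_PHASE = 96


def make_phase(phase):
    """Build a shuffled trial list for one experimental phase.

    'multisensory' phase: half the trials pair the predictive tactile
    context with a collocated, equally predictive visual context; the
    other half is predicted by the tactile context alone.
    'unisensory' phase: every trial is predicted by the tactile context alone.

    In the full design, each cell would further contain repeated
    (predictive) and random (non-predictive) configurations; contextual
    cueing is then the reaction-time benefit RT(random) - RT(repeated).
    """
    trials = []
    for i in range(N_TRIALS_PER_PHASE):
        if phase == "multisensory" and i < N_TRIALS_PER_PHASE // 2:
            context = "tactile+visual"  # redundant, crossmodally collocated
        else:
            context = "tactile-only"    # unisensory predictive context
        trials.append({"phase": phase, "context": context})
    random.shuffle(trials)
    return trials


# Unisensory baseline phase followed by the multisensory phase.
schedule = make_phase("unisensory") + make_phase("multisensory")
```

Under this reading of the design, comparing the cueing effect between the "tactile+visual" and "tactile-only" trials of the multisensory phase is what tests for the redundancy gain the abstract reports as absent.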