Single-object consistency facilitates multisensory pair learning: Evidence for unitization

Elan Barenholtz, D. Lewkowicz, Lauren Kogelschatz
{"title":"单对象一致性促进多感官配对学习:统一的证据","authors":"Elan Barenholtz, D. Lewkowicz, Lauren Kogelschatz","doi":"10.1163/187847612X646343","DOIUrl":null,"url":null,"abstract":"Learning about objects often involves associating multisensory properties such as the taste and smell of a food or the face and voice of a person. Here, we report a novel phenomenon in associative learning in which pairs of multisensory attributes that are consistent with deriving from a single object are learned better than pairs that are not. In Experiment 1, we found superior learning of arbitrary pairs of human faces and voices when they were gender-congruent — and thus were consistent with belonging to a single personal identity — compared with gender-incongruent pairs. In Experiment 2, we found a similar advantage when the learned pair consisted of species-congruent animal pictures and vocalizations vs. species-incongruent pairs. In Experiment 3, we found that temporal synchrony — which provides a highly reliable alternative cue that properties derive from a single object — improved performance specifically for the incongruent pairs. Together, these findings demonstrate a novel principle in associative learning in which multisensory pairs that are consistent with having a single object as their source are learned more easily than multisensory pairs that are not. These results suggest that unitizing multisensory properties into a single representation may be a specialized learning mechanism.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"87 1","pages":"11-11"},"PeriodicalIF":0.0000,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646343","citationCount":"0","resultStr":"{\"title\":\"Single-object consistency facilitates multisensory pair learning: Evidence for unitization\",\"authors\":\"Elan Barenholtz, D. Lewkowicz, Lauren Kogelschatz\",\"doi\":\"10.1163/187847612X646343\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Learning about objects often involves associating multisensory properties such as the taste and smell of a food or the face and voice of a person. Here, we report a novel phenomenon in associative learning in which pairs of multisensory attributes that are consistent with deriving from a single object are learned better than pairs that are not. In Experiment 1, we found superior learning of arbitrary pairs of human faces and voices when they were gender-congruent — and thus were consistent with belonging to a single personal identity — compared with gender-incongruent pairs. In Experiment 2, we found a similar advantage when the learned pair consisted of species-congruent animal pictures and vocalizations vs. species-incongruent pairs. In Experiment 3, we found that temporal synchrony — which provides a highly reliable alternative cue that properties derive from a single object — improved performance specifically for the incongruent pairs. Together, these findings demonstrate a novel principle in associative learning in which multisensory pairs that are consistent with having a single object as their source are learned more easily than multisensory pairs that are not. 
These results suggest that unitizing multisensory properties into a single representation may be a specialized learning mechanism.\",\"PeriodicalId\":49553,\"journal\":{\"name\":\"Seeing and Perceiving\",\"volume\":\"87 1\",\"pages\":\"11-11\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1163/187847612X646343\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Seeing and Perceiving\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1163/187847612X646343\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Seeing and Perceiving","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1163/187847612X646343","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Learning about objects often involves associating multisensory properties such as the taste and smell of a food or the face and voice of a person. Here, we report a novel phenomenon in associative learning in which pairs of multisensory attributes that are consistent with deriving from a single object are learned better than pairs that are not. In Experiment 1, we found superior learning of arbitrary pairs of human faces and voices when they were gender-congruent — and thus were consistent with belonging to a single personal identity — compared with gender-incongruent pairs. In Experiment 2, we found a similar advantage when the learned pair consisted of species-congruent animal pictures and vocalizations vs. species-incongruent pairs. In Experiment 3, we found that temporal synchrony — which provides a highly reliable alternative cue that properties derive from a single object — improved performance specifically for the incongruent pairs. Together, these findings demonstrate a novel principle in associative learning in which multisensory pairs that are consistent with having a single object as their source are learned more easily than multisensory pairs that are not. These results suggest that unitizing multisensory properties into a single representation may be a specialized learning mechanism.