What do we see behind an occluder? Amodal completion of statistical properties in complex objects.

Attention, Perception, & Psychophysics · IF 1.7 · JCR Q3 (Psychology) · CAS Tier 4 (Psychology) · Pub Date: 2024-10-26 · DOI: 10.3758/s13414-024-02948-w
Thomas Cherian, S P Arun
{"title":"我们在遮挡物后面看到了什么?复杂物体统计特性的模态完成。","authors":"Thomas Cherian, S P Arun","doi":"10.3758/s13414-024-02948-w","DOIUrl":null,"url":null,"abstract":"<p><p>When a spiky object is occluded, we expect its spiky features to continue behind the occluder. Although many real-world objects contain complex features, it is unclear how more complex features are amodally completed and whether this process is automatic. To investigate this issue, we created pairs of displays with identical contour edges up to the point of occlusion, but with occluded portions exchanged. We then asked participants to search for oddball targets among distractors and asked whether relations between searches involving occluded displays would match better with relations between searches involving completions that are either globally consistent or inconsistent with the visible portions of these displays. Across two experiments involving simple and complex shapes, search times involving occluded displays matched better with those involving globally consistent compared with inconsistent displays. Analogous analyses on deep networks pretrained for object categorization revealed a similar pattern of results for simple but not complex shapes. Thus, deep networks seem to extrapolate simple occluded contours but not more complex contours. Taken together, our results show that amodal completion in humans is sophisticated and can be based on extrapolating global statistical properties.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"What do we see behind an occluder? Amodal completion of statistical properties in complex objects.\",\"authors\":\"Thomas Cherian, S P Arun\",\"doi\":\"10.3758/s13414-024-02948-w\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>When a spiky object is occluded, we expect its spiky features to continue behind the occluder. Although many real-world objects contain complex features, it is unclear how more complex features are amodally completed and whether this process is automatic. To investigate this issue, we created pairs of displays with identical contour edges up to the point of occlusion, but with occluded portions exchanged. We then asked participants to search for oddball targets among distractors and asked whether relations between searches involving occluded displays would match better with relations between searches involving completions that are either globally consistent or inconsistent with the visible portions of these displays. Across two experiments involving simple and complex shapes, search times involving occluded displays matched better with those involving globally consistent compared with inconsistent displays. Analogous analyses on deep networks pretrained for object categorization revealed a similar pattern of results for simple but not complex shapes. Thus, deep networks seem to extrapolate simple occluded contours but not more complex contours. 
Taken together, our results show that amodal completion in humans is sophisticated and can be based on extrapolating global statistical properties.</p>\",\"PeriodicalId\":55433,\"journal\":{\"name\":\"Attention Perception & Psychophysics\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2024-10-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Attention Perception & Psychophysics\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.3758/s13414-024-02948-w\",\"RegionNum\":4,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"PSYCHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Attention Perception & Psychophysics","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3758/s13414-024-02948-w","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PSYCHOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

When a spiky object is occluded, we expect its spiky features to continue behind the occluder. Although many real-world objects contain complex features, it is unclear how more complex features are amodally completed and whether this process is automatic. To investigate this issue, we created pairs of displays with identical contour edges up to the point of occlusion, but with occluded portions exchanged. We then asked participants to search for oddball targets among distractors and asked whether relations between searches involving occluded displays would match better with relations between searches involving completions that are either globally consistent or inconsistent with the visible portions of these displays. Across two experiments involving simple and complex shapes, search times involving occluded displays matched better with those involving globally consistent compared with inconsistent displays. Analogous analyses on deep networks pretrained for object categorization revealed a similar pattern of results for simple but not complex shapes. Thus, deep networks seem to extrapolate simple occluded contours but not more complex contours. Taken together, our results show that amodal completion in humans is sophisticated and can be based on extrapolating global statistical properties.
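The deep-network comparison described in the abstract can be illustrated with a short sketch. The abstract names neither the network, the feature layer, the dissimilarity measure, nor the stimulus files, so every specific below is an assumption rather than the authors' actual pipeline: an ImageNet-pretrained ResNet-50, penultimate-layer features, cosine distance between displays, Spearman correlation between dissimilarity profiles, and placeholder image file names.

# A minimal sketch of the deep-network comparison, under the assumptions
# stated above; this is not the authors' actual pipeline.
import itertools

import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from scipy.stats import spearmanr

# Pretrained object-categorization network with the classifier head removed,
# so the forward pass returns penultimate-layer features.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Identity()
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def features(paths):
    # One feature vector per display image.
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    with torch.no_grad():
        return model(batch)

def pairwise_dissimilarity(feats):
    # Cosine distance for every unordered pair of displays.
    f = torch.nn.functional.normalize(feats, dim=1)
    sim = (f @ f.T).numpy()
    idx = itertools.combinations(range(sim.shape[0]), 2)
    return np.array([1.0 - sim[i, j] for i, j in idx])

# Placeholder file names for the three display sets (hypothetical).
occluded = features([f"occluded_{i:02d}.png" for i in range(8)])
consistent = features([f"consistent_{i:02d}.png" for i in range(8)])
inconsistent = features([f"inconsistent_{i:02d}.png" for i in range(8)])

d_occ = pairwise_dissimilarity(occluded)
d_con = pairwise_dissimilarity(consistent)
d_inc = pairwise_dissimilarity(inconsistent)

# If the network completes occluded contours, relations among occluded
# displays should track the globally consistent completions more closely.
print("occluded vs consistent:  ", spearmanr(d_occ, d_con).correlation)
print("occluded vs inconsistent:", spearmanr(d_occ, d_inc).correlation)

Under these assumptions, a higher correlation for the globally consistent completions would mirror the pattern the authors report for simple shapes.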

Source journal
CiteScore: 3.60
Self-citation rate: 17.60%
Publication volume: 197
Review time: 4-8 weeks
About the journal: The journal Attention, Perception, & Psychophysics is an official journal of the Psychonomic Society. It spans all areas of research in sensory processes, perception, attention, and psychophysics. Most articles published are reports of experimental work; the journal also presents theoretical, integrative, and evaluative reviews. Commentary on issues of importance to researchers appears in a special section of the journal. Founded in 1966 as Perception & Psychophysics, the journal assumed its present name in 2009.