
Latest publications in VISUAL COGNITION

Objects’ perceived meaningfulness predicts both subjective memorability judgments and actual memory performance
IF 2.0 | Tier 4 (Psychology) | Q1 (Arts and Humanities) | Pub Date: 2023-12-26 | DOI: 10.1080/13506285.2023.2288433
Roy Shoval, Nurit Gronau, Yael Sidi, Tal Makovski
Memorability studies have revealed a limitation in our ability to accurately judge which images are memorable. Conversely, metacognitive research suggests that individuals can utilize cues to relia...
Citations: 0
What makes a visual scene more memorable? A rapid serial visual presentation (RSVP) study with dynamic visual scenes
IF 2.0 | Tier 4 (Psychology) | Q1 (Arts and Humanities) | Pub Date: 2023-12-06 | DOI: 10.1080/13506285.2023.2288361
Ayşe Candan Şimşek, Nazif Karaca, Berk Can Kırmızı, Furkan Ekiz
The visual system has been characterized as having limited processing capacity. Research suggests that not all visual information is equal and that certain visual scenes are registered better than ...
Citations: 0
The other-race effect in face recognition: Do people shift criterion equally for own- and other-race faces?
IF 2.0 | Tier 4 (Psychology) | Q1 (Arts and Humanities) | Pub Date: 2023-12-04 | DOI: 10.1080/13506285.2023.2288358
Daniel Guilbert, Sachiko Kinoshita, Kim M. Curby
People are better at recognizing own-race faces than other-race faces. This other-race effect in face recognition typically manifests in sensitivity (i.e., better discrimination). However, research...
Citations: 0
Understanding face detection with visual arrays and real-world scenes
IF 2.0 | Tier 4 (Psychology) | Q1 (Arts and Humanities) | Pub Date: 2023-11-20 | DOI: 10.1080/13506285.2023.2277475
Alice Nevard, Graham J. Hole, Jonathan E. Prunty, Markus Bindemann
Face detection has been studied by presenting faces in blank displays, object arrays, and real-world scenes. This study investigated whether these display contexts differ in what they can reveal ab...
Citations: 0
When memory meets distraction: The role of unexpected stimulus-driven attentional capture on contextual cueing
Tier 4 (Psychology) | Q1 (Arts and Humanities) | Pub Date: 2023-11-09 | DOI: 10.1080/13506285.2023.2279217
Danlei Chen, J. Benjamin Hutchinson
ABSTRACT: Visuospatial attention plays a critical role in prioritizing behaviourally-relevant information and can be guided by task goals, stimulus salience, and memory. Here, we examined the interaction between memory-guided attention (contextual cueing) and stimulus-driven attention (unexpected colour singletons). In two visual search experiments with different set sizes, colour singletons were introduced unexpectedly in some trials after repeated configurations were used to establish contextual cueing. Reaction times were rapidly impacted by both contextual cueing and colour singletons, without significant interaction. However, introducing colour singletons also impeded reaction times for novel configurations without colour singletons, while repeated configurations were not impacted. These results suggest that on a trial level, contextual cueing and colour singleton effects are largely two independent factors driving selective attention, but there is evidence for a more general disruption of introducing distraction in cases where memory cannot be relied upon, suggesting a more complex interaction between attentional influences.

KEYWORDS: Visual search; contextual cueing; pop-out effect; episodic memory

Acknowledgments: We thank Emma Takizawa, Ramana Housman, and Sarah Zhang for participant recruitment and data collection.

Disclosure statement: No potential conflict of interest was reported by the author(s).
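The design described above crosses memory-guided attention (repeated vs. novel configurations) with stimulus-driven capture (colour singleton present vs. absent) and asks whether the two effects interact in reaction times. A minimal sketch of how such a 2 × 2 comparison could be computed from trial-level data follows; the column names (rt, configuration, singleton) and the simulated effect sizes are illustrative assumptions, not the authors' data or analysis pipeline.

```python
# Hypothetical sketch: contextual-cueing and singleton effects from trial-level RTs.
# Column names and simulated effect sizes are illustrative assumptions only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 400
trials = pd.DataFrame({
    "configuration": rng.choice(["repeated", "novel"], n),
    "singleton": rng.choice(["absent", "present"], n),
})
# Simulated RTs: repeated configurations are faster (cueing); singletons slow search (capture).
trials["rt"] = (
    650.0
    - 40 * (trials["configuration"] == "repeated")
    + 30 * (trials["singleton"] == "present")
    + rng.normal(0, 60, n)
)

cell_means = trials.groupby(["configuration", "singleton"])["rt"].mean().unstack()
cueing_effect = cell_means.loc["novel"] - cell_means.loc["repeated"]  # at each singleton level
singleton_cost = cell_means["present"] - cell_means["absent"]         # at each configuration level
interaction = cueing_effect["present"] - cueing_effect["absent"]      # ~0 if the effects are independent

print(cell_means.round(1))
print("Cueing effect (ms):", cueing_effect.round(1).to_dict())
print("Singleton cost (ms):", singleton_cost.round(1).to_dict())
print(f"Interaction (ms): {interaction:.1f}")
```

On this reading, the abstract's trial-level result corresponds to sizeable cueing and singleton effects with an interaction term near zero.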
Citations: 0
Memorable beginnings, but forgettable endings: Intrinsic memorability alters our subjective experience of time
Tier 4 (Psychology) | Q1 (Arts and Humanities) | Pub Date: 2023-10-31 | DOI: 10.1080/13506285.2023.2268382
Madeline Gedvila, Joan Danielle K. Ongchoco, Wilma A. Bainbridge
ABSTRACT: Time is the fabric of experience – yet it is incredibly malleable in the mind of the observer: seeming to drag on, or fly right by at different moments. One of the most influential drivers of temporal distortions is attention, where heightened attention dilates subjective time. But an equally important feature of subjective experience involves not just the objects of attention, but also what information will naturally be remembered or forgotten, independent of attention (i.e., intrinsic image memorability). Here we test how memorability influences time perception. Observers viewed scenes in an oddball paradigm, where the last scene could be a forgettable “oddball” amidst memorable ones, or vice versa. Subjective time dilation occurred only for forgettable oddballs, but not memorable ones – demonstrating an oddball effect where the oddball did not differ in low-level visual features, image category, or even subjective memorability. But more importantly, these results emphasize how memory can interact with temporal experience: memorable beginnings may put people in an efficient encoding state, which may in turn influence which moments are dilated in time.

KEYWORDS: Time perception; time dilation; oddball effect; memorability; scene perception

Disclosure statement: No potential conflict of interest was reported by the author(s).

Author contributions: MG, JDKO, and WAB designed the research and wrote the manuscript. MG and JDKO conducted the experiments and analyzed the data with input from WAB.

Open practices: All data will be available in the Supplementary Raw Data Archive included with this submission, and via OSF: https://osf.io/dkxez/?view_only=38c7d6db309d49219360b21c41b431d2.

Funding: MG was funded by the University of Chicago Metcalf Research Internship in Neuroscience. WAB is supported by the National Eye Institute (R01-EY034432). For helpful comments, we thank the members of the Brain Bridge Lab.
Citations: 0
Investigating the automaticity of links between body perception and trait concepts
Tier 4 (Psychology) | Q1 (Arts and Humanities) | Pub Date: 2023-10-16 | DOI: 10.1080/13506285.2023.2250505
Andrew Wildman, Richard Ramsey
ABSTRACT: Social cognition has been argued to rely on automatic mechanisms, but little is known about how automatically the processing of body shapes is linked to other social processes, such as trait inference. In three pre-registered experiments, we tested the automaticity of links between body shape perception and trait inference by manipulating cognitive load during a response-competition task. In Experiment 1 (N = 52), participants categorised body shapes in the context of compatible or incompatible trait words, under high and low cognitive load. Bayesian multi-level modelling of reaction times indicated that interference caused by the compatibility of trait cues was insensitive to concurrent demands placed on working memory resources. These findings indicate that the linking of body shapes and traits is resource-light and more “automatic” in this sense. In Experiments 2 (N = 39) and 3 (N = 70), we asked participants to categorise trait words in the context of task-irrelevant body shapes. Under these conditions, no evidence of interference was found, regardless of concurrent load. These results suggest that while body shapes and trait concepts can be linked in an automatic manner, such processes are sensitive to wider contextual factors, such as the order in which information is presented.

KEYWORDS: Social cognition; body perception; automaticity; trait inference; cognitive load

Disclosure statement: No potential conflict of interest was reported by the author(s).
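The central analysis asks whether the compatibility interference effect on reaction times is modulated by cognitive load, with participants as a grouping factor. The abstract reports Bayesian multi-level modelling; the sketch below uses a frequentist mixed-effects analogue (statsmodels) purely to illustrate that model structure, and the column names (rt, compatibility, load, participant) and simulated effects are assumptions, not the authors' data.

```python
# Hypothetical sketch: multi-level model of RTs with a compatibility x load interaction.
# The paper reports Bayesian multi-level modelling; this frequentist mixed model
# (random intercept per participant) only illustrates the model structure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
participants = np.repeat(np.arange(40), 80)  # 40 participants x 80 trials each
n = participants.size
df = pd.DataFrame({
    "participant": participants,
    "compatibility": rng.choice(["compatible", "incompatible"], n),
    "load": rng.choice(["low", "high"], n),
})
subject_shift = rng.normal(0, 30, 40)[df["participant"]]  # per-participant baseline shift
df["rt"] = (
    600
    + 25 * (df["compatibility"] == "incompatible")  # interference from incompatible trait words
    + 15 * (df["load"] == "high")                    # slower responses under load
    + subject_shift
    + rng.normal(0, 70, n)
)

# Random-intercept model: does the compatibility effect change under load?
model = smf.mixedlm("rt ~ compatibility * load", df, groups=df["participant"])
result = model.fit()
print(result.summary())
```

A Bayesian version would fit the same fixed-effect structure with priors and per-participant random effects, e.g., a formula along the lines of rt ~ compatibility * load + (1|participant) in a package such as brms or bambi.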
Citations: 0
Are attentional momentum and representational momentum related?
Tier 4 (Psychology) | Q1 (Arts and Humanities) | Pub Date: 2023-10-02 | DOI: 10.1080/13506285.2023.2263204
Timothy L. Hubbard, Susan E. Ruppel
ABSTRACT: In attentional momentum, detection of a target further ahead in the direction of an ongoing attention shift is faster than detection of a target an equal distance in an orthogonal direction. In representational momentum, memory for the location of a previously viewed target is displaced in the direction of target motion. Hubbard [Hubbard, T. L. (2014). Forms of momentum across space: Representational, operational, and attentional. Psychonomic Bulletin & Review, 21(6), 1371–1403; Hubbard, T. L. (2015). The varieties of momentum-like experience. Psychological Bulletin, 141(6), 1081–1119] hypothesized that attentional momentum and representational momentum might be related or reflect the same mechanism or similar mechanisms. Two experiments collected measures of attentional momentum and representational momentum. In Experiment 1, attentional momentum based on differences between detecting targets opposite or orthogonal to a cued location was not correlated with representational momentum based on M displacement for the final location of a target. In Experiment 2, attentional momentum based on facilitation in detecting a gap on a probe presented in front of the final target location was not correlated with representational momentum based on a weighted mean of the probabilities of a same response in probe judgments of the final target location. Implications of the findings for the relationship of attentional momentum and representational momentum, and for theories of momentum-like effects in general, are considered.

KEYWORDS: Attentional momentum; representational momentum; displacement; spatial representation

Acknowledgement: The authors thank two anonymous reviewers for helpful comments on a previous version of the manuscript.

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes: (1) Durations of the different stages of a trial differed slightly from those in Pratt et al. (1999) to ensure that timing in the attentional momentum task was consistent with timing in the representational momentum task. (2) Hubbard (2019) suggested that an understanding of momentum-like processes needed to consider all of Marr's (1982) levels of analysis. Accordingly, although attentional momentum and representational momentum appear similar at the level of computational theory (i.e., both facilitate processing of spatial information expected to be present in the near future and both involve displacement across space; Hubbard, 2014, 2015), the current data suggest attentional momentum and representational momentum could be different at the level of representation and algorithm or the level of implementation (i.e., involve different mechanisms).
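In effect, each experiment reduces to one attentional-momentum score and one representational-momentum score per participant, which are then correlated across participants. A minimal sketch of that computation is shown below; the column names (rt, probe_direction, m_displacement) and simulated values are assumptions for illustration and do not reproduce the authors' materials or measures.

```python
# Hypothetical sketch: per-participant attentional-momentum and
# representational-momentum scores, then their correlation across participants.
# Column names and simulated values are illustrative assumptions only.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_participants, n_trials = 60, 100

rows = []
for p in range(n_participants):
    probe_direction = rng.choice(["opposite", "orthogonal"], n_trials)
    # Targets ahead of the attention shift ("opposite" the cue) are detected faster.
    rt = 450 - 20 * (probe_direction == "opposite") + rng.normal(0, 50, n_trials)
    m_disp = rng.normal(5, 3)  # forward displacement of the remembered final target position
    rows.append(pd.DataFrame({"participant": p, "probe_direction": probe_direction,
                              "rt": rt, "m_displacement": m_disp}))
data = pd.concat(rows, ignore_index=True)

# Attentional momentum: RT advantage for opposite vs. orthogonal probe locations.
mean_rt = data.pivot_table(index="participant", columns="probe_direction",
                           values="rt", aggfunc="mean")
scores = pd.DataFrame({
    "attentional_momentum": mean_rt["orthogonal"] - mean_rt["opposite"],
    "representational_momentum": data.groupby("participant")["m_displacement"].first(),
})

r, p_value = pearsonr(scores["attentional_momentum"], scores["representational_momentum"])
print(f"r = {r:.2f}, p = {p_value:.3f}")
```

A near-zero correlation from such a per-participant analysis is what the abstract reports in both experiments.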
Citations: 0
Serial and joint processing of conjunctive predictions
IF 2.0 | Tier 4 (Psychology) | Q1 (Arts and Humanities) | Pub Date: 2023-09-05 | DOI: 10.1080/13506285.2023.2250506
R. Yu, Jiaying Zhao
ABSTRACT: When two jointly presented cues predict different outcomes, people respond faster to the conjunction/overlap of the outcomes. Two explanations exist. In the joint account, people prioritize the conjunction. In the serial account, people process cues serially and incidentally respond faster to the conjunction. We tested these accounts in three experiments using novel web-based attention-tracking tools. Participants learned colour-location associations in which colours predicted target locations (Experiment 1). Afterward, two cues appeared jointly and targets followed randomly. Exploratory data showed that participants initially prioritized locations consistent with the conjunction, shifting later. Experiment 2 presented complex colour-category associations during exposure. Upon seeing joint cues, participants' responses indicated both serial and joint processing. Experiment 3, with imperfect cue-outcome associations during exposure, surprisingly showed robust conjunctive predictions, likely because people expected exceptions to their predictions. Overall, strong learning led to spontaneous conjunctive predictions, but there were quick shifts to alternatives such as serial processing when people were not expecting exceptions.
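The "conjunction/overlap of outcomes" can be thought of as the intersection of the outcome sets predicted by the two cues, whereas the serial account simply processes each cue's set in turn. A toy sketch of that contrast is given below; the colour-to-location mappings and function names are illustrative assumptions, not the experiments' actual stimuli or procedure.

```python
# Toy sketch: two cues each predict a set of locations; the "conjunctive prediction"
# is their overlap. The colour-to-location mappings are illustrative assumptions.
from typing import Dict, List, Set

cue_predictions: Dict[str, Set[str]] = {
    "red":   {"top-left", "top-right"},
    "blue":  {"top-right", "bottom-right"},
    "green": {"bottom-left", "bottom-right"},
}

def conjunctive_prediction(cue_a: str, cue_b: str) -> Set[str]:
    """Joint account: prioritise only locations consistent with BOTH cues."""
    return cue_predictions[cue_a] & cue_predictions[cue_b]

def serial_predictions(cue_a: str, cue_b: str) -> List[Set[str]]:
    """Serial account: process one cue's predicted locations, then the other's."""
    return [cue_predictions[cue_a], cue_predictions[cue_b]]

print(conjunctive_prediction("red", "blue"))  # expected: {'top-right'}
print(serial_predictions("red", "blue"))      # both full sets, inspected in turn
```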
{"title":"Serial and joint processing of conjunctive predictions","authors":"R. Yu, Jiaying Zhao","doi":"10.1080/13506285.2023.2250506","DOIUrl":"https://doi.org/10.1080/13506285.2023.2250506","url":null,"abstract":"ABSTRACT When two jointly presented cues predict different outcomes, people respond faster to the conjunction/overlap of outcomes. Two explanations exist. In the joint account, people prioritize conjunction. In the serial account, people process cues serially and incidentally respond faster to conjunction. We tested these accounts in three experiments using novel web based attention-tracking tools. Participants learned colour-location associations where colorus predicted target locations (Experiment 1). Afterward, two cues appeared jointly and targets followed randomly. Exploratory data showed participants initially prioritized locations consistent with the conjunction, shifting later. Experiment 2 presented complex color-category associations during exposure. Upon seeing joint cues, participants' responses indicated both serial and joint processing. Experiment 3, with imperfect cue-outcome associations during exposure, surprisingly showed robust conjunctive predictions, likely because people expected exceptions to their predictions. Overall, strong learning led to spontaneous conjunctive predictions, but there were quick shifts to alternatives like serial processing when people were not expecting exceptions.","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44417670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
How does exogenous alerting impact endogenous preparation on a temporal cueing task
IF 2.0 | Tier 4 (Psychology) | Q1 (Arts and Humanities) | Pub Date: 2023-08-30 | DOI: 10.1080/13506285.2023.2250530
C. R. McCormick, R. S. Redden, R. Klein
{"title":"How does exogenous alerting impact endogenous preparation on a temporal cueing task","authors":"C. R. McCormick, R. S. Redden, R. Klein","doi":"10.1080/13506285.2023.2250530","DOIUrl":"https://doi.org/10.1080/13506285.2023.2250530","url":null,"abstract":"","PeriodicalId":47961,"journal":{"name":"VISUAL COGNITION","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44806893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0