Pub Date: 2026-03-23 | DOI: 10.3758/s13414-026-03240-9
Qinyue Qian, Xiaolan Wei, Tianyang Zhang, Aijun Wang, Ming Zhang
Biological motion exhibits bistable characteristics when presented in the depth dimension, and sound, as an important multisensory cue, can modulate this bistable perception. Previous studies often adopted nonbiological tones and did not fully control the inherent bias of visual stimuli. The underlying cognitive mechanism also requires further exploration using computational models. To address these research needs, the present study combined psychophysical methods with the hierarchical drift-diffusion model (HDDM) to investigate the effects of footstep sounds on bistable biological motion processing and its underlying mechanism. A total of 24 naïve participants completed the experiment. Results showed that the proportion of "facing the viewer" (FTV) responses was significantly higher under the looming and constant sound conditions than under the receding sound condition, and reaction time (RT) in the no-sound condition was significantly slower than in the other three sound conditions. Further HDDM analysis revealed that sound regulates the processing of bistable biological motion by shortening nondecision time (t) and modulating drift rate (v). The study demonstrates that footstep sounds accelerate the processing of bistable biological motion, and that the directional information carried by sound drives visual perception to align with it. This effect is mediated by a two-stage mechanism that modulates nondecision processing (including early perceptual encoding) and strengthens evidence accumulation. This study provides empirical evidence for understanding the role of multisensory interaction in the perception of bistable biological motion. The data, materials, and code are available in the Open Science Framework (OSF) repository ( https://osf.io/3vm7p/ ). None of the experiments was preregistered.
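The HDDM parameters named in the abstract can be made concrete with a minimal drift-diffusion simulation. This is a hedged sketch, not the authors' model: the function name, parameter defaults, and noise level are illustrative assumptions. It shows the two-stage logic — nondecision time (t) shifts the entire RT distribution, while drift rate (v) shapes both response speed and which interpretation wins.

```python
import numpy as np

def simulate_ddm(v, a=1.0, t=0.3, z=0.5, dt=0.001, noise=1.0, rng=None):
    """Simulate one drift-diffusion trial (illustrative parameters).

    v: drift rate (speed and direction of evidence accumulation)
    a: boundary separation; t: nondecision time in seconds (encoding + motor)
    z: relative starting point between the two boundaries.
    Returns (response time in seconds, choice: 1 = upper boundary, 0 = lower).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = z * a          # evidence starts between the two decision boundaries
    elapsed = 0.0
    while 0.0 < x < a:
        # noisy evidence accumulation toward one of the two boundaries
        x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
        elapsed += dt
    return t + elapsed, int(x >= a)

rng = np.random.default_rng(0)
rts = [simulate_ddm(v=1.5, t=0.3, rng=rng)[0] for _ in range(500)]
# A shorter nondecision time t shifts the whole RT distribution earlier;
# a stronger drift rate v both speeds responses and biases the choice.
```

A positive drift here plays the role of a directional cue favoring one percept: it makes upper-boundary ("cue-consistent") responses dominate while also shortening decision time.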
Title: "Footstep sounds influence bistable biological motion perception." (Attention Perception & Psychophysics, 88(4))
Pub Date: 2026-03-23 | DOI: 10.3758/s13414-026-03245-4
Shira Tkacz-Domb, Erez Freud, Sarah Shomstein
Face processing is characterized by idiosyncratic gaze patterns, whereby certain individuals preferentially look at the eyes, while others look at the mouth. Here, we examined whether idiosyncratic gaze preferences toward the upper or lower face modulate recognition when key features of the faces are occluded (masked). Furthermore, we investigated whether attentional cues to the eyes or mouth facilitate performance differentially as a function of idiosyncratic gaze preferences. Using a separate free-viewing task, we assessed each participant's gaze preference index, indicating whether the participant primarily fixated the lower part of the face (down-lookers) or the upper part (up-lookers). Participants also completed the Glasgow Face Matching Test with masked or unmasked faces, and with attentional cues presented around either the eyes or the mouth. Performance was lower when the mouth was occluded than when it was visible, and this decrease was greater for participants who primarily fixated on the lower face in the separate free-viewing task than for those who fixated on the upper face. Additionally, the eye cue improved performance in the masked conditions. However, this effect diminished as lower-face preference increased, suggesting that down-lookers may not overcome their bias to fixate on the obscured region, despite the presence of attentional cues. Altogether, individual gaze preferences modulated recognition when faces were occluded, and predicted the degree to which attentional cues benefited face recognition.
Title: "Individual gaze preferences and attentional cues interact in masked and unmasked face recognition." (Attention Perception & Psychophysics, 88(4))
Pub Date: 2026-03-18 | DOI: 10.3758/s13414-026-03227-6
Jennifer C Magerl Fuller, Árni Kristjánsson, Alasdair Clarke, Árni Gunnar Ásgeirsson
To successfully orient ourselves within noisy visual environments, we must focus our attention on items of importance, ignoring sources of distraction. This selective attending is typically thought to be facilitated by templates, tuned towards current goals. However, in real-world scenes, the appearance of objects, such as their colour or luminance, varies greatly due to perceptual interpretation and environmental factors. Therefore, tuning attentional templates probabilistically may be more efficient than tuning them to precise values. This seems particularly important during continuous tasks that require the selection of multiple objects sharing certain properties. We investigated the effects of variability in target identity using a novel foraging task. Participants (N = 15) had to continuously select 30 target objects, drawn from a truncated Gaussian colour distribution, sampled from a linearized space of 48 isoluminant hues. We adapted a generative model and applied it to the data within a Bayesian multilevel framework. The model characterizes foraging as a sampling process without replacement and allows us to break foraging down into behavioural patterns that influence individuals' target selection, independent of the number of targets present. The modelling results demonstrate an increased likelihood of selection of more probable colour values in the scene. This likelihood maps onto the underlying probability distribution, illustrating how observers can acquire knowledge of the distribution's properties through foraging, beyond just the summary statistics.
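The stimulus-generation scheme described above — target colours drawn from a truncated Gaussian over a linearized space of 48 isoluminant hues — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the ±2 SD truncation point, and the circular wrap are assumptions.

```python
import numpy as np

def sample_target_hues(mean_idx, sd, n, n_hues=48, trunc=2.0, rng=None):
    """Draw n target hue indices from a truncated Gaussian over a
    linearized space of n_hues isoluminant hues (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    out = []
    while len(out) < n:
        x = rng.normal(mean_idx, sd)
        if abs(x - mean_idx) <= trunc * sd:      # truncate at ±trunc SD (assumed cutoff)
            out.append(int(round(x)) % n_hues)   # wrap, assuming a circular hue dimension
    return out

rng = np.random.default_rng(0)
trial_targets = sample_target_hues(mean_idx=24, sd=3.0, n=30, rng=rng)
# Hues near the distribution mean occur more often, so a forager tracking the
# distribution should select them with correspondingly higher likelihood.
```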
Title: "Generative modelling of continuous feature foraging reveals probabilistic representations of target distributions." (Attention Perception & Psychophysics, 88(4)) Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12999875/pdf/
Pub Date: 2026-03-14 | DOI: 10.3758/s13414-026-03237-4
Luyao Jiang, Chang Liu, Cheng Gao, Junyi Hao, Jun Ding
In everyday life, aesthetic liking largely depends on the fluency of stimulus-driven, default, rapid, and automatic processing. However, our understanding of how perceptual and conceptual fluency jointly shape aesthetic liking in automatic processing remains limited. In two experiments, the masked priming paradigm was employed to manipulate perceptual and conceptual fluency of target stimuli separately, and participants were instructed to rate liking for colored images of everyday objects based on their initial impressions. The results indicated that the masked matched contours and words significantly reduced response times for liking judgments of the target images and increased liking ratings, whereas mismatched contours and words had no significant effect. Both experiments additionally varied the target duration to investigate whether the effects of perceptual and conceptual priming were influenced by another manipulation of perceptual fluency. We found that the perceptual priming effect diminished with longer target duration, while the conceptual priming effect remained consistent. These findings provide direct evidence that both perceptual and conceptual fluency can enhance aesthetic liking in automatic processing, and their effects are dissociated and not interchangeable.
Title: "Aesthetic liking in automatic processing: Distinct effects of perceptual and conceptual fluency" (Attention Perception & Psychophysics, 88(3))
Pub Date: 2026-03-11 | DOI: 10.3758/s13414-026-03236-5
Brandon J. Carlos, Lindsay A. Santacroce, Benjamin J. Tamber-Rosenau
Working memory (WM) consolidation is the preservation of perceptual information to insulate it from distraction. A decision task following WM sample presentation retroactively disrupts consolidation, apparently regardless of whether the WM sample and decision task rely on the same representational formats. Critically, this representational-format-general interference suggests that consolidation entails “central” executive domain-general processing. However, the evidence for central interference is weak because verbal recoding of nonverbal samples is thought to be ubiquitous in WM. Typical decision tasks used to evoke interference also entail verbal materials, making it possible that the observed interference is really a product of competition for “peripheral” phonological storage resources. Moreover, some WM models suggest a direct connection between perceptual mechanisms and storage, without direct access of central processes to format-specific storage. Thus, it remains unknown whether WM consolidation entails central executive processing or could have a purely storage-buffer (verbal) locus. The current study embedded an established task measuring WM consolidation and its interruption by a decision task within a broader within-participants 2 × 2 factorial design, using a different pairing of WM sample and decision task representational formats in each task block. This allowed measurement of central processing contributions to consolidation (cross-format blocks), and evaluation of potential additional interference in format-specific storage buffers (same-format blocks). Interference in cross-format pairings did not increase for same-format pairings, supporting the view that WM consolidation is dependent on central processing. Subject-level data have been made publicly available on the Open Science Framework and can be accessed at the following link: https://osf.io/p387u/.
Title: "Does working memory consolidation rely on central processing?" (Attention Perception & Psychophysics, 88(3))
Pub Date: 2026-03-11 | DOI: 10.3758/s13414-026-03234-7
Lester C. Loschky, Maverick E. Smith, Prasanth Chandran, John P. Hutson, Tim J. Smith, Joseph P. Magliano
Your understanding of what you see now surely influences what you will look at next. Yet this simple concept has only recently begun to be systematically studied and elaborated within theoretical frameworks. The Scene Perception & Event Comprehension Theory (SPECT) distinguishes between front-end and back-end processes that occur while viewers perceive and comprehend dynamic real-world events. Front-end processes occur during each eye fixation (information extraction, attentional selection) and back-end processes occur in memory (the current event model, the stored event model, prior knowledge, and executive processes). We begin with a selective review of the scene perception literature on bottom-up and top-down effects on attentional selection in scenes, and highlight unanswered questions regarding the impact of the viewer’s event model–their understanding of what is happening now. Then, we outline the SPECT theoretical framework, and review empirical evidence about how the viewer’s current event model influences attentional selection. This influence is contrasted with those of visual saliency (e.g., color, brightness, motion) and task-driven control (i.e., goal setting, attentional control, inhibition). From this review, we specify a hierarchy of factors affecting attentional selection, in the order of task-driven control, visual saliency, and event models. We then propose several mechanisms by which the viewer’s event model influences attentional selection, and propose a systematic approach to investigating how that happens while watching dynamic scenes.
Title: "The role of event understanding in guiding attentional selection in real-world scenes: The Scene Perception & Event Comprehension Theory (SPECT)" (Attention Perception & Psychophysics, 88(3)) Open-access PDF: https://link.springer.com/content/pdf/10.3758/s13414-026-03234-7.pdf
Pub Date: 2026-03-10 | DOI: 10.3758/s13414-025-03217-0
Chloe Callahan-Flintoft, Patrick H. Cox, Emma M. Siritzky, Stephen R. Mitroff, Kelvin S. Oie, Dwight J. Kravitz
The human visual system adapts to statistical regularities in the environment to facilitate visual processing. While laboratory-based tasks make clear distinctions between how task-relevant and task-irrelevant visual information can guide this adaptation, such discretization is rarely available in the real world. As such, it remains unclear exactly what information the visual system tracks to flexibly adapt to a given task. The current study used a massive visual search dataset from the mobile game Airport Scanner. Effects of exposure over a range of more task-relevant (e.g., target presence) to less task-relevant (e.g., background context) features were analyzed in an omnibus model to predict response times in both target-present and target-absent trials. As in previous work (Kramer et al., Journal of Experimental Psychology: General, 151 (8), 1854, 2022), increased exposure to target-present trials significantly sped up the detection of targets and slowed the rejection of target-absent trials. Exposure to salient distractors reduced response times for target-present trials, potentially as a result of learned distractor suppression (Gaspelin & Luck, Trends in cognitive sciences, 22 (1), 79-92, 2018) or increased familiarity (Mruczek & Sheinberg, Perception & psychophysics, 67 (6), 1016-1031, 2005), but had no effect on target-absent trials. Exposure to background information decreased response times in both target-present and target-absent trials, with notable interactions between target and background exposure. Specifically, the effect of background information was more pronounced when target exposure was low, suggesting that less task-relevant context information is more likely to be tracked in the absence of more task-relevant information, namely, the presentation of targets. 
The findings highlight the importance of considering multiple sources of exposure in visual search tasks and demonstrate the value of large datasets in quantifying their complex interactions.
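The omnibus model's key pattern — both kinds of exposure speed responses, with the background benefit shrinking as target exposure grows — can be illustrated with a small simulated regression. Everything here is invented for illustration (variable names, coefficients, noise level); it is not the authors' dataset or analysis, only a sketch of how an interaction between two exposure counts can be estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-trial exposure counts (how often a target type or a
# background context has been seen before this trial).
target_exposure = rng.integers(0, 50, n).astype(float)
background_exposure = rng.integers(0, 50, n).astype(float)
# Simulated RTs (ms): both exposures speed responses (negative slopes),
# and a positive interaction makes the background benefit smaller
# when target exposure is already high.
rt = (900.0
      - 3.0 * target_exposure
      - 2.0 * background_exposure
      + 0.04 * target_exposure * background_exposure
      + rng.normal(0.0, 20.0, n))

# Ordinary least squares with an intercept and the interaction term.
X = np.column_stack([np.ones(n), target_exposure, background_exposure,
                     target_exposure * background_exposure])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
# beta[1], beta[2] recover the two negative exposure effects;
# beta[3] recovers the positive target-by-background interaction.
```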
Title: "Repeated exposure to task-relevant and task-irrelevant information – and their interaction – affect visual search performance" (Attention Perception & Psychophysics, 88(3)) Open-access PDF: https://link.springer.com/content/pdf/10.3758/s13414-025-03217-0.pdf
Pub Date: 2026-03-09 | DOI: 10.3758/s13414-026-03242-7
Connor Wessel, Cindy Zhang, Michael Schutz
Title: "Correction to: Amplitude envelope and subjective duration: Quantifying the role of decaying offsets in timing perception" (Attention Perception & Psychophysics, 88(3); correction notice, no abstract)
Pub Date : 2026-03-06DOI: 10.3758/s13414-026-03226-7
Anke Cajar, Jochen Laubrock
When searching visual scenes, we use low-level visual information from objects’ defining features such as color and luminance contrasts. What is the relative influence of color and luminance in saccade target selection? Basic perceptual research suggests that we are not very sensitive to peripheral color, yet color is thought to be an important basic feature guiding visual search. Previous gaze-contingent research shows that targets can be localized faster in color than in grayscale scenes; the availability of color in the visual periphery therefore does help visual search. However, object boundaries are typically defined by both color and luminance contrasts. Here we study the isolated roles of color and luminance during object-in-scene search by presenting either color-only or luminance-only contrasts in peripheral vision, using a gaze-contingent moving-window display with three window sizes. We found that peripheral target selection and search performance were more efficient with luminance contrasts, whereas color was used only sparingly beyond the parafovea. We conclude that color contrasts in peripheral vision are used efficiently in scene search only when they co-occur with luminance contrasts.
{"title":"Color needs luminance for visual selection during scene search","authors":"Anke Cajar, Jochen Laubrock","doi":"10.3758/s13414-026-03226-7","DOIUrl":"10.3758/s13414-026-03226-7","url":null,"abstract":"<div><p>When searching visual scenes, we use low-level visual information from objects’ defining features such as color and luminance contrasts. What is the relative influence of color and luminance in saccade target selection? Basic perceptual research suggests that we are not very sensitive to peripheral color, yet color is thought to be an important basic feature guiding visual search. Previous gaze-contingent research shows that targets can be localized faster in color than in grayscale scenes; the availability of color in the visual periphery therefore does help visual search. However, object boundaries are typically defined by both color and luminance contrasts. Here we study the isolated roles of color and luminance during object-in-scene search by presenting either color-only or luminance-only contrasts in peripheral vision, using a gaze-contingent moving-window display with three window sizes. We found that peripheral target selection and search performance were more efficient with luminance contrasts, whereas color was used only sparingly beyond the parafovea. We conclude that color contrasts in peripheral vision are used efficiently in scene search only when they co-occur with luminance contrasts.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 3","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-026-03226-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-03-05DOI: 10.3758/s13414-026-03225-8
Virginie Leclercq, Pom Charras
Recent theoretical considerations suggest that preparatory attention mechanisms might play a pivotal role in the anticipation of distractors by proactively suppressing spatial locations and/or specific features. However, this notion stems primarily from studies employing visual search paradigms with simultaneous displays, and recent research using successive displays and isolated distractors challenges this assumption. The efficacy of proactive suppression in setups involving isolated distractors is questioned, underscoring the need for further investigation. The current study seeks to investigate the factors influencing proactive suppression when anticipating isolated distractors in both spatial and temporal dimensions. Across three experiments, we manipulated distractor interference, spatial predictability, and temporal predictability. Results consistently showed that even highly interfering distractors, when fully predictable in time and space, remained difficult to proactively suppress. These findings suggest clear limitations of proactive attentional control mechanisms in the context of isolated distractor anticipation.
{"title":"Exploring spatial and temporal constraints in suppression of distractors","authors":"Virginie Leclercq, Pom Charras","doi":"10.3758/s13414-026-03225-8","DOIUrl":"10.3758/s13414-026-03225-8","url":null,"abstract":"<div><p>Recent theoretical considerations suggest that preparatory attention mechanisms might play a pivotal role in the anticipation of distractors by proactively suppressing spatial locations and/or specific features. However, this notion stems primarily from studies employing visual search paradigms with simultaneous displays, and recent research using successive displays and isolated distractors challenges this assumption. The efficacy of proactive suppression in setups involving isolated distractors is questioned, underscoring the need for further investigation. The current study seeks to investigate the factors influencing proactive suppression when anticipating isolated distractors in both spatial and temporal dimensions. Across three experiments, we manipulated distractor interference, spatial predictability, and temporal predictability. Results consistently showed that even highly interfering distractors, when fully predictable in time and space, remained difficult to proactively suppress. These findings suggest clear limitations of proactive attentional control mechanisms in the context of isolated distractor anticipation.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 3","pages":""},"PeriodicalIF":1.7,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147357410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}