Eccentricity determines the competition for attention
Pub Date: 2025-11-20 | DOI: 10.3758/s13414-025-03191-7 | Vol. 88(1)
Christian N. L. Olivers, Güven Kandemir, Elle van Heusden
It is widely assumed that attention to peripheral visual information is reduced, especially when such information competes with information closer to the fovea. However, existing evidence for such a peripheral attention deficit has suffered from several confounds. Here we reinvestigated how stimuli presented at different eccentricities compete for attention. Human observers saw two equally relevant orientations (Experiment 1) or colors (Experiment 2), presented at close and far eccentricities. The two patterns were presented sequentially (thus competing little for attention) or simultaneously (thus directly competing for attention), after which participants indicated the seen stimuli using a continuous report scale. Errors increased with eccentricity, but markedly more so when the items were presented simultaneously. Thus, in situations of competition for attention, the more peripheral item loses out. The findings are relevant not only for fundamental theories of vision, but also for applied situations in which detection of peripheral stimuli is crucial. The data and materials are available online (https://osf.io/m6xjk/).
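For readers unfamiliar with continuous report, the dependent measure is the deviation between the reported and the true feature value. The sketch below is a minimal illustration, not the authors' analysis code; the 180° orientation space, the variable names, and the simulated trial data are all assumptions. It shows how such errors are typically scored and compared across eccentricity and presentation conditions:

```python
import numpy as np

def orientation_error(reported_deg, true_deg, period=180.0):
    """Signed circular error for orientation (180-deg space);
    a 360-deg color wheel would use period=360."""
    diff = (np.asarray(reported_deg) - np.asarray(true_deg) + period / 2) % period - period / 2
    return diff

# Hypothetical trials: true and reported orientations plus condition labels.
rng = np.random.default_rng(0)
true = rng.uniform(0, 180, size=200)
reported = (true + rng.normal(0, 12, size=200)) % 180
eccentricity = rng.choice(["near", "far"], size=200)
presentation = rng.choice(["sequential", "simultaneous"], size=200)

err = np.abs(orientation_error(reported, true))
for ecc in ("near", "far"):
    for pres in ("sequential", "simultaneous"):
        sel = (eccentricity == ecc) & (presentation == pres)
        print(ecc, pres, round(err[sel].mean(), 1), "deg mean absolute error")
```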
{"title":"Eccentricity determines the competition for attention","authors":"Christian N. L. Olivers, Güven Kandemir, Elle van Heusden","doi":"10.3758/s13414-025-03191-7","DOIUrl":"10.3758/s13414-025-03191-7","url":null,"abstract":"<div><p>It is widely assumed that attention for peripheral visual information is reduced, especially when such information competes with information closer to the fovea. However, existing evidence for such a peripheral attention deficit has suffered from several confounds. Here we reinvestigated how stimuli presented at different eccentricities compete for attention. Human observers saw two equally relevant orientations (Experiment 1) or colors (Experiment 2), presented at close and far eccentricities. The two patterns were presented sequentially (thus competing little for attention), or simultaneously (thus directly competing for attention), after which participants indicated the seen stimuli using a continuous report scale. Errors increased with eccentricity, but increased markedly more so when the items were presented simultaneously. Thus, in situations of competition for attention, the more peripheral item loses out. The findings are not only relevant for fundamental theories of vision, but also for applied situations in which detection of peripheral stimuli is crucial. The data and materials are available online (https://osf.io/m6xjk/).</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effect of sensorineural hearing loss on auditory stream segregation measured as a function of interstimulus interval
Pub Date: 2025-11-19 | DOI: 10.3758/s13414-025-03181-9 | Vol. 88(1)
Saransh Jain, Vijaya Kumar Narne, Vasuki Dattakumar, Sunil Kumar Ravi, Brian C. J. Moore
Auditory stream segregation was measured using ABA-ABA sequences in which A and B were both the vowel |α| but differed in fundamental frequency by ΔF0. The aim was to assess the impact of ΔF0 and inter-stimulus interval (ISI) on stream segregation for normal-hearing (NH) and hearing-impaired (HI) participants with mild-to-moderate hearing loss. For each ISI, ΔF0 was varied using an adaptive procedure to estimate the temporal coherence boundary (TCB, the ΔF0 for which two streams were mostly reported) and the fission boundary (FB, the ΔF0 for which one stream was mostly reported). The TCB and FB were smallest for an ISI near 50 ms. The increase for smaller ISIs may reflect a tendency for the sequences to be perceived as a single frequency-modulated sound, while the increase for larger ISIs may reflect the effectiveness of F0 differences in promoting stream segregation when a sequence of discrete sounds is perceived. The TCBs were higher for the HI than for the NH participants for all ISIs, while the FBs were higher for the HI participants mainly for ISIs above 50 ms. The higher TCB and FB values for the HI participants probably reflect the reduced salience of fundamental frequency differences.
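The adaptive procedure is not specified in the abstract. As a hedged illustration of the general idea only, a simple 1-up/1-down staircase on ΔF0 (here in semitones, with toy step sizes, stopping rule, and a simulated listener, all assumptions) could track a boundary such as the TCB or FB:

```python
import random

def run_staircase(report_two_streams, start=6.0, step=1.0, n_reversals=8):
    """Minimal 1-up/1-down staircase on delta-F0 (semitones).
    The real study's rule, step sizes, and stopping criterion are assumptions."""
    delta_f0 = start
    direction = None
    reversals = []
    while len(reversals) < n_reversals:
        two_streams = report_two_streams(delta_f0)   # listener's response on this trial
        new_direction = -1 if two_streams else +1    # shrink if segregated, grow if fused
        if direction is not None and new_direction != direction:
            reversals.append(delta_f0)
        direction = new_direction
        delta_f0 = max(0.0, delta_f0 + direction * step)
    return sum(reversals[-6:]) / 6                   # boundary = mean of last reversals

# Toy simulated listener: reports two streams above a 4-semitone boundary (noisy).
random.seed(1)
boundary = run_staircase(lambda d: d + random.gauss(0, 0.5) > 4.0)
print(f"estimated boundary ~ {boundary:.2f} semitones")
```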
{"title":"The effect of sensorineural hearing loss on auditory stream segregation measured as a function of interstimulus interval","authors":"Saransh Jain, Vijaya Kumar Narne, Vasuki Dattakumar, Sunil Kumar Ravi, Brian C. J. Moore","doi":"10.3758/s13414-025-03181-9","DOIUrl":"10.3758/s13414-025-03181-9","url":null,"abstract":"<div><p>Auditory stream segregation was measured using ABA-ABA sequences where A and B were the vowel |α| and A and B differed in fundamental frequency by ΔF0. The aim was to assess the impact of ΔF0 and inter-stimulus interval (ISI) on stream segregation for normal-hearing (NH) and hearing-impaired (HI) participants with mild-to-moderate hearing loss. For each ISI, ΔF0 was varied using an adaptive procedure to estimate the temporal coherence boundary (TCB, the ΔF0 for which two streams were mostly reported) and the fission boundary (FB, the ΔF0 for which one stream was mostly reported). The TCB and FB were smallest for an ISI near 50 ms. The increase for smaller ISIs may reflect a tendency for the sequences to be perceived as a single frequency-modulated sound, while the increase for larger ISIs may reflect the effectiveness of F0 differences in promoting stream segregation when a sequence of discrete sounds is perceived. The TCBs were higher for the HI than for the NH participants for all ISIs, while the FBs were higher for the HI participants mainly for ISIs above 50 ms. The higher TCB and FB values for the HI participants probably reflect the reduced salience of fundamental frequency differences.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145558405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive suppression of threat-history stimuli
Pub Date: 2025-11-12 | DOI: 10.3758/s13414-025-03161-z | Vol. 88(1)
Jingqing Nian, Yu Zhang, Yu Luo
Previous studies have found evidence of adaptive suppression mechanisms for physically salient stimuli. However, it remains unclear whether a similar mechanism exists for threat-history stimuli. This study used a threat conditioning task to generate stimuli with and without a history of threat. In the subsequent visual search task, the spatial probability of distractors was manipulated to examine the influence of threat-history stimuli on distractor suppression. The results showed that responses were slower when the threat-history distractor appeared at low-probability locations compared to the no-threat-history distractor, whereas no such difference was observed at high-probability locations. Furthermore, the learned suppression effect for the threat-history distractor was significantly larger than that for the no-threat-history distractor. Our findings suggest that although threat-history distractors readily capture attention at low-probability locations, they remain subject to the same “blanket” suppression at high-probability locations. This pattern demonstrates that statistical learning adaptively modulates attentional selection for threatening stimuli.
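The learned suppression effect referred to here is commonly computed as the reaction-time cost at low-probability relative to high-probability distractor locations, separately for each distractor type. A minimal sketch with hypothetical mean RTs (not the authors' data):

```python
# Hypothetical mean RTs (ms) by distractor type and distractor-location probability.
mean_rt = {
    ("threat",    "low-probability"):  720.0,
    ("threat",    "high-probability"): 655.0,
    ("no-threat", "low-probability"):  695.0,
    ("no-threat", "high-probability"): 652.0,
}

# Learned suppression effect: RT(low-probability) - RT(high-probability).
for dist in ("threat", "no-threat"):
    effect = mean_rt[(dist, "low-probability")] - mean_rt[(dist, "high-probability")]
    print(f"{dist}-history distractor: suppression effect = {effect:.0f} ms")
```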
{"title":"Adaptive suppression of threat-history stimuli","authors":"Jingqing Nian, Yu Zhang, Yu Luo","doi":"10.3758/s13414-025-03161-z","DOIUrl":"10.3758/s13414-025-03161-z","url":null,"abstract":"<div><p>Previous studies have found evidence of adaptive suppression mechanisms for physically salient stimuli. However, it remains unclear whether a similar mechanism exists for threat-history stimuli. This study used a threat conditioning task to generate stimuli with and without a history of threat. In the subsequent visual search task, the spatial probability of distractors was manipulated to examine the influence of threat-history stimuli on distractor suppression. The results showed that response was slower when the threat-history distractor appeared at low-probability locations compared to the no-threat-history distractor, whereas no such difference was observed at high-probability locations. Furthermore, the learned suppression effect for the threat-history distractor was significantly higher than that for the no-threat-history distractor. Our findings suggest that although threat-history distractors readily capture attention at low-probability locations, they remain subject to the same “blanket” suppression at high-probability locations. This pattern demonstrates that statistical learning adaptively modulates attentional selection for threatening stimuli.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145493434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electrophysiological correlates of temporal preparation across different temporal contexts and variable foreperiods
Pub Date: 2025-10-15 | DOI: 10.3758/s13414-025-03140-4 | Vol. 87(8), pp. 2455–2464
Liqing Liu, Jiaqi Han, Qing Liu, Zhihui Zhao, Xiaoqi Wang
Temporal prediction refers to the conscious perception of the passage of time and the ability to predict when a stimulus will occur. Different types of temporal prediction and varying foreperiods have been found to affect motor performance; however, the underlying neural mechanisms of temporal prediction remain underexplored. In the current study, externally and internally driven temporal predictions, combined with variable foreperiods, were therefore examined to explore the neural mechanisms underlying the effects of temporal cues on motor performance. The results showed that reaction times were significantly faster in the temporal-cue condition than in the neutral-cue condition at a foreperiod of 500 ms. Additionally, the amplitude of the CNV (contingent negative variation) was greater in the temporal-cue condition. These results indicate that motor performance can benefit from external temporal information when preparation time is relatively short. The CNV may reflect movement preparation and anticipatory attention to the target stimulus.
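As a rough sketch of how a CNV amplitude can be quantified from epoched EEG (plain NumPy averaging over an assumed post-cue window; the authors' actual preprocessing, channels, and analysis window are not specified here, and the data below are stand-ins):

```python
import numpy as np

# Hypothetical baseline-corrected epochs (microvolts): (n_trials, n_samples),
# sampled at 500 Hz, time zero at cue onset, foreperiod of 500 ms.
rng = np.random.default_rng(2)
sfreq = 500
times = np.arange(-0.2, 0.6, 1 / sfreq)           # -200 ms to +600 ms around the cue
epochs = rng.normal(0, 5, size=(60, times.size))  # stand-in data, not real EEG

# The CNV is a slow negative deflection preceding the imperative stimulus; here
# its amplitude is taken as the mean voltage in an assumed 300-500 ms window.
window = (times >= 0.3) & (times <= 0.5)
cnv_amplitude = epochs[:, window].mean(axis=1)    # one value per trial
print(f"mean CNV amplitude: {cnv_amplitude.mean():.2f} uV")
```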
{"title":"Electrophysiological correlates of temporal preparation across different temporal contexts and variable foreperiods","authors":"Liqing Liu, Jiaqi Han, Qing Liu, Zhihui Zhao, Xiaoqi Wang","doi":"10.3758/s13414-025-03140-4","DOIUrl":"10.3758/s13414-025-03140-4","url":null,"abstract":"<div><p>Temporal prediction refers to the conscious perception of the passage of time and the ability to predict when a stimulus will occur. It has been found that different types of temporal predictions and varying foreperiods could impact motor performance. However, there is a lack of discussion on the underlying neural mechanism of temporal prediction. Therefore, in the current study, externally and internally driven temporal predictions, combined with variable foreperiods, were examined to explore the neural mechanism of the effects of temporal cues on motor performance. The results showed that reaction times were significantly faster in the temporal cue condition compared to the neutral cue condition with a foreperiod of 500 ms. Additionally, the amplitude of CNV (contingent negative variation) was greater in the temporal cue condition. These results indicate that motor performance could benefit from the external temporal information with a relatively short preparation time. CNV could reflect movement preparation and anticipatory attention to the target stimulus.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 8","pages":"2455 - 2464"},"PeriodicalIF":1.7,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145304643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pouring, scooping, bouncing, rolling, twisting, and rotating: Does spontaneous categorical perception of dynamic event types reflect verbal encoding or visual processing?
Pub Date: 2025-09-29 | DOI: 10.3758/s13414-025-03141-3 | Vol. 87(8), pp. 2430–2441
Huichao Ji, Brian J. Scholl
What we see encompasses not only lower-level properties (such as a ball’s shape or motion) but also categorical events (such as a ball bouncing vs. rolling). Recent work demonstrates that such categorical perception occurs spontaneously during passive scene viewing: observers are better able to identify changes in static or dynamic scenes when the change involves different “visual verbs” (e.g., pouring vs. scooping), even when the within-type changes (e.g., across two different scenes of pouring) are objectively greater in magnitude. Might this occur as a part of visual processing itself, even without explicit verbal encoding? To find out, we discouraged verbal labeling via explicit instructions, a concurrent verbal suppression task, or both. In all cases, we continued to observe robust cross-event-type advantages for change detection, while carefully controlling lower-level visual features—in contrasts including pouring versus scooping, bouncing versus rolling, and rotating versus twisting. This suggests that we spontaneously see the world in terms of different “visual verbs” even without explicit verbal labeling.
{"title":"Pouring, scooping, bouncing, rolling, twisting, and rotating: Does spontaneous categorical perception of dynamic event types reflect verbal encoding or visual processing?","authors":"Huichao Ji, Brian J. Scholl","doi":"10.3758/s13414-025-03141-3","DOIUrl":"10.3758/s13414-025-03141-3","url":null,"abstract":"<div><p>What we see encompasses not only lower-level properties (such as a ball’s shape or motion) but also categorical events (such as a ball bouncing vs. rolling). Recent work demonstrates that such categorical perception occurs spontaneously during passive scene viewing: observers are better able to identify changes in static or dynamic scenes when the change involves different “visual verbs” (e.g., pouring vs. scooping), even when the within-type changes (e.g., across two different scenes of pouring) are objectively greater in magnitude. Might this occur as a part of visual processing itself, even without explicit verbal encoding? To find out, we discouraged verbal labeling via explicit instructions, a concurrent verbal suppression task, or both. In all cases, we continued to observe robust cross-event-type advantages for change detection, while carefully controlling lower-level visual features—in contrasts including pouring versus scooping, bouncing versus rolling, and rotating versus twisting. This suggests that we spontaneously see the world in terms of different “visual verbs” even without explicit verbal labeling.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 8","pages":"2430 - 2441"},"PeriodicalIF":1.7,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145193973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combined conceptual and perceptual control of visual attention in search for real-world objects
Pub Date: 2025-09-25 | DOI: 10.3758/s13414-025-03116-4
Brett Bahle, Kurt Winsler, John E Kiat, Steven J Luck
When we search for an object in the natural visual environment, we sometimes know exactly what the object looks like. At other times, however, we know only the category of the object. For example, if we are looking for our own bath towel, we might know that it is brown and is folded into a rectangle. However, if we are looking for a towel in a friend's house, we might not know its color or whether it is folded or lying in a clump. Consequently, we may sometimes be able to use specific perceptual features to guide search, but some search tasks are so conceptual in nature that the relevant visual features are difficult to specify. Here, we found that eye-movement patterns during visual search could be predicted by perceptual dimensions derived from crowd-sourced data (THINGS), but only when observers had previously seen the specific target object. When only the category of the desired object was known (because the observer had never seen the specific target), eye-movement patterns were predicted by conceptual dimensions derived from a natural language processing model (ConceptNet), and perceptual features had no significant predictive ability once the conceptual information was statistically controlled. In addition, as observers gained experience searching for a specific exemplar of a category, they became progressively more reliant on perceptual features and less reliant on conceptual features. Together, these findings provide novel evidence that conceptual information can influence search, especially when the precise perceptual features of an object are unknown.
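The key analytic move, testing whether perceptual features still predict fixation behavior once conceptual features are statistically controlled, amounts to a hierarchical regression. The sketch below is an illustrative reconstruction only; the predictors are random stand-ins, not the actual THINGS or ConceptNet dimensions, and the dependent variable is a toy fixation measure.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(3)
n_objects = 200
conceptual = rng.normal(size=(n_objects, 5))   # stand-in ConceptNet-derived dimensions
perceptual = rng.normal(size=(n_objects, 5))   # stand-in THINGS-derived dimensions
fixation = conceptual @ rng.normal(size=5) + rng.normal(size=n_objects)  # toy DV

def r_squared(X, y):
    """Ordinary least squares R^2 with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Step 1: conceptual predictors alone; step 2: add perceptual predictors and
# check how much unique variance they still explain.
r2_conceptual = r_squared(conceptual, fixation)
r2_full = r_squared(np.column_stack([conceptual, perceptual]), fixation)
print(f"conceptual only: R^2 = {r2_conceptual:.3f}")
print(f"perceptual added: delta R^2 = {r2_full - r2_conceptual:.3f}")
```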
{"title":"Combined conceptual and perceptual control of visual attention in search for real-world objects.","authors":"Brett Bahle, Kurt Winsler, John E Kiat, Steven J Luck","doi":"10.3758/s13414-025-03116-4","DOIUrl":"10.3758/s13414-025-03116-4","url":null,"abstract":"<p><p>When we search for an object in the natural visual environment, we sometimes know exactly what the object looks like. At other times, however, we know only the category of the object. For example, if we are looking for our own bath towel, we might know that it is brown and is folded into a rectangle. However, if we are looking for a towel in a friend's house, we might not know its color or whether it is folded or lying in a clump. Consequently, we may sometimes be able to use specific perceptual features to guide search, but some search tasks are so conceptual in nature that the relevant visual features are difficult to specify. Here, we found that eye-movement patterns during visual search could be predicted by perceptual dimensions derived from crowd-sourced data (THINGS), but only when observers had previously seen the specific target object. When only the category of the desired object was known (because the observer had never seen the specific target), eye-movement patterns were predicted by conceptual dimensions derived from a natural language processing model (ConceptNet), and perceptual features had no significant predictive ability once the conceptual information was statistically controlled. In addition, as observers gained experience searching for a specific exemplar of a category, they became progressively more reliant on perceptual features and less reliant on conceptual features. Together, these findings provide novel evidence that conceptual information can influence search, especially when the precise perceptual features of an object are unknown.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":"59"},"PeriodicalIF":1.7,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12864220/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145151947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shape and word parts combine linearly in the Bouba–Kiki effect
Pub Date: 2025-09-02 | DOI: 10.3758/s13414-025-03151-1 | Vol. 87(8), pp. 2562–2578
Ananya Passi, S. P. Arun
Languages have evolved in part due to cross-modal associations between shape and sound. A famous example is the Bouba–Kiki effect, wherein humans associate words like bouba/kiki with round/angular shapes. How does the Bouba–Kiki effect work for natural words and shapes that contain a mixture of features? If the effect is holistic, the effect for a composite stimulus would not be explainable from its parts. If the effect is compositional, it would be. Here we provide evidence for the latter possibility. In Experiments 1 and 2, we standardized bouba-like and kiki-like shapes and words for use in subsequent experiments. In Experiments 3–5, we created composite shapes/words by combining bouba-like and kiki-like parts. In all experiments, the Bouba–Kiki effect strength for composite shapes/words was predicted remarkably well as a linear sum of the contributions of the constituent parts. Our results greatly simplify our understanding of the Bouba–Kiki effect, leaving little room for holism.
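The compositional claim, that a composite's effect strength is a linear sum of part contributions, can be illustrated with a small regression of composite effects on the effects of their parts. The values below are toy numbers; the study's stimuli and measures are not reproduced here.

```python
import numpy as np
from numpy.linalg import lstsq

# Hypothetical effect strengths for the two parts of each composite shape/word,
# plus a measured composite effect generated as (roughly) their weighted sum.
rng = np.random.default_rng(4)
part_a = rng.uniform(-1, 1, size=40)
part_b = rng.uniform(-1, 1, size=40)
composite = 0.55 * part_a + 0.45 * part_b + rng.normal(0, 0.05, size=40)

# Fit composite effect as intercept + weighted sum of part effects.
X = np.column_stack([np.ones(40), part_a, part_b])
beta, *_ = lstsq(X, composite, rcond=None)
pred = X @ beta
r = np.corrcoef(pred, composite)[0, 1]
print(f"part weights: {beta[1]:.2f}, {beta[2]:.2f}; linear-sum fit r = {r:.2f}")
```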
{"title":"Shape and word parts combine linearly in the Bouba–Kiki effect","authors":"Ananya Passi, S. P. Arun","doi":"10.3758/s13414-025-03151-1","DOIUrl":"10.3758/s13414-025-03151-1","url":null,"abstract":"<div><p>Languages have evolved in part due to cross-modal associations between shape and sound. A famous example is the Bouba–Kiki effect, wherein humans associate words like bouba/kiki to round/angular shapes. How does the Bouba–Kiki effect work for natural words and shapes that contain a mixture of features? If the effect is holistic, the effect for a composite stimulus would not be explainable using the parts. If the effect is compositional, it will be. Here we provide evidence for the latter possibility. In Experiments 1 and 2, we standardized bouba-like and kiki-like shapes and words for use in subsequent experiments. In Experiments 3–5, we created composite shapes/words by combining bouba-like & kiki-like parts. In all experiments, the Bouba–Kiki effect strength for composite shapes/words was predicted remarkably well as a linear sum of the contributions of the constituent parts. Our results greatly simplify our understanding of the Bouba–Kiki effect, leaving little room for holism.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 8","pages":"2562 - 2578"},"PeriodicalIF":1.7,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144979417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Second-order facial features are processed analytically in composite faces
Pub Date: 2025-08-29 | DOI: 10.3758/s13414-025-03144-0 | Vol. 87(8), pp. 2388–2416
Xue Jun Cheng, Daniel R. Little
In contrast to claims of holistic processing, upright aligned composite face morphs were recently shown to be processed in the same manner as inverted or misaligned composite face morphs (Cheng et al., 2018, Journal of Experimental Psychology: Learning, Memory and Cognition, 44, 833–862). In the present paper, we replicate that work using a set of schematic faces that vary second-order features (e.g., lip height and eye separation) in the top and bottom halves of the schematic face. We find that the present stimuli show the hallmarks of holistic processing in a complete composite face task, but differ from composite face morphs in that the best-fitting MDS metric is more commensurate with an assumption of integrality (i.e., Euclidean distance). Nevertheless, we also find that, as with morph faces, the processing of upright aligned and upright misaligned faces is consistent with a mixture of serial and parallel processing. Importantly, we found little evidence of any strong holistic pooling of the top and bottom face halves into a single object. These results remain consistent with the idea that composite faces are not processed differently from other objects with separable dimensions, but instead that composite faces allow more parallel processing when aligned than when misaligned. Data and code are available from: http://github.com/knowlabUnimelb/SCHEMATICFACERULES.
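The metric question, whether similarity among these faces is better captured by a Euclidean (integral) or a city-block (separable) distance, comes down to which distance over the two facial dimensions better predicts observed dissimilarities. A toy sketch with simulated dissimilarities rather than the authors' data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Two second-order dimensions (e.g., lip height, eye separation) for 12 hypothetical faces.
faces = rng.uniform(0, 1, size=(12, 2))

def pairwise_distances(points, metric):
    """All pairwise distances under a city-block or Euclidean metric."""
    dists = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            diff = np.abs(points[i] - points[j])
            dists.append(diff.sum() if metric == "cityblock" else np.sqrt((diff ** 2).sum()))
    return np.array(dists)

# Simulated "observed" dissimilarities generated from a Euclidean metric plus noise.
observed = pairwise_distances(faces, "euclidean")
observed = observed + rng.normal(0, 0.02, size=observed.size)

for metric in ("euclidean", "cityblock"):
    r = np.corrcoef(pairwise_distances(faces, metric), observed)[0, 1]
    print(f"{metric}: correlation with observed dissimilarities r = {r:.3f}")
```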
{"title":"Second-order facial features are processed analytically in composite faces","authors":"Xue Jun Cheng, Daniel R. Little","doi":"10.3758/s13414-025-03144-0","DOIUrl":"10.3758/s13414-025-03144-0","url":null,"abstract":"<div><p>In contrast to claims of holistic processing, upright aligned composite face morphs were recently shown to be processed in the same manner as inverted or misaligned composite face morphs (Cheng et al. 2018. <i>Journal of Experimental Psychology: Learning, Memory and Cognition, 44</i>, 833–862). In the present paper, we replicate that work, using a set of schematic faces which vary second-order features (e.g., lip height and eye separation) in the top and bottom halves of the schematic face. We find that the present stimuli show the hallmarks of holistic processing in a complete composite face task, but differ from composite face morphs in that the best fitting MDS metric is more commensurate with an assumption of integrality (i.e., Euclidean distance). Nevertheless, we also find that, as with morph faces, the processing of upright aligned and upright misaligned faces is consistent with a mixture of serial and parallel processing. Importantly, we found little evidence of any strong holistic pooling of the top and bottom face halves into a single object. These results remain consistent with the idea that composite faces are not processed differently from other objects with separable dimensions but instead that composite faces allow more parallel processing when aligned than when misaligned. Data and code are available from: http://github.com/knowlabUnimelb/SCHEMATICFACERULES.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 8","pages":"2388 - 2416"},"PeriodicalIF":1.7,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03144-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144979407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Steering in the dark: The impact of environmental luminance on driver behavior through optical flow analysis
Pub Date: 2025-08-26 | DOI: 10.3758/s13414-025-03146-y | Vol. 87(8), pp. 2417–2429
Jie Wang, Jiangtong Li, Yi Xiao, Kang Song
The visual perception and steering behavior of drivers are known to be influenced by environmental lighting, but the underlying perceptual mechanisms, particularly the role of optical flow under low-luminance conditions, remain insufficiently understood. In a simulated driving experiment, 32 participants were exposed to five controlled luminance levels while their eye movements and driving performance were recorded. The results indicate that lower environmental luminance leads to prolonged gaze duration, a wider distribution of gaze points, and an increase in lateral steering errors. At moderate luminance, drivers exhibited enhanced optical flow perception and improved steering accuracy. Under low luminance, however, degraded optical flow weakened the coupling between gaze and self-motion and caused a misalignment between gaze and vehicle motion, leading to reduced steering accuracy. These findings advance previous work by demonstrating that luminance not only affects gaze behavior but also modulates visual perception through its impact on optical flow processing. These insights may support the development of adaptive driver training programs and human-centered driver assistance systems that respond to perceptual challenges in low-luminance environments.
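As a rough illustration of two of the behavioral measures mentioned, gaze-point dispersion and lateral steering error, computed here from hypothetical time series rather than the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical samples: horizontal/vertical gaze position (deg) and
# lane-relative lateral position of the vehicle (m).
gaze = rng.normal(0, 2.5, size=(1000, 2))
lateral_position = rng.normal(0, 0.3, size=1000)

# Gaze dispersion: root-mean-square distance of gaze points from their centroid.
dispersion = np.sqrt(((gaze - gaze.mean(axis=0)) ** 2).sum(axis=1).mean())

# Lateral steering error: standard deviation of lateral lane position (SDLP).
sdlp = lateral_position.std(ddof=1)

print(f"gaze dispersion = {dispersion:.2f} deg, SDLP = {sdlp:.2f} m")
```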
{"title":"Steering in the dark: The impact of environmental luminance on driver behavior through optical flow analysis","authors":"Jie Wang, Jiangtong Li, Yi Xiao, Kang Song","doi":"10.3758/s13414-025-03146-y","DOIUrl":"10.3758/s13414-025-03146-y","url":null,"abstract":"<div><p>The visual perception and steering behavior of drivers are known to be influenced by environmental lighting, but the underlying perception mechanisms, particularly the role of optical flow under low-luminance conditions, remain insufficiently understood. In a simulated driving experiment, 32 participants were exposed to five controlled luminance levels while their eye movements and driving performance were recorded. The results indicate that lower environmental luminance leads to prolonged gaze duration, a wider distribution of gaze points, and an increase in lateral steering errors. At moderate luminance, drivers exhibited enhanced optical flow perception and improved steering accuracy. However, under low luminance, degraded optical flow weakened the coupling between gaze and self-motion, caused a misalignment between gaze and vehicle motion, leading to reduced steering accuracy. These findings advance previous work by demonstrating that luminance not only affects gaze behavior but also modulates visual perception through its impact on optical flow processing. These insights may support the development of adaptive driver training programs and human-centered driver assistance systems that respond to perceptual challenges in low-luminance environments.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 8","pages":"2417 - 2429"},"PeriodicalIF":1.7,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144979441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction to: Do “auditory” and “visual” time really feel the same? Effects of stimulus modality on duration and passage-of-time judgements
Pub Date: 2025-08-26 | DOI: 10.3758/s13414-025-03153-z | Vol. 87(8), p. 2579
Daniel Bratzke
{"title":"Correction to: Do “auditory” and “visual” time really feel the same? Effects of stimulus modality on duration and passage-of-time judgements","authors":"Daniel Bratzke","doi":"10.3758/s13414-025-03153-z","DOIUrl":"10.3758/s13414-025-03153-z","url":null,"abstract":"","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 8","pages":"2579 - 2579"},"PeriodicalIF":1.7,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03153-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144979031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}