Re-examining electrophysiological evidence for proactive suppression of salient visual distractors
Pub Date: 2025-11-26 | DOI: 10.3758/s13414-025-03180-w
John J. McDonald, Daniel Tay, Rebecca Carson
Salient-but-irrelevant color singletons often elicit a positive component in the event-related potential (the PD) rather than a negative component associated with attentional selection (the N2pc). The positivity is often assumed to reflect inhibitory control processes that prevent salience-driven distraction, particularly when the positivity emerges before the time range of the N2pc. To be certain that this “early PD” is associated with inhibition, it is necessary to show that the positivity is absent when participants search for the color singleton. Here, we replicated a seminal letter-search task in which a singleton distractor was found to elicit an early positivity (Experiment 1) and then instructed participants to detect the presence of the same singleton (Experiment 2). We discovered that the early positivity is present both when participants ignored the singleton and when they searched for the singleton. These results suggest that the early positivity is associated with salience processing rather than inhibition that prevents distraction.
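For context on how such lateralized components are typically measured: the PD and N2pc are defined as contralateral-minus-ipsilateral voltage differences at posterior electrodes (commonly PO7/PO8) relative to the side of the singleton. The Python sketch below illustrates only that generic computation on made-up data; the electrode names, array sizes, and values are assumptions, not the authors' analysis pipeline.

```python
import numpy as np

# Minimal sketch (not the authors' pipeline): a lateralized component such as
# the PD or N2pc is typically quantified as the contralateral-minus-ipsilateral
# voltage difference at posterior electrodes relative to the singleton's side.

def lateralized_difference(erp_left, erp_right, singleton_side):
    """erp_left / erp_right: 1-D voltage arrays (time points) from left/right
    posterior electrodes (e.g., PO7/PO8); singleton_side: 'left' or 'right'."""
    if singleton_side == "left":
        contra, ipsi = erp_right, erp_left   # right hemisphere is contralateral
    else:
        contra, ipsi = erp_left, erp_right
    return contra - ipsi  # positive deflections are PD-like, negative N2pc-like

# Toy averaged waveforms: 200 samples (e.g., 0-400 ms at 500 Hz), arbitrary units
rng = np.random.default_rng(0)
erp_po7 = rng.normal(0.0, 0.1, 200)
erp_po8 = rng.normal(0.0, 0.1, 200)
diff_wave = lateralized_difference(erp_po7, erp_po8, singleton_side="left")
print(diff_wave[:5])
```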
{"title":"Re-examining electrophysiological evidence for proactive suppression of salient visual distractors","authors":"John J. McDonald, Daniel Tay, Rebecca Carson","doi":"10.3758/s13414-025-03180-w","DOIUrl":"10.3758/s13414-025-03180-w","url":null,"abstract":"<div><p>Salient-but-irrelevant color singletons often elicit a positive component in the event-related potential (the P<sub>D</sub>) rather than a negative component associated with attentional selection (the N2pc). The positivity is often assumed to reflect inhibitory control processes that prevent salience-driven distraction, particularly when the positivity emerges before the time range of the N2pc. To be certain that this “early P<sub>D</sub>” is associated with inhibition, it is necessary to show that the positivity is absent when participants search for the color singleton. Here, we replicated a seminal letter-search task in which a singleton distractor was found to elicit an early positivity (Experiment 1) and then instructed participants to detect the presence of the same singleton (Experiment 2). We discovered that the early positivity is present both when participants ignored the singleton and when they searched for the singleton. These results suggest that the early positivity is associated with salience processing rather than inhibition that prevents distraction.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03180-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Localizing structure in individual differences: A visual illusion case study
Pub Date: 2025-11-26 | DOI: 10.3758/s13414-025-03156-w
Mahbod Mehrvarz, Hrithik Popat, Jeffrey N. Rouder
Are people who are susceptible to one illusion also susceptible to others? Previous research has shown small correlations, but might small values reflect attenuation from measurement error from trial-to-trial variation? To assess measurement error, we develop a set of novel data visualizations and hierarchical models. Data from 149 participants on two variants of the five illusions were collected using an adjustment paradigm. The results showed low trial noise and strong between-subject variability (e.g., signal-to-noise ratio ≈ 1.14, reliability ≈ 0.93). Correlations across illusions are low, around 0.22 ± 0.07. A Bayesian hierarchical analysis reveals minimal attenuation from measurement error in these values. Though correlations are low, latent variable analysis reveals a common latent factor that loads on all tasks and explains about 23.3% of the variance in illusion susceptibility.
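As a quick check of the attenuation argument, the classical disattenuation formula r_true = r_obs / sqrt(rel_x * rel_y) can be applied to the reported values; the sketch below is only that arithmetic, using the approximate numbers quoted above, not a re-analysis of the data.

```python
import math

# Spearman's correction for attenuation: an observed correlation equals the
# latent correlation scaled by the square root of the two measures'
# reliabilities. With reliabilities near 0.93, the correction barely moves the
# estimate, which is the sense in which attenuation is "minimal" here.

def disattenuate(r_observed, reliability_x, reliability_y):
    return r_observed / math.sqrt(reliability_x * reliability_y)

r_obs = 0.22   # typical cross-illusion correlation quoted above
rel = 0.93     # approximate reliability quoted above
print(round(disattenuate(r_obs, rel, rel), 3))  # ~0.237, still a low correlation
```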
{"title":"Localizing structure in individual differences: A visual illusion case study","authors":"Mahbod Mehrvarz, Hrithik Popat, Jeffrey N. Rouder","doi":"10.3758/s13414-025-03156-w","DOIUrl":"10.3758/s13414-025-03156-w","url":null,"abstract":"<div><p>Are people who are susceptible to one illusion also susceptible to others? Previous research has shown small correlations, but might small values reflect attenuation from measurement error from trial-to-trial variation? To assess measurement error, we develop a set of novel data visualizations and hierarchical models. Data from 149 participants on two variants of the five illusions were collected using an adjustment paradigm. The results showed low trial-noise and strong between-subject variability (e.g., signal-to-noise ratio <span>(approx 1.14)</span>, reliability <span>(approx 0.93)</span>). Correlations across illusions are low, around <span>(0.22 pm 0.07)</span>. A Bayesian hierarchical analysis reveals minimal attenuation from measurement error in these values. Though correlations are low, latent variable analysis reveals a common latent factor that loads on all tasks and explains about 23.3% of the variance in illusion susceptibility.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The perceptual average in ensemble representation: Neither perceptual nor an average
Pub Date: 2025-11-26 | DOI: 10.3758/s13414-025-03187-3
Jacob Zepp, Chad Dubé
This report targets the claim that gist representations of visual stimuli, called “ensemble averages”, are perceptual representations of statistics pertaining to stimuli. We report predictions of a mathematical model based on classical memory architectures which assumes ensemble averages are statistical approximations to stimuli, and that those approximations are constructed within short-term memory. We report results of three new experiments that test those predictions. The results support the memory model and contradict the view that representations of ensemble averages are computed early in perceptual processing via parallel processing or neural pooling, suggesting instead that they are computed via control processes acting on item representations held in visual short-term memory. We conclude that the flight toward new mechanisms that has occurred within the ensemble representation literature is ill-advised, and suggest that one first carefully consider what well-established memory models can accomplish in the ensemble “perception” domain.
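As a generic illustration of the kind of account the authors favor, rather than their actual model, the sketch below computes an "ensemble average" as the mean of a small, noisy subsample of items held in short-term memory; the capacity limit, noise level, and display values are hypothetical.

```python
import numpy as np

# Generic illustration (not the authors' model): an "ensemble average" computed
# as the mean of a capacity-limited, noisy subsample of items held in short-term
# memory, rather than a parallel perceptual computation over the whole display.

rng = np.random.default_rng(1)

def stm_based_average(item_values, capacity=4, memory_noise_sd=5.0):
    """item_values: feature values (e.g., sizes) of all displayed items."""
    items = np.asarray(item_values, dtype=float)
    k = min(capacity, items.size)
    sampled = rng.choice(items, size=k, replace=False)          # capacity limit
    remembered = sampled + rng.normal(0.0, memory_noise_sd, k)  # encoding noise
    return remembered.mean()

display = [20, 25, 30, 35, 40, 45, 50, 55]   # hypothetical item sizes
print(stm_based_average(display))            # noisy approximation of the true mean (37.5)
```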
{"title":"The perceptual average in ensemble representation: Neither perceptual nor an average","authors":"Jacob Zepp, Chad Dubé","doi":"10.3758/s13414-025-03187-3","DOIUrl":"10.3758/s13414-025-03187-3","url":null,"abstract":"<div><p>This report targets the claim that gist representations of visual stimuli, called “ensemble averages”, are perceptual representations of statistics pertaining to stimuli. We report predictions of a mathematical model based on classical memory architectures which assumes ensemble averages are statistical approximations to stimuli, and that those approximations are constructed within short-term memory. We report results of three new experiments that test those predictions. The results support the memory model and contradict the view that representations of ensemble averages are computed early in perceptual processing via parallel processing or neural pooling, suggesting instead that they are computed via control processes acting on item representations held in visual short-term memory. We conclude that the flight toward new mechanisms that has occurred within the ensemble representation literature is ill-advised, and suggest that one first carefully consider what well-established memory models can accomplish in the ensemble “perception” domain.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03187-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the flow of action: Anticipated action sequences in response-response binding
Pub Date: 2025-11-26 | DOI: 10.3758/s13414-025-03154-y
Maria Nemeth, Christian Frings, Birte Moeller
In theories of human action, it is assumed that individual actions are nested within higher-order action plans. This hierarchical structure oftentimes allows for the anticipatory planning of multiple future actions even before the current action is fully executed or situational cues demand this specific action. However, much of the existing research on basic action control processes has focused on isolated actions, that is, sequentially planned and executed actions, leaving it unclear whether these findings generalize to more naturalistic, preplanned action contexts. In particular, although the binding of individual responses into common representations and their retrieval from memory have been proposed as key mechanisms supporting action control of action sequences, it remains poorly understood how these processes operate when multiple responses can be planned in advance as part of an action sequence. In this study, we compared action contexts in which individual responses were planned and executed sequentially to contexts in which response sequences allowed for the preplanning of individual responses. Crucially, response-response binding effects of comparable strength were observed in both action contexts. Thus, binding and retrieval of responses seem not only to influence current performance during sequential action planning and execution but also to influence ongoing behavior within action sequences that could be preplanned.
{"title":"In the flow of action: Anticipated action sequences in response-response binding","authors":"Maria Nemeth, Christian Frings, Birte Moeller","doi":"10.3758/s13414-025-03154-y","DOIUrl":"10.3758/s13414-025-03154-y","url":null,"abstract":"<div><p>In theories of human action, it is assumed that individual actions are nested within higher-order action plans. This hierarchical structure oftentimes allows for the anticipatory planning of multiple future actions even before the current action is fully executed or situational cues demand this specific action. However, much of the existing research on basic action control processes has focused on isolated actions, that is, sequentially planned and executed actions, leaving it unclear whether these findings generalize to more naturalistic, preplanned action contexts. In particular, although the binding of individual responses into common representations and their retrieval from memory have been proposed as key mechanisms supporting action control of action sequences, it remains poorly understood how these processes operate when multiple responses can be planned in advance as part of an action sequence. In this study, we compared action contexts in which individual responses were planned and executed sequentially to contexts in which response sequences allowed for the preplanning of individual responses. Crucially, response-response binding effects of comparable strength were observed in both action contexts. Thus, binding and retrieval of responses seem not only to influence current performance during sequential action planning and execution but also to influence ongoing behavior within action sequences that could be preplanned.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03154-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speech rate and associations in predictive sentence processing
Pub Date: 2025-11-25 | DOI: 10.3758/s13414-025-03160-0
Anuenue Kukona
Do comprehenders predict (i.e., what will come next) when hearing rapid speech? Two mouse cursor tracking experiments investigated association-based predictions, which may be suited to speeded processing. Participants heard predictive sentences (e.g., “What the pilot will fly, which is shown here, is the . . .”) while viewing visual arrays with predictable objects (e.g., helicopter) and unpredictable but verb-associated objects (e.g., kite) or unrelated objects (e.g., book). Experiment 1 compared predictive and nonpredictive (e.g., “What everyone will discuss, which is shown here, is the . . .”) sentences at a normal speech rate, and Experiment 2 compared predictive sentences at a normal and fast speech rate (e.g., averaging ~4 and 9 syllables per second). In addition to making mouse cursor movements to predictable objects before hearing predictable words (e.g., “helicopter”), participants’ mouse cursor movements at both speech rates were attracted to unpredictable but verb-associated objects, providing evidence of association-based prediction. These results suggest that when hearing rapid speech, associations support but do not dominate comprehenders’ predictions.
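Mouse-cursor attraction toward a competitor object is often summarized with trajectory measures such as the maximum perpendicular deviation from the straight start-to-target path; the sketch below shows that generic computation on made-up coordinates and is not necessarily the measure used in these experiments.

```python
import numpy as np

# Generic trajectory measure (not necessarily the one used here): the maximum
# perpendicular deviation of the cursor path from the straight line connecting
# its start and end points, a common index of attraction toward a competitor.

def max_deviation(path):
    """path: (n, 2) array of cursor samples from movement start to target."""
    path = np.asarray(path, dtype=float)
    start, end = path[0], path[-1]
    line = end - start
    rel = path - start
    # signed perpendicular distance of each sample from the start-end line
    deviations = (line[0] * rel[:, 1] - line[1] * rel[:, 0]) / np.linalg.norm(line)
    return deviations[np.argmax(np.abs(deviations))]

toy_path = [(0, 0), (10, 30), (40, 80), (100, 100)]  # made-up coordinates
print(round(float(max_deviation(toy_path)), 2))      # larger magnitude = more attraction
```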
{"title":"Speech rate and associations in predictive sentence processing","authors":"Anuenue Kukona","doi":"10.3758/s13414-025-03160-0","DOIUrl":"10.3758/s13414-025-03160-0","url":null,"abstract":"<div><p>Do comprehenders predict (i.e., what will come next) when hearing rapid speech? Two mouse cursor tracking experiments investigated association-based predictions, which may be suited to speeded processing. Participants heard predictive sentences (e.g., “What the pilot will fly, which is shown here, is the . . .”) while viewing visual arrays with predictable objects (e.g., helicopter) and unpredictable but verb-associated objects (e.g., kite) or unrelated objects (e.g., book). Experiment 1 compared predictive and nonpredictive (e.g., “What everyone will discuss, which is shown here, is the . . .”) sentences at a normal speech rate, and Experiment 2 compared predictive sentences at a normal and fast speech rate (e.g., averaging ~4 and 9 syllables per second). In addition to making mouse cursor movements to predictable objects before hearing predictable words (e.g., “helicopter”), participants’ mouse cursor movements at both speech rates were attracted to unpredictable but verb-associated objects, providing evidence of association-based prediction. These results suggest that when hearing rapid speech, associations support but do not dominate comprehenders’ predictions.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03160-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145607404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ability to divide spatial attention across non-contiguous locations develops in middle childhood
Pub Date: 2025-11-25 | DOI: 10.3758/s13414-025-03182-8
Tashauna L. Blankenship, Roger Strong, Melissa M. Kibbe
Adults can effectively divide visual attention across non-contiguous spatial locations. However, it is currently unknown whether the ability to deploy multifocal attention is a hallmark of human endogenous attention, or whether this ability develops with maturation of the neural areas that support deployment of attention across multiple locations. Across two experiments we investigated children’s and adults’ ability to split attention in an adaptation of Awh and Pashler’s (Journal of Experimental Psychology, 26[2], 834–846, 2000) task. Participants were cued to attend to two non-contiguous spatial locations in an array of six locations. In Valid trials, participants were probed to report the identity of the digit that appeared briefly in one or both of the cued locations. In Invalid trials, participants were probed to report the identity of the digit that appeared in an uncued location either between the two cued locations or outside the two cued locations. We reasoned that if participants are able to divide their attention between the two non-contiguous cued locations, they should perform better on Valid compared with Invalid trials, and should perform equally on both types of Invalid trials. We found evidence for multifocal spatial attention in 8-year-olds and adults. However, 6-year-olds appeared to use a strategy consistent with a single focus of attention. Overall, these findings suggest that the ability to divide attention between noncontiguous locations develops during middle childhood.
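The inferential logic here can be stated as two predicted accuracy patterns: a genuinely split focus predicts Valid > Invalid with the two Invalid types roughly equal, whereas a single focus spanning both cued locations predicts near-Valid accuracy at the intermediate (between) location. The sketch below merely encodes that comparison; the accuracy values are hypothetical, not the reported data.

```python
# Sketch of the inferential logic only; the accuracy values are hypothetical,
# not the reported data.

def classify_pattern(valid, invalid_between, invalid_outside, tol=0.05):
    if valid > invalid_between + tol and abs(invalid_between - invalid_outside) <= tol:
        return "consistent with split (multifocal) attention"
    if abs(valid - invalid_between) <= tol and invalid_between > invalid_outside + tol:
        return "consistent with a single contiguous focus"
    return "ambiguous"

print(classify_pattern(0.80, 0.55, 0.53))  # pattern like that described for adults and 8-year-olds
print(classify_pattern(0.78, 0.75, 0.55))  # pattern like that described for 6-year-olds
```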
{"title":"The ability to divide spatial attention across non-contiguous locations develops in middle childhood","authors":"Tashauna L. Blankenship, Roger Strong, Melissa M. Kibbe","doi":"10.3758/s13414-025-03182-8","DOIUrl":"10.3758/s13414-025-03182-8","url":null,"abstract":"<div><p>Adults can effectively divide visual attention across non-contiguous spatial locations. However, it is currently unknown whether the ability to deploy multifocal attention is a hallmark of human endogenous attention, or whether this ability develops with maturation of the neural areas that support deployment of attention across multiple locations. Across two experiments we investigated children’s and adults’ ability to split attention in an adaptation of Awh and Pashler’s (<i>Journal of Experimental Psychology, 26</i>[2], 834–846, 2000) task. Participants were cued to attend to two non-contiguous spatial locations in an array of six locations. In Valid trials, participants were probed to report the identity of the digit that appeared briefly in one or both of the cued locations. In Invalid trials, participants were probed to report the identity of the digit that appeared in an uncued location either <i>between</i> the two cued locations or <i>outside</i> the two cued locations. We reasoned that if participants are able to divide their attention between the two non-contiguous cued locations, they should perform better on Valid compared with Invalid trials, and should perform equally on both types of Invalid trials. We found evidence for multifocal spatial attention in 8-year-olds and adults. However, 6-year-olds appeared to use a strategy consistent with a single focus of attention. Overall, these findings suggest that the ability to divide attention between noncontiguous locations develops during middle childhood.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03182-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145607516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attentional processing in a modified multiple object-tracking paradigm
Pub Date: 2025-11-24 | DOI: 10.3758/s13414-025-03195-3
Mengzhu Fu, Emmanuella Asabere, Michael D. Dodd
Previous research examining dynamic visual search showed that pop-out effects can be observed for color targets, though it is unclear whether these effects are attributable to the same pre-attentive mechanisms driving pop-out in static displays (Fu et al., Attention, Perception, & Psychophysics, 82, 3329–3339, 2020). Other research examining multiple-object tracking (MOT) demonstrated that people can track three to five objects simultaneously, with some uncertainty about the flexibility of attentional allocation during tracking (Meyerhoff et al., Attention, Perception, & Psychophysics, 79, 1255–1274, 2017). In three experiments, the present study combined a dynamic pop-out display with the MOT task. Participants saw moving objects with colors changing continuously and responded when a uniquely shaded target popped out among identical items. Experiment 1 examined the mechanisms driving dynamic visual search efficiency and dual-task interference on both tracking and searching performance. Experiment 2 explored the effect of processing orientations (i.e., global/local). Experiment 3 incorporated an abrupt color change to examine performance. Results showed that search for a unique target in dynamic contexts required attention, with an interference effect observed for both searching and tracking in the dual task. Making the color change more abrupt improved performance, though search remained less efficient than static pop-out. Moreover, there is some evidence suggesting that adopting a global processing orientation may be more advantageous for task performance than a local processing orientation. Taken together, the current findings suggest that search for a unique target in dynamic contexts requires focal attention and that tracking and searching appear to involve similar processing mechanisms that likely compete to draw from a shared pool of resources.
{"title":"Attentional processing in a modified multiple object-tracking paradigm","authors":"Mengzhu Fu, Emmanuella Asabere, Michael D. Dodd","doi":"10.3758/s13414-025-03195-3","DOIUrl":"10.3758/s13414-025-03195-3","url":null,"abstract":"<div><p>Previous research examining dynamic visual search showed that pop-out effects can be observed for color targets, though it is unclear whether these effects are attributable to the same pre-attentive mechanisms driving pop-out in static displays (Fu et al., <i>Attention, Perception, & Psychophysics</i>, <i>82</i>, 3329–3339, 2020). Other research examining multiple-object tracking (MOT) demonstrated that people can track three to five objects simultaneously, with some uncertainty about the flexibility of attentional allocation during tracking (Meyerhoff et al., <i>Attention, Perception, & Psychophysics</i>, <i>79</i>, 1255–1274, 2017). In three experiments, the present study combined a dynamic pop-out display with the MOT task. Participants saw moving objects with colors changing continuously and responded when a uniquely shaded target popped out among identical items. Experiment 1 examined the mechanisms driving dynamic visual search efficiency and dual task interference on both tracking and searching performance. Experiment 2 explored the effect of processing orientations (i.e., global/local). Experiment 3 incorporated an abrupt color change to examine performance. Results showed that search for a unique target in dynamic contexts required attention, with an interference effect observed for both searching and tracking in the dual task. Making the color change more abrupt improved performance but remained less efficient than static pop-out. Moreover, there is some evidence suggesting that adopting a global processing orientation may be more advantageous for task performance than a local processing orientation. Taken together, the current findings suggest that search for a unique target in dynamic contexts requires focal attention and that tracking and searching appear to involve similar processing mechanism that likely compete to draw from a shared pool of resources.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145598438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relative eye height modulates perceived velocity of targets approaching along a horizontal plane
Pub Date: 2025-11-21 | DOI: 10.3758/s13414-025-03155-x
Yusei Yoshimura, Tomohiro Kizuka, Seiji Ono
Relative eye height, defined as the vertical distance between the observer’s eye level and a target on the horizontal plane, provides a geometric cue to depth through the angle between eye level and the line of sight. In three-dimensional space, objects often move in depth at varying heights relative to the observer’s eye level, requiring integration of multiple cues for accurate velocity judgments. However, the contribution of relative eye height to velocity perception remains unclear. This study examined how relative eye height influences perceived velocity for an approaching target moving along a horizontal plane. Participants performed a two-alternative forced-choice task in which they judged whether the current target appeared faster or slower than targets presented in previous trials, under two eye-height conditions (5 cm and 10 cm from the target level). Greater relative eye height led to faster perceived velocities and improved discrimination performance. Model comparisons further indicated that elevation angular velocity during the early phase of motion (100–400 ms post-onset), a direct cue derived from changes in elevation angle, was the key predictor of velocity judgments, suggesting that the effect of relative eye height is mediated through this cue. These findings highlight the critical role of spatial information, particularly relative eye height, in shaping motion-in-depth perception.
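The geometric cue can be made explicit: for a target on the horizontal plane at distance d seen from relative eye height h, the elevation angle below eye level is theta = arctan(h/d), and for an approach speed v its rate of change is d(theta)/dt = h*v / (d^2 + h^2), which grows with h. The sketch below evaluates this for the two eye heights used (5 cm and 10 cm) at an arbitrary, hypothetical distance and speed.

```python
import math

# Geometry behind the relative-eye-height cue: for a target on the horizontal
# plane at distance d, viewed from relative eye height h, the elevation angle
# below eye level is theta = atan(h / d). For an approaching target moving at
# speed v, d(theta)/dt = h * v / (d**2 + h**2), which increases with h.
# The distance and speed below are illustrative values, not the study's.

def elevation_angle(h, d):
    return math.atan(h / d)

def elevation_angular_velocity(h, d, v):
    return h * v / (d**2 + h**2)   # radians per second

d, v = 1.0, 0.5                    # hypothetical: 1 m away, approaching at 0.5 m/s
for h in (0.05, 0.10):             # the two relative eye heights (5 cm, 10 cm)
    print(f"h = {h*100:.0f} cm: "
          f"angle = {math.degrees(elevation_angle(h, d)):.2f} deg, "
          f"angular velocity = {math.degrees(elevation_angular_velocity(h, d, v)):.2f} deg/s")
```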
{"title":"Relative eye height modulates perceived velocity of targets approaching along a horizontal plane","authors":"Yusei Yoshimura, Tomohiro Kizuka, Seiji Ono","doi":"10.3758/s13414-025-03155-x","DOIUrl":"10.3758/s13414-025-03155-x","url":null,"abstract":"<div><p>Relative eye height, defined as the vertical distance between the observer’s eye level and a target on the horizontal plane, provides a geometric cue to depth through the angle between eye level and the line of sight. In three-dimensional space, objects often move in depth at varying heights relative to the observer’s eye level, requiring integration of multiple cues for accurate velocity judgments. However, the contribution of relative eye height to velocity perception remains unclear. This study examined how relative eye height influences perceived velocity for an approaching target moving along a horizontal plane. Participants performed a two-alternative forced-choice task in which they judged whether the current target appeared faster or slower than targets presented in previous trials, under two eye-height conditions (5 cm and 10 cm from the target level). Greater relative eye height led to faster perceived velocities and improved discrimination performance. Model comparisons further indicated that elevation angular velocity during the early phase of motion (100–400 ms post-onset), a direct cue derived from changes in elevation angle, was the key predictor of velocity judgments, suggesting that the effect of relative eye height is mediated through this cue. These findings highlight the critical role of spatial information, particularly relative eye height, in shaping motion-in-depth perception.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Target activation and distractor inhibition on attentional bias in priming of popout search
Pub Date: 2025-11-21 | DOI: 10.3758/s13414-025-03197-1
Bryan R. Burnham
Selection history effects occur when visual search is facilitated after previous target features are repeated during subsequent searches relative to when target features switch with non-target distractor features. Selection history effects on visual search are likely due to a combination of feature activation (increased salience), bias in attentional decisions over target selection, and facilitated post-selection retrieval, and likely reflect both target activation and distractor suppression. The present study used a probe detection task within a standard priming of popout (PoP) visual search task to examine how target activation and distractor suppression influence attentional decisions to select a previous target’s features. PoP was observed in response times and, importantly, in recall of probes appearing on both color singleton targets and non-singleton distractors. Relative to baseline conditions, more probes were recalled from color singleton targets on color repeat trials, and fewer probes were recalled from targets on color switch trials; and more probes were recalled from the non-targets on switch trials than on baseline trials. The results suggest that target activation and distractor suppression contribute to the attentional decision bias that arises due to selection history.
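One simple way to read the probe-recall pattern described above is as difference scores against the baseline condition, so that positive values indicate an item drew attention more than usual and negative values less. The sketch below computes such scores for hypothetical recall proportions; the numbers are illustrative only, not the study's data.

```python
# Hedged sketch: express probe recall in each condition as a difference from
# baseline for that item type (hypothetical proportions, not the reported data).

def bias_scores(recall, baseline):
    return {f"{cond} / {item}": round(p - baseline[item], 3)
            for (cond, item), p in recall.items()}

baseline = {"target": 0.60, "nontarget": 0.20}
recall = {
    ("color repeat", "target"): 0.70,     # above baseline: repeated target color
    ("color switch", "target"): 0.52,     # below baseline: target now in previous non-target color
    ("color switch", "nontarget"): 0.28,  # above baseline: non-target now in previous target color
}
print(bias_scores(recall, baseline))
```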
{"title":"Target activation and distractor inhibition on attentional bias in priming of popout search","authors":"Bryan R. Burnham","doi":"10.3758/s13414-025-03197-1","DOIUrl":"10.3758/s13414-025-03197-1","url":null,"abstract":"<div><p>Selection history effects occur when visual search is facilitated after previous target features are repeated during subsequent searches relative to when target features switch with non-target distractor features. Selection history on visual search is likely due to a combination of feature activation (increased salience), bias in attentional decisions over target selection, and facilitated post-selection retrieval, and likely reflects both target activation and distractor suppression. The present study used a probe detection task within a standard priming of popout (PoP) visual search task to examine how target activation and distractor suppression influence attentional decisions to select a previous target’s features. PoP was observed in response times and importantly in recall of probes appearing on both color singleton targets and non-singleton distractors. Relative to baseline conditions, more probes were recalled from color singleton targets on color repeat trials, and fewer probes were recalled from targets on color switch trials; and more probes were recalled form the non-targets on switch trials than baseline trials. The results suggest that target activation and distractor suppression contribute to the attentional decision bias that arises due to selection history.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03197-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task-irrelevant semantic grouping influences attentional allocation
Pub Date: 2025-11-20 | DOI: 10.3758/s13414-025-03192-6
E. R. Robbins, J. C. Nah, D. Dubbelde, S. Shomstein
High-level features of objects, such as meaning, bias attention even when task-irrelevant. We hypothesize that task-irrelevant semantic features bias attention via a grouping mechanism, organizing visual input by semantic relatedness in a manner similar to low-level grouping. Specifically, when observers are presented with a task-irrelevant visual array of items, visual search should be more efficient within a group of semantically related items. Participants were shown four stimuli that were either color squares (low-level grouping), grayscale real-world objects (high-level grouping), or color real-world objects (low- and high-level grouping). On each trial, two or three of the four items belonged to one category (e.g., clothing, blue squares). A target appeared randomly on one of the items, independent of relatedness (task-irrelevant). For all manipulations, search was equally efficient in groups of equal size. For unequal-size groups, in color squares and grayscale objects, large groups yielded less efficient search – consistent with low-level perceptual grouping. In color objects, however, search was more efficient in a larger semantically related group – consistent with semantic bias. Results show that single features group displays, but complex stimuli bias attention beyond simple grouping.
{"title":"Task-irrelevant semantic grouping influences attentional allocation","authors":"E. R. Robbins, J. C. Nah, D. Dubbelde, S. Shomstein","doi":"10.3758/s13414-025-03192-6","DOIUrl":"10.3758/s13414-025-03192-6","url":null,"abstract":"<div><p>High-level features of objects, such as meaning, bias attention even when task-irrelevant. We hypothesize that task-irrelevant semantic features bias attention via a grouping mechanism, organizing visual input by semantic relatedness in a manner similar to low-level grouping. Specifically, when presented with a task-irrelevant visual array of items, visual search is more efficient within a group of semantically related items. Participants were shown four stimuli of either color squares (low-level grouping), grayscale real-world objects (high-level grouping), or color real-world objects (low- and high-level grouping). On each trial two or three of the four items belonged to one category (e.g., clothing, blue squares). A target appeared randomly on one of the items, independent of relatedness (task-irrelevant). For all manipulations, search was equally efficient in groups of equal size. For unequal size groups, in color squares and grayscale objects, large groups yielded less efficient search – consistent with low-level perceptual grouping. In color objects, however, search was more efficient in a larger semantically related group – consistent with semantic bias. Results show that single features group displays, but complex stimuli bias attention beyond simple grouping.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"88 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}