Pub Date: 2025-12-04 | DOI: 10.3758/s13414-025-03174-8
Eric Ruthruff, Dominick A. Tolomeo, Sunil Jain, Kristina-Maria Reitan, Mei-Ching Lien
Belopolsky et al. (2007) provided evidence that capture occurs only when objects fall within the attentional window. This attentional window hypothesis was subsequently used to explain how salient stimuli can be powerful attractors of attention yet often have little or no observable effect. In the present study, we attempted to replicate their findings. Participants made a go/no-go decision based on the shape of the overall search array (diffuse attention) or based on the central fixation point (focused attention). Whereas Belopolsky et al. found larger capture effects from a color singleton distractor in the diffuse condition than in the focused condition (where the color singleton is assumed to fall outside the attentional window), we found no such effect (Experiment 1). When we changed the task from a feature search task in Experiment 1 to a singleton search task in Experiment 2, capture effects increased overall but were once again similar for the diffuse and focused conditions. This pattern persisted even when we closely replicated Belopolsky et al.’s original design (Experiment 3). Our findings call into question the attentional window account and support an alternative account of why capture sometimes occurs: in singleton search mode, color singletons capture attention because participants are looking for singletons.
Title: Does the attentional window shed light on the attentional capture debate?
Journal: Attention, Perception & Psychophysics, 88(1)
Pub Date: 2025-12-04 | DOI: 10.3758/s13414-025-03166-8
Hao-Lun Fu, Yu-Chin Chiu, Kanthika Latthirun, Cheng-Ta Yang
Navigating the world requires accurate categorization of the objects around us, which often involves processing multiple sources of information. The predictiveness of a source plays an important role in accurate categorization. This study investigates how the predictiveness of features modulates the processing strategies for two features that are generally considered more integral than separable: color and luminance. Participants categorized a set of visual stimuli, created by varying levels of color and luminance, into two categories defined by logical rules. The stimulus–category mapping was 100% in Experiment 1 but was reduced to 95% in Experiment 2. In both experiments, the predictiveness of the two features was equal. Lastly, in Experiment 3, we introduced unequal predictiveness such that color was more predictive for some participants, while luminance was more predictive for others. These manipulations were designed to test whether, as predicted by the strong version of the relative saliency hypothesis, even integral features such as color and luminance could be processed serially if one were made more predictive of the category. Across the three experiments, we employed both systems factorial technology (SFT) and computational modeling to infer processing strategies in nonparametric and parametric manners, respectively. Although some variability existed at the individual-subject level, both nonparametric and parametric modeling revealed robust evidence for coactive processing in the aggregated group data, regardless of the varied stimulus–category mapping and feature predictiveness. These findings suggest that the processing of color and luminance within an object involves obligatory coactive processing, thereby challenging the strategic-adjustment relative saliency hypothesis.
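For readers unfamiliar with SFT, a brief background sketch (this is the standard definition from the SFT literature, not taken from the abstract): in a double-factorial design, each feature is made faster or slower to process (H = high salience, L = low salience), and the survivor interaction contrast computed over the four cells' RT survivor functions S(t) = P(RT > t) diagnoses the processing architecture.

```latex
% Survivor interaction contrast over the four double-factorial cells
% (first index = feature 1 salience, second index = feature 2 salience):
\[
  \mathrm{SIC}(t) = S_{LL}(t) - S_{LH}(t) - S_{HL}(t) + S_{HH}(t)
\]
% Coactive processing predicts an S-shaped SIC (negative early, positive late)
% together with an overadditive mean interaction contrast:
\[
  \mathrm{MIC} = \overline{RT}_{LL} - \overline{RT}_{LH} - \overline{RT}_{HL} + \overline{RT}_{HH} > 0
\]
```

Serial-exhaustive processing predicts the same negative-then-positive SIC shape but with MIC = 0, which is how the nonparametric analysis separates the two architectures.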
Title: Obligatory coactive processing of color and luminance challenges strategic modulation by predictiveness
Journal: Attention, Perception & Psychophysics, 88(1)
Pub Date: 2025-12-04 | DOI: 10.3758/s13414-025-03200-9
Deanna L. Strayer, Nash Unsworth
Attention lapses occur when focus shifts away from the task at hand towards internal or external distractions, and they can lead to failures in completing intended actions. Goal-setting theory proposes that setting specific, difficult goals leads to better task performance than vague goals do. The present study examined whether goal setting increased attentional effort and reduced attention lapses during a four-choice reaction time task. The control condition received the vague goal: “respond as quickly as possible while keeping your accuracy above 95%.” The goal condition received specific goals that became progressively harder over time (450 ms, 400 ms, and 350 ms) with the same accuracy goal. Pupillary responses were recorded throughout, and subjects answered randomly presented thought probes to determine whether they were experiencing task-unrelated thoughts (TUTs). The goal condition displayed larger preparatory and phasic pupil responses, suggesting that more attentional effort was exerted during the task. In addition, the goal condition displayed fewer attention lapses, both behaviorally and in TUTs. Further, several typical time-on-task effects were mitigated or eliminated (shown in behavioral, subjective, and physiological measures). The results reinforce previous findings that goal-setting techniques can reduce attention lapses and indicate that attentional effort is a mechanism behind the efficacy of goal setting.
Title: Investigating the role of attentional effort in the efficacy of goal-setting in reducing attention lapses
Journal: Attention, Perception & Psychophysics, 88(1)
Open-access PDF: https://link.springer.com/content/pdf/10.3758/s13414-025-03200-9.pdf
Pub Date: 2025-12-04 | DOI: 10.3758/s13414-025-03194-4
Marzie Samimifar, Federica Bulgarelli
Processing speech that is non-canonical (i.e., child-produced speech) and/or presented in background noise can pose challenges for listeners. We investigated how listening to child-produced speech affects young adults’ word recognition under varying noise conditions. Participants (n = 121) completed a two-picture eye-tracking task in one of three conditions: no background noise, pink background noise, and real-world background noise from LENA recordings. Participants heard a child or adult (Speaker-Age) direct attention to a generic (e.g., keys) or child-specific (e.g., potty; Item-Type) item. We examined the effect of Speaker-Age and Item-Type on participants’ looking time. In no background noise, increases in target looking were high, with greater increases when adults produced generic items. Both pink noise and real-world noise increased task difficulty, but patterns of results varied as a function of speaker gender. For female speech, background noise resulted in an effect of Speaker-Age, with participants increasing their looking time more for adult relative to child speech. The type of background noise did not influence this pattern. For male speech, there was an effect of Speaker-Age in the opposite direction, with participants increasing their looking time more for child relative to adult speech. For male speech, real-world background noise resulted in higher increases in target looking for child-specific items. Together, results suggest that child-produced speech may be more difficult to process than female-adult produced speech in noise, and that listeners can use background noise to predict who will speak and what they might speak about under more challenging conditions, such as processing male speech.
Title: Decoding child speech in silence and noise: The type of background noise shapes adults’ processing
Journal: Attention, Perception & Psychophysics, 88(1)
Open-access PDF: https://link.springer.com/content/pdf/10.3758/s13414-025-03194-4.pdf
Pub Date: 2025-12-04 | DOI: 10.3758/s13414-025-03175-7
Meike C. Kriegeskorte, Bettina Rolke, Elisabeth Hein
A crucial ability of our cognition is the perception of objects and their motions. We can perceive objects as moving by connecting them across space and time. This is possible even when the objects are not present continuously, as in apparent motion displays like the Ternus display, which consists of two sets of stimuli, shifted to the left or right, separated by a variable inter-stimulus interval (ISI). This is an ambiguous display, which can be perceived either as both stimuli moving together in the same direction (group motion) or as one stimulus moving across the stationary center stimulus (element motion), depending on which stimuli are connected over time. Which percept is seen can be influenced by the ISI and the stimulus features. Previous experiments have shown that the Ternus effect also exists in the auditory modality and that the auditory Ternus effect likewise depends on the ISI. This is a first indication that correspondence might work similarly in the visual and auditory modalities. To test this idea further, we investigated whether the auditory Ternus effect depends on the stimulus features by creating a frequency-based bias using a high and a low sine-wave tone as Ternus stimuli. This bias was compatible either with the element-motion or with the group-motion percept. Our results showed an influence of this feature bias in addition to an ISI effect, suggesting that the visual and auditory modalities might both use the same mechanism to connect objects across space and time.
Title: Object correspondence in audition echoes vision: Not only spatiotemporal but also feature information influences auditory apparent motion
Journal: Attention, Perception & Psychophysics, 88(1)
Open-access PDF: https://link.springer.com/content/pdf/10.3758/s13414-025-03175-7.pdf
Pub Date: 2025-12-04 | DOI: 10.3758/s13414-025-03173-9
Kim M. Curby, Sarah Lau, Chloe Pack
One account of the characteristic holistic processing of faces and objects of expertise posits that it arises from a learned attention to the whole, rendering it difficult to attend only to parts of stimuli. We tested whether task-context-induced attentional biases for the top or bottom part of a stimulus alter holistic processing of faces. We induced attentional biases by manipulating the probability (75% or 25%) that the top or bottom part would be task-relevant in a modified composite part-judgement task. Manipulating the proportion of trials in which the top/bottom region was task-relevant (i.e., whether the top/bottom was cued) induced the expected attention bias, with increased sensitivity for the part more likely to be cued. Despite this, there was limited evidence of an impact on holistic face processing, with the probabilistic cueing manipulation failing to impact the congruency effect. In a second experiment, we investigated whether this finding extends to stimulus-driven holistic processing of line patterns rich in Gestalt cues. Here, the only evidence of an impact on holistic processing was the attenuation of a greater congruency effect for bottom, over top, judgements in the bottom-bias condition. However, this was primarily the result of a reduction in a general bias to process the top region, present for face and non-face stimuli, rather than a direct impact on holistic processing. Thus, holistic processing for both stimulus types was relatively robust to the influence of task context-based attentional biases. However, there was some evidence of greater flexibility in stimulus-driven, compared to more experience-driven, processing more generally.
Title: Holistic processing is robust in the face of task-context-induced spatial attention biases
Journal: Attention, Perception & Psychophysics, 88(1)
Pub Date: 2025-12-02 | DOI: 10.3758/s13414-025-03157-9
Nils Kloeckner, Ronja Mueller, Marie Buerling, Claus-Christian Carbon, Tilo Strobach
The process of adapting facial representations plays a critical role in face perception and memory, representing an interplay of bottom-up and top-down mechanisms. This process allows individuals to recognize faces despite dynamic changes, for example, aging. However, a full understanding of the adaptation characteristics of non-configural facial information is still lacking in the face-processing literature. The present study investigates face aftereffects in response to facial contrast information, extending the research beyond recent studies on adaptation regarding brightness and color saturation information to a new non-configural facial information type. The research involved four experiments using celebrity face images manipulated for facial contrast, with intervals ranging from 300 ms (Experiment 1) to 5 min (Experiment 2) between adaptation and test phases. Experiment 3 used inverted adaptation faces to investigate whether adaptation effects transfer to upright test faces. The results demonstrate adaptation effects for facial contrast that are robust over time and do not transfer from inverted to upright faces. In addition, these effect sizes were compared to those of brightness and saturation information (Experiment 4), revealing no significant differences in magnitude. In general, the present findings suggest that non-configural facial contrast information is an integral part of face representations, representing an interplay of bottom-up and top-down mechanisms in face processing. All data are available on the Open Science Framework.
Title: Face adaptation: Investigating non-configural contrast alterations
Journal: Attention, Perception & Psychophysics, 88(1)
Open-access PDF: https://link.springer.com/content/pdf/10.3758/s13414-025-03157-9.pdf
Pub Date: 2025-12-02 | DOI: 10.3758/s13414-025-03202-7
Belgüzar Nilay Türkan, Lars-Michael Schöpper, Lari Vainio, Christian Frings
Humans prepare motor actions when perceiving objects that afford specific behaviors, highlighting the tight link between perception and action. For example, seeing a graspable object like a mug can trigger hand movements aligned to its handle, a phenomenon known as the object affordance effect. Vainio et al. (Quarterly Journal of Experimental Psychology, 64, 1094–1110, 2011) demonstrated that this can produce a negative compatibility effect (NCE). This occurs when a to-be-ignored prime object that elicits an affordance (e.g., a mug) precedes a target requiring a spatially compatible response. Given that task demands shape response execution (e.g., Schöpper & Frings, Attention, Perception & Psychophysics, 86, 171–185, 2024), we hypothesized that the effect of affordance would vary accordingly. In Experiment 1, participants performed three tasks: arrow direction discrimination, shape discrimination, and circle localization. In all tasks, the time interval between the affordance object (a mug) and the onset of the target, as well as the compatibility between the mug and the response, varied. The arrow task replicated the NCE: responses were slower in compatible trials at short intervals. No compatibility effects were observed in the shape task. Notably, the localization task revealed a positive compatibility effect (PCE). The variation in compatibility effects suggests task-dependent affordances. Experiment 2 manipulated the target position relative to fixation to investigate the PCE in the localization task and to explore the differences in the compatibility effect. Although the PCE was not replicated, the NCE now also appeared for location tasks. Our results suggest that task constraints shape the compatibility effect and that distractor-induced affordances engage inhibitory mechanisms only when spatial features are relevant.
Title: When affordances are not universal: The negative compatibility effect is modulated by task type and spatial association
Journal: Attention, Perception & Psychophysics, 88(1)
Pub Date: 2025-12-02 | DOI: 10.3758/s13414-025-03163-x
Caterina Foglino, Agnieszka Wykowska
This study investigated whether attentional orienting in response to gaze cues enhances visual working memory (WM) automatically, or whether engagement of top-down processes is necessary for such effects to emerge. Building on an existing gaze-cueing paradigm, we tested whether joint attention supports WM under two conditions. In Experiment 1, participants viewed centrally presented static images of human faces displaying directional gaze cues without any instruction to use gaze direction, and gaze validity was set at 50%, making the cue spatially uninformative about stimulus location. Following the cue, a memory array was presented, followed by a retention interval and a single-probe recall. Participants had to indicate whether the probe had appeared in the initial memory set. No WM advantage was found for validly cued items. In Experiment 2, we increased cue validity to 75% and explicitly informed participants that gaze direction was highly predictive of stimulus location. Under this condition, which presumably elicited higher engagement of top-down processes, valid gaze cues significantly enhanced WM performance relative to invalid cues. Interestingly, as cognitive load increased, the limited capacity of WM slightly constrained the extent to which this strategic orienting could translate into improved memory sensitivity. These results highlight the interplay between cue reliability, attentional control, and WM capacity in determining the efficacy of gaze cues. Our findings clarify the conditions under which joint attention facilitates WM and contribute to a growing literature showing that social attention effects on higher-level cognition are context-sensitive and cognitively mediated.
Title: "Joint attention supports working memory when gaze cues are reliable and task-relevant"
Authors: Caterina Foglino, Agnieszka Wykowska
Attention, Perception & Psychophysics, 88(1). DOI: 10.3758/s13414-025-03163-x
Open access PDF: https://link.springer.com/content/pdf/10.3758/s13414-025-03163-x.pdf
Pub Date: 2025-12-02 | DOI: 10.3758/s13414-025-03178-4
Niya Yan, Richard Jiang, Brian A. Anderson
While previous studies have shown memory enhancement for items with statistical regularities, it remains unclear whether this advantage persists when people are not anticipating the need to recall that information. Here, we used the attribute amnesia paradigm to examine whether statistical regularities influence working memory encoding in the absence of intentional memorization. In Experiment 1, participants reported the location of a colored target that appeared more frequently in one color. On a surprise trial probing target color, participants who saw the target in the frequent color were significantly more likely to answer correctly than those who saw it in a less frequent color. More importantly, regardless of which color was actually shown, participants across both groups tended to choose the frequent color as the target color, suggesting that statistical regularities drove a response bias rather than enhanced encoding. Experiment 2 inserted a separate visual search task with equalized color probabilities and found an attentional bias toward the frequent color, confirming its attentional prioritization. Experiment 3 extended these findings to task-irrelevant, yet physically salient and attention-grabbing distractors. Together, these findings indicate that although statistical regularities do not enhance working memory encoding, participants implicitly extract summary statistics of attended item attributes across trials, which in turn shapes their subsequent decisions.
Title: "Statistical regularities bias memory decisions without enhancing working memory encoding: Insights from attribute amnesia"
Authors: Niya Yan, Richard Jiang, Brian A. Anderson
Attention, Perception & Psychophysics, 88(1). DOI: 10.3758/s13414-025-03178-4