Viewed touch influences tactile detection by altering decision criterion
Pub Date: 2024-11-05 | DOI: 10.3758/s13414-024-02959-7
Anupama Nair, Jared Medina
Our tactile perception is shaped not only by somatosensory input but also by visual information. Prior research on the effect of viewing touch on tactile processing has found higher tactile detection rates when paired with viewed touch versus a control visual stimulus. Therefore, some have proposed a vicarious tactile system that activates somatosensory areas when viewing touch, resulting in enhanced tactile perception. However, we propose an alternative explanation: Viewing touch makes the observer more liberal in their decision to report a tactile stimulus relative to not viewing touch, also resulting in higher tactile detection rates. To disambiguate between the two explanations, we examined the effect of viewed touch on tactile sensitivity and decision criterion using signal detection theory. In three experiments, participants engaged in a tactile detection task while viewing a hand being touched or approached by a finger, a red dot, or no stimulus. We found that viewing touch led to a consistent, liberal criterion shift but inconsistent enhancement in tactile sensitivity relative to not viewing touch. Moreover, observing a finger approach the hand was sufficient to bias the criterion. These findings suggest that viewing touch influences tactile performance by altering tactile decision mechanisms rather than the tactile perceptual signal.
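The abstract does not spell out the computation, but under the standard equal-variance Gaussian signal detection model, sensitivity (d') and criterion (c) follow directly from hit and false-alarm rates. A minimal sketch in Python, with illustrative (made-up) rates rather than the paper's data:

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: sensitivity d' and criterion c.

    A more liberal criterion (reporting 'touch present' more readily)
    corresponds to a more negative c.
    """
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa          # perceptual sensitivity
    c = -0.5 * (z_hit + z_fa)       # decision criterion
    return d_prime, c

# Illustrative (invented) rates: viewing touch raises hits and false alarms
# together, shifting c toward liberal responding with little change in d'.
print(sdt_measures(hit_rate=0.80, fa_rate=0.20))  # no viewed touch
print(sdt_measures(hit_rate=0.90, fa_rate=0.40))  # viewed touch
```

In this toy example the viewed-touch condition makes c more negative (more liberal) while d' barely changes, which mirrors the qualitative pattern described in the abstract.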
{"title":"Viewed touch influences tactile detection by altering decision criterion.","authors":"Anupama Nair, Jared Medina","doi":"10.3758/s13414-024-02959-7","DOIUrl":"https://doi.org/10.3758/s13414-024-02959-7","url":null,"abstract":"<p><p>Our tactile perception is shaped not only by somatosensory input but also by visual information. Prior research on the effect of viewing touch on tactile processing has found higher tactile detection rates when paired with viewed touch versus a control visual stimulus. Therefore, some have proposed a vicarious tactile system that activates somatosensory areas when viewing touch, resulting in enhanced tactile perception. However, we propose an alternative explanation: Viewing touch makes the observer more liberal in their decision to report a tactile stimulus relative to not viewing touch, also resulting in higher tactile detection rates. To disambiguate between the two explanations, we examined the effect of viewed touch on tactile sensitivity and decision criterion using signal detection theory. In three experiments, participants engaged in a tactile detection task while viewing a hand being touched or approached by a finger, a red dot, or no stimulus. We found that viewing touch led to a consistent, liberal criterion shift but inconsistent enhancement in tactile sensitivity relative to not viewing touch. Moreover, observing a finger approach the hand was sufficient to bias the criterion. These findings suggest that viewing touch influences tactile performance by altering tactile decision mechanisms rather than the tactile perceptual signal.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inhibition of return in a 3D scene depends on the direction of depth switch between cue and target
Pub Date: 2024-10-31 | DOI: 10.3758/s13414-024-02969-5
Hanna Haponenko, Noah Britt, Brett Cochrane, Hong-Jin Sun
Inhibition of return (IOR) is a phenomenon that reflects slower target detection when the target appears at a previously cued rather than uncued location. In the present study, we investigated the extent to which IOR occurs in three-dimensional (3D) scenes comprising pictorial depth information. Peripheral cues and targets appeared on top of 3D rectangular boxes placed on the surface of a textured ground plane in virtual space. When the target appeared at a farther location than the cue, the magnitude of the IOR effect in the 3D condition remained similar to the values found in the two-dimensional (2D) control condition (IOR was depth-blind). When the target appeared at a nearer location than the cue, the magnitude of the IOR effect was significantly attenuated (IOR was depth-specific). The present findings address inconsistencies in the literature on the effect of depth on IOR and support the notion that visuospatial attention exhibits a near-space advantage even in 3D scenes consisting entirely of pictorial depth information.
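For concreteness, the IOR effect is conventionally quantified as the difference in mean reaction time between targets at the cued and uncued locations, with larger positive values indicating stronger inhibition. A minimal sketch with hypothetical reaction times (not the study's data):

```python
import numpy as np

def ior_effect(rt_cued, rt_uncued):
    """IOR magnitude: slower responses at the previously cued location (ms)."""
    return np.mean(rt_cued) - np.mean(rt_uncued)

# Hypothetical RTs (ms). In the study, IOR was attenuated only when the
# target appeared nearer than the cue, not farther.
far_target  = ior_effect(rt_cued=[412, 425, 430], rt_uncued=[388, 395, 401])  # ~28 ms, depth-blind
near_target = ior_effect(rt_cued=[398, 402, 405], rt_uncued=[392, 396, 399])  # ~6 ms, attenuated
print(far_target, near_target)
```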
{"title":"Inhibition of return in a 3D scene depends on the direction of depth switch between cue and target.","authors":"Hanna Haponenko, Noah Britt, Brett Cochrane, Hong-Jin Sun","doi":"10.3758/s13414-024-02969-5","DOIUrl":"https://doi.org/10.3758/s13414-024-02969-5","url":null,"abstract":"<p><p>Inhibition of return (IOR) is a phenomenon that reflects slower target detection when the target appears at a previously cued rather than uncued location. In the present study, we investigated the extent to which IOR occurs in three-dimensional (3D) scenes comprising pictorial depth information. Peripheral cues and targets appeared on top of 3D rectangular boxes placed on the surface of a textured ground plane in virtual space. When the target appeared at a farther location than the cue, the magnitude of the IOR effect in the 3D condition remained similar to the values found in the two-dimensional (2D) control condition (IOR was depth-blind). When the target appeared at a nearer location than the cue, the magnitude of the IOR effect was significantly attenuated (IOR was depth-specific). The present findings address inconsistencies in the literature on the effect of depth on IOR and support the notion that visuospatial attention exhibits a near-space advantage even in 3D scenes consisting entirely of pictorial depth information.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142559550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of attention on ensemble perception: Comparison between exogenous attention, endogenous attention, and depth
Pub Date: 2024-10-26 | DOI: 10.3758/s13414-024-02972-w
Binglong Li, Xiaoyu Wang, Ke Zhang, Jiehui Qian
Ensemble perception is an important human ability that allows observers to extract summary information from scenes and environments containing far more information than the visual system can process. Although attention has been shown to bias ensemble perception, two important questions remain unclear: (1) whether direct manipulations of different types of spatial attention produce similar effects on ensembles and (2) whether factors that potentially influence the distribution of attention, such as depth perception, can evoke an indirect effect of attention on ensemble representation. This study aims to address these questions. In Experiments 1 and 2, two types of precues were used to evoke exogenous and endogenous attention, respectively, and ensemble color perception was examined. We found that both exogenous and endogenous attention biased the ensemble representation towards the attended items, with the latter producing the greater effect. In Experiments 3 and 4, we examined whether depth perception could affect color ensembles by indirectly influencing attention allocation in 3D space. The items were separated across two depth planes, and no explicit cues were applied. The results showed that the color ensemble was biased towards closer items when depth information was task relevant. This suggests that ensemble perception is naturally biased in 3D space, probably through the mechanism of attention. Computational modeling consistently showed that attention exerted a direct shift on the ensemble statistic rather than averaging the feature values over cued and noncued items, providing evidence against an averaging process over individual percepts.
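The modeling contrast in the final sentence can be made concrete with two toy models: one that averages item values with extra weight on cued items, and one that shifts the overall ensemble statistic towards the attended items. A sketch under those assumptions (the weight, shift, and feature values are illustrative, not the paper's fitted parameters):

```python
import numpy as np

def weighted_average_model(cued_vals, uncued_vals, w_cued=0.7):
    """Ensemble estimate as an attention-weighted average of item values."""
    return w_cued * np.mean(cued_vals) + (1 - w_cued) * np.mean(uncued_vals)

def shifted_statistic_model(cued_vals, uncued_vals, shift=5.0):
    """Ensemble estimate as the overall mean shifted towards the cued items."""
    all_vals = np.concatenate([cued_vals, uncued_vals])
    direction = np.sign(np.mean(cued_vals) - np.mean(all_vals))
    return np.mean(all_vals) + direction * shift

# Hypothetical hue values (degrees) for cued and uncued items.
cued, uncued = np.array([30., 35., 40.]), np.array([70., 75., 80.])
print(weighted_average_model(cued, uncued), shifted_statistic_model(cued, uncued))
```

Fitting each model's predictions to participants' reported ensemble values is one way such a comparison could be carried out.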
{"title":"Effect of attention on ensemble perception: Comparison between exogenous attention, endogenous attention, and depth.","authors":"Binglong Li, Xiaoyu Wang, Ke Zhang, Jiehui Qian","doi":"10.3758/s13414-024-02972-w","DOIUrl":"https://doi.org/10.3758/s13414-024-02972-w","url":null,"abstract":"<p><p>Ensemble perception is an important ability of human beings that allows one to extract summary information for scenes and environments that contain information that far exceeds the processing limit of the visual system. Although attention has been shown to bias ensemble perception, two important questions remain unclear: (1) whether direct manipulations on different types of spatial attention could produce similar effects on ensembles and (2) whether factors potentially influencing the attention distribution, such as depth perception, could evoke an indirect effect of attention on ensemble representation. This study aims to address these questions. In Experiments 1 and 2, two types of precues were used to evoke exogenous and endogenous attention, respectively, and the ensemble color perceptions were examined. We found that both exogenous and endogenous attention biased ensemble representation towards the attended items, and the latter produced a greater effect. In Experiments 3 and 4, we examined whether depth perception could affect color ensembles by indirectly influencing attention allocation in 3D space. The items were separated in two depth planes, and no explicit cues were applied. The results showed that color ensemble was biased to closer items when depth information was task relevant. This suggests that ensemble perception is naturally biased in 3D space, probably through the mechanism of attention. Computational modeling consistently showed that attention exerted a direct shift on the ensemble statistics rather than averaging the feature values over the cued and noncued items, providing evidence against an averaging process of individual perception.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142513424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crossmodal correspondence of elevation/pitch and size/pitch is driven by real-world features
Pub Date: 2024-10-26 | DOI: 10.3758/s13414-024-02975-7
John McEwan, Ada Kritikos, Mick Zeljko
Crossmodal correspondences are consistent associations between sensory features from different modalities, with some theories suggesting they may either reflect environmental correlations or stem from innate neural structures. This study investigates this question by examining whether retinotopic or representational features of stimuli induce crossmodal congruency effects. Participants completed an auditory pitch discrimination task paired with visual stimuli varying in their sensory (retinotopic) or representational (scene integrated) nature, for both the elevation/pitch and size/pitch correspondences. Results show that only representational visual stimuli produced crossmodal congruency effects on pitch discrimination. These results support an environmental statistics hypothesis, suggesting crossmodal correspondences rely on real-world features rather than on sensory representations.
{"title":"Crossmodal correspondence of elevation/pitch and size/pitch is driven by real-world features.","authors":"John McEwan, Ada Kritikos, Mick Zeljko","doi":"10.3758/s13414-024-02975-7","DOIUrl":"https://doi.org/10.3758/s13414-024-02975-7","url":null,"abstract":"<p><p>Crossmodal correspondences are consistent associations between sensory features from different modalities, with some theories suggesting they may either reflect environmental correlations or stem from innate neural structures. This study investigates this question by examining whether retinotopic or representational features of stimuli induce crossmodal congruency effects. Participants completed an auditory pitch discrimination task paired with visual stimuli varying in their sensory (retinotopic) or representational (scene integrated) nature, for both the elevation/pitch and size/pitch correspondences. Results show that only representational visual stimuli produced crossmodal congruency effects on pitch discrimination. These results support an environmental statistics hypothesis, suggesting crossmodal correspondences rely on real-world features rather than on sensory representations.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142513423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhanced salience of edge frequencies in auditory pattern recognition
Pub Date: 2024-10-26 | DOI: 10.3758/s13414-024-02971-x
Michel Bürgel, Diana Mares, Kai Siedenburg
Within musical scenes or textures, sounds from certain instruments capture attention more prominently than others, hinting at biases in the perception of multisource mixtures. Besides musical factors, these effects might be related to frequency biases in auditory perception. Using an auditory pattern-recognition task, we studied the existence of such frequency biases. Mixtures of pure tone melodies were presented in six frequency bands. Listeners were instructed to assess whether the target melody was part of the mixture or not, with the target melody presented either before or after the mixture. In Experiment 1, the mixture always contained melodies in five out of the six bands. In Experiment 2, the mixture contained three bands that stemmed from the lower or the higher part of the range. As expected, Experiments 1 and 2 both highlighted strong effects of presentation order, with higher accuracies for the target presented before the mixture. Notably, Experiment 1 showed that edge frequencies yielded superior accuracies compared with center frequencies. Experiment 2 corroborated this finding by yielding enhanced accuracies for edge frequencies irrespective of the absolute frequency region. Our results highlight the salience of sound elements located at spectral edges within complex musical scenes. Overall, this implies that neither the high voice superiority effect nor the insensitivity to bass instruments observed by previous research can be explained by absolute frequency biases in auditory perception.
{"title":"Enhanced salience of edge frequencies in auditory pattern recognition.","authors":"Michel Bürgel, Diana Mares, Kai Siedenburg","doi":"10.3758/s13414-024-02971-x","DOIUrl":"https://doi.org/10.3758/s13414-024-02971-x","url":null,"abstract":"<p><p>Within musical scenes or textures, sounds from certain instruments capture attention more prominently than others, hinting at biases in the perception of multisource mixtures. Besides musical factors, these effects might be related to frequency biases in auditory perception. Using an auditory pattern-recognition task, we studied the existence of such frequency biases. Mixtures of pure tone melodies were presented in six frequency bands. Listeners were instructed to assess whether the target melody was part of the mixture or not, with the target melody presented either before or after the mixture. In Experiment 1, the mixture always contained melodies in five out of the six bands. In Experiment 2, the mixture contained three bands that stemmed from the lower or the higher part of the range. As expected, Experiments 1 and 2 both highlighted strong effects of presentation order, with higher accuracies for the target presented before the mixture. Notably, Experiment 1 showed that edge frequencies yielded superior accuracies compared with center frequencies. Experiment 2 corroborated this finding by yielding enhanced accuracies for edge frequencies irrespective of the absolute frequency region. Our results highlight the salience of sound elements located at spectral edges within complex musical scenes. Overall, this implies that neither the high voice superiority effect nor the insensitivity to bass instruments observed by previous research can be explained by absolute frequency biases in auditory perception.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142513425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What do we see behind an occluder? Amodal completion of statistical properties in complex objects
Pub Date: 2024-10-26 | DOI: 10.3758/s13414-024-02948-w
Thomas Cherian, S P Arun
When a spiky object is occluded, we expect its spiky features to continue behind the occluder. Although many real-world objects contain complex features, it is unclear how more complex features are amodally completed and whether this process is automatic. To investigate this issue, we created pairs of displays with identical contour edges up to the point of occlusion, but with occluded portions exchanged. We then asked participants to search for oddball targets among distractors and asked whether relations between searches involving occluded displays would match better with relations between searches involving completions that are either globally consistent or inconsistent with the visible portions of these displays. Across two experiments involving simple and complex shapes, search times involving occluded displays matched better with those involving globally consistent compared with inconsistent displays. Analogous analyses on deep networks pretrained for object categorization revealed a similar pattern of results for simple but not complex shapes. Thus, deep networks seem to extrapolate simple occluded contours but not more complex contours. Taken together, our results show that amodal completion in humans is sophisticated and can be based on extrapolating global statistical properties.
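The abstract's analysis logic (asking whether search relations for occluded displays match those for consistent or inconsistent completions) is often implemented by correlating vectors of pairwise search dissimilarities. A hedged sketch under that assumption, with placeholder dissimilarity values rather than the reported data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical pairwise search dissimilarities (e.g., derived from search
# times) for the same display pairs, measured with occluded, globally
# consistent, and globally inconsistent versions of the shapes.
occluded     = np.array([0.8, 1.2, 0.5, 1.6, 0.9])
consistent   = np.array([0.9, 1.1, 0.6, 1.5, 1.0])
inconsistent = np.array([1.4, 0.6, 1.1, 0.7, 1.3])

# If occluded shapes are completed in line with their visible contours, their
# search relations should track the consistent completions more closely.
print(pearsonr(occluded, consistent)[0])
print(pearsonr(occluded, inconsistent)[0])
```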
{"title":"What do we see behind an occluder? Amodal completion of statistical properties in complex objects.","authors":"Thomas Cherian, S P Arun","doi":"10.3758/s13414-024-02948-w","DOIUrl":"https://doi.org/10.3758/s13414-024-02948-w","url":null,"abstract":"<p><p>When a spiky object is occluded, we expect its spiky features to continue behind the occluder. Although many real-world objects contain complex features, it is unclear how more complex features are amodally completed and whether this process is automatic. To investigate this issue, we created pairs of displays with identical contour edges up to the point of occlusion, but with occluded portions exchanged. We then asked participants to search for oddball targets among distractors and asked whether relations between searches involving occluded displays would match better with relations between searches involving completions that are either globally consistent or inconsistent with the visible portions of these displays. Across two experiments involving simple and complex shapes, search times involving occluded displays matched better with those involving globally consistent compared with inconsistent displays. Analogous analyses on deep networks pretrained for object categorization revealed a similar pattern of results for simple but not complex shapes. Thus, deep networks seem to extrapolate simple occluded contours but not more complex contours. Taken together, our results show that amodal completion in humans is sophisticated and can be based on extrapolating global statistical properties.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142513426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal error monitoring: Does agency matter?
Pub Date: 2024-10-18 | DOI: 10.3758/s13414-024-02967-7
Tutku Öztel, Fuat Balcı
Error monitoring is the ability to report one's errors without relying on feedback. Although error monitoring has been investigated mostly with choice tasks, recent studies have discovered that participants also parametrically keep track of the magnitude and direction of their temporal, spatial, and numerical judgment errors. We investigated whether temporal error monitoring relies on the internal generative processes that produce the to-be-judged first-order timing performance. We hypothesized that if endogenous processes underlie temporal error monitoring, one should be able to monitor timing errors in emitted but not in observed timing behaviors. We conducted six experiments to test this hypothesis. The first two experiments showed that confidence ratings were negatively related to error magnitude only for emitted behaviors, whereas error directionality judgments were more precise for observed behaviors. Experiment 3 replicated these effects even after controlling for the motor aspects of first-order timing performance. The last three experiments demonstrated that belief of agency (i.e., believing that the error belongs to the self or to someone else) was critical in accounting for the confidence-rating effects observed in the first two experiments. The precision of error directionality judgments was higher in the non-agency condition. These results show that confidence is sensitive to the belief of agency, whereas short-long judgments are sensitive to the actual agency of the timing behavior (i.e., whether the behavior was emitted by the self or by someone else).
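As one illustration of the kind of relationship reported, the link between confidence and error magnitude can be summarized as a trial-wise correlation between confidence ratings and absolute timing error. A minimal sketch with invented numbers (not the study's data or its exact analysis):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical trial data: absolute timing error (ms) and confidence (1-4).
abs_error  = np.array([ 20,  60, 110, 150, 210, 300])
confidence = np.array([  4,   4,   3,   3,   2,   1])

# A negative correlation indicates sensitivity to one's own timing errors,
# which the study found for emitted but not observed behavior.
rho, p = spearmanr(abs_error, confidence)
print(rho, p)
```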
{"title":"Temporal error monitoring: Does agency matter?","authors":"Tutku Öztel, Fuat Balcı","doi":"10.3758/s13414-024-02967-7","DOIUrl":"https://doi.org/10.3758/s13414-024-02967-7","url":null,"abstract":"<p><p>Error monitoring is the ability to report one's errors without relying on feedback. Although error monitoring is investigated mostly with choice tasks, recent studies have discovered that participants parametrically also keep track of the magnitude and direction of their temporal, spatial, and numerical judgment errors. We investigated whether temporal error monitoring relies on internal generative processes that lead to the to-be-judged first-order timing performance. We hypothesized that if the endogenous processes underlie temporal error monitoring, one can monitor timing errors in emitted but not observed timing behaviors. We conducted six experiments to test this hypothesis. The first two experiments showed that confidence ratings were negatively related to error magnitude only in emitted behaviors, but error directionality judgments of observed behaviors were more precise. Experiment 3 replicated these effects even after controlling for the motor aspects of first-order timing performance. The last three experiments demonstrated that belief of agency (i.e., believing that the error belongs to the self or someone else) was critical in accounting for the confidence rating effects observed in the first two experiments. The precision of error directionality judgments was higher in the non-agency condition. These results show that confidence is sensitive to belief, and short-long judgment is sensitive to the actual agency of timing behavior (i.e., whether the behavior was emitted by the self or someone else).</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142481599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influence of musical training on temporal productions when using fast and slow counting paces
Pub Date: 2024-10-18 | DOI: 10.3758/s13414-024-02970-y
Simon Grondin, Antoine Demers, Pier-Alexandre Rioux, Nicola Thibault, Giovanna Mioni
The aim of the study was to assess the ability to maintain a steady pace during a counting task, performed aloud or silently, when a fast (28 counts, one every 900 ms) or slow (18 counts, one every 1,400 ms) pace is adopted (target = 25,200 ms), and to test whether this ability is the same for musicians and nonmusicians. The study analyzes the mean and variability of 30 temporal productions. The results show more variability (a larger coefficient of variation: standard deviation/mean production) when the pace is slow, a finding consistent with previous reports using this task. This finding holds in both the aloud and silent counting conditions and, most importantly, applies to both musicians and nonmusicians. The results also indicate no significant difference in absolute error (|mean production - target duration|). In brief, the capacity to keep variability low when maintaining a pace seems to benefit from musical training, and this training advantage does not depend on counting aloud versus silently and is not restricted to brief intervals.
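The two dependent measures named in the abstract can be written out directly: the coefficient of variation (standard deviation of the productions divided by their mean) and the absolute error (|mean production - 25,200 ms target|). A minimal sketch with invented production values:

```python
import numpy as np

TARGET_MS = 25_200  # 28 counts x 900 ms, or 18 counts x 1,400 ms

def production_measures(productions_ms, target_ms=TARGET_MS):
    """Coefficient of variation and absolute error of temporal productions."""
    productions_ms = np.asarray(productions_ms, dtype=float)
    cv = productions_ms.std(ddof=1) / productions_ms.mean()
    abs_error = abs(productions_ms.mean() - target_ms)
    return cv, abs_error

# Hypothetical set of productions (ms) from one participant.
print(production_measures([24_800, 25_500, 26_100, 24_900, 25_300]))
```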
{"title":"Influence of musical training on temporal productions when using fast and slow counting paces.","authors":"Simon Grondin, Antoine Demers, Pier-Alexandre Rioux, Nicola Thibault, Giovanna Mioni","doi":"10.3758/s13414-024-02970-y","DOIUrl":"https://doi.org/10.3758/s13414-024-02970-y","url":null,"abstract":"<p><p>The aim of the study was to assess the ability to maintain a steady pace during a counting task, aloud or silently, when a fast (28 counts every 900 ms) or slow (18 counts every 1,400 ms) pace is adopted (target = 25,200 ms), and to test whether ability is the same for musician and nonmusicians. The study analyzes the mean and variability of 30 temporal productions. The results show more variability (a larger coefficient of variation: standard deviation/mean production) in the condition where the pace is slow, a finding consistent with previous reports with this task. This finding applies here in both the aloud and silent counting conditions and, most importantly, applies to both musicians and nonmusicians. The results also indicate that there is no significant difference for the absolute error (|mean production - target duration|). In brief, the capacity to keep variability low when maintaining a pace seems to gain benefit from musical training, and this training difference does not depend on counting aloud versus silently and is not restricted to brief intervals.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142481598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contextual cuing survives an interruption from an endogenous cue for attention
Pub Date: 2024-10-10 | DOI: 10.3758/s13414-024-02966-8
Tom Beesley, Louise Earl, Hope Butler, Inez Sharp, Ieva Jaceviciute, David Luque
Three experiments explored how the repetition of a visual search display guides search during contextual cuing under conditions in which the search process is interrupted by an instructional (endogenous) cue for attention. In Experiment 1, participants readily learned about repeated configurations of visual search, before being presented with an endogenous cue for attention towards the target on every trial. Participants used this cue to improve search times, but the repeated contexts continued to guide attention. Experiment 2 demonstrated that the presence of the endogenous cue did not impede the acquisition of contextual cuing. Experiment 3 confirmed the hypothesis that the contextual cuing effect relies largely on localized distractor contexts, following the guidance of attention. Together, the experiments point towards an interplay between two drivers of attention: after the initial guidance of attention, memory representations of the context continue to guide attention towards the target. This suggests that the early part of visual search is inconsequential for the development and maintenance of the contextual cuing effect, and that memory representations are flexibly deployed when the search procedure is dramatically interrupted.
{"title":"Contextual cuing survives an interruption from an endogenous cue for attention.","authors":"Tom Beesley, Louise Earl, Hope Butler, Inez Sharp, Ieva Jaceviciute, David Luque","doi":"10.3758/s13414-024-02966-8","DOIUrl":"https://doi.org/10.3758/s13414-024-02966-8","url":null,"abstract":"<p><p>Three experiments explored how the repetition of a visual search display guides search during contextual cuing under conditions in which the search process is interrupted by an instructional (endogenous) cue for attention. In Experiment 1, participants readily learned about repeated configurations of visual search, before being presented with an endogenous cue for attention towards the target on every trial. Participants used this cue to improve search times, but the repeated contexts continued to guide attention. Experiment 2 demonstrated that the presence of the endogenous cue did not impede the acquisition of contextual cuing. Experiment 3 confirmed the hypothesis that the contextual cuing effect relies largely on localized distractor contexts, following the guidance of attention. Together, the experiments point towards an interplay between two drivers of attention: after the initial guidance of attention, memory representations of the context continue to guide attention towards the target. This suggests that the early part of visual search is inconsequential for the development and maintenance of the contextual cuing effect, and that memory representations are flexibly deployed when the search procedure is dramatically interrupted.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142481597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}