Parafoveal N400 effects reveal that word skipping is associated with deeper lexical processing in the presence of context-driven expectations.
Pub Date: 2024-11-20 | DOI: 10.3758/s13414-024-02984-6
Sara Milligan, Milca Jaime Brunet, Neslihan Caliskan, Elizabeth R Schotter
Readers are able to begin processing upcoming words before directly fixating them, and in some cases skip words altogether (i.e., never fixating them). However, the exact mechanisms and recognition thresholds underlying skipping decisions are not entirely clear. In the current study, we test whether skipping decisions reflect instances of more extensive lexical processing by recording neural language processing (via electroencephalography; EEG) and eye movements simultaneously, and we split trials based on target word-skipping behavior. To test lexical processing of the words, we manipulated the orthographic and phonological relationship between upcoming preview words and a semantically correct (and in some cases, expected) target word using the gaze-contingent display change paradigm. We also manipulated the constraint of the sentences to investigate the extent to which the identification of sublexical features of words depends on a reader's expectations. We extracted fixation-related brain potentials (FRPs) during the fixation on the preceding word (i.e., in response to parafoveal viewing of the manipulated previews). We found that word skipping is associated with larger neural responses (i.e., N400 amplitudes) to semantically incongruous words that did not share a phonological representation with the correct word, and this effect was only observed in high-constraint sentences. These findings suggest that word skipping can be reflective of more extensive linguistic processing, but in the absence of expectations, word skipping may occur based on less fine-grained linguistic processing and be more reflective of identification of plausible or expected sublexical features rather than higher-level lexical processing (e.g., semantic access).
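The trial split described above can be illustrated with a minimal sketch: average the fixation-related potential in a conventional N400 window (300-500 ms after fixation onset on the pre-target word) separately for trials in which the target was later skipped versus fixated. The array shapes, channel choice, sampling rate, and simulated values below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

# Illustrative sketch only: FRP epochs time-locked to the fixation on the
# pre-target word. All shapes and values are assumptions, not study data.
#   frp     : (n_trials, n_samples) amplitude at one centro-parietal channel
#   skipped : (n_trials,) bool, True if the target word was later skipped
rng = np.random.default_rng(0)
srate = 500                                   # assumed sampling rate in Hz
n_trials, n_samples = 200, srate              # 1-s epochs, fabricated noise
frp = rng.normal(0.0, 5.0, size=(n_trials, n_samples))
skipped = rng.random(n_trials) < 0.5

# Mean amplitude in a conventional N400 window (300-500 ms after fixation onset).
win = slice(int(0.3 * srate), int(0.5 * srate))
n400 = frp[:, win].mean(axis=1)

# Compare skipped vs. fixated trials (a more negative mean = larger N400).
print("skipped trials:", n400[skipped].mean())
print("fixated trials:", n400[~skipped].mean())
```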
{"title":"Parafoveal N400 effects reveal that word skipping is associated with deeper lexical processing in the presence of context-driven expectations.","authors":"Sara Milligan, Milca Jaime Brunet, Neslihan Caliskan, Elizabeth R Schotter","doi":"10.3758/s13414-024-02984-6","DOIUrl":"https://doi.org/10.3758/s13414-024-02984-6","url":null,"abstract":"<p><p>Readers are able to begin processing upcoming words before directly fixating them, and in some cases skip words altogether (i.e., never fixated). However, the exact mechanisms and recognition thresholds underlying skipping decisions are not entirely clear. In the current study, we test whether skipping decisions reflect instances of more extensive lexical processing by recording neural language processing (via electroencephalography; EEG) and eye movements simultaneously, and we split trials based on target word-skipping behavior. To test lexical processing of the words, we manipulated the orthographic and phonological relationship between upcoming preview words and a semantically correct (and in some cases, expected) target word using the gaze-contingent display change paradigm. We also manipulated the constraint of the sentences to investigate the extent to which the identification of sublexical features of words depends on a reader's expectations. We extracted fixation-related brain potentials (FRPs) during the fixation on the preceding word (i.e., in response to parafoveal viewing of the manipulated previews). We found that word skipping is associated with larger neural responses (i.e., N400 amplitudes) to semantically incongruous words that did not share a phonological representation with the correct word, and this effect was only observed in high-constraint sentences. These findings suggest that word skipping can be reflective of more extensive linguistic processing, but in the absence of expectations, word skipping may occur based on less fine-grained linguistic processing and be more reflective of identification of plausible or expected sublexical features rather than higher-level lexical processing (e.g., semantic access).</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142683683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Disentangling decision errors from action execution in mouse-tracking studies: The case of effect-based action control.
Pub Date: 2024-11-20 | DOI: 10.3758/s13414-024-02974-8
Solveig Tonn, Moritz Schaaf, Wilfried Kunde, Roland Pfister
Mouse-tracking is regarded as a powerful technique to investigate latent cognitive and emotional states. However, drawing inferences from this manifold data source carries the risk of several pitfalls, especially when using aggregated data rather than single-trial trajectories. Researchers might reach wrong conclusions because averages lump together two distinct contributions that speak towards fundamentally different mechanisms underlying between-condition differences: influences from online processing during action execution and influences from incomplete decision processes. Here, we propose a simple method to assess these factors, thus allowing us to probe whether process-pure interpretations are appropriate. By applying this method to data from 12 published experiments on ideomotor action control, we show that the interpretation of previous results changes when dissociating online processing from decision and initiation errors. Researchers using mouse-tracking to investigate cognition and emotion are therefore well advised to conduct detailed trial-by-trial analyses, particularly when they test for direct leakage of ongoing processing into movement trajectories.
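One way to make the proposed single-trial check concrete is to label each trajectory before averaging: trials whose initial heading points at the wrong response box are treated as decision/initiation errors, while trials that launch correctly but deviate from a straight path are candidates for genuine online processing. The function below is a hypothetical sketch; the labels, the initial-heading window, and the curvature cutoff are assumptions, not the authors' criteria.

```python
import numpy as np

def classify_trial(x, y, correct_side, init_window=10, curvature_cutoff=0.1):
    """Label one mouse trajectory before any averaging (illustrative labels only).

    x, y         : cursor samples in screen units, origin at the start position
    correct_side : -1 if the correct response box is on the left, +1 if on the right
    """
    x, y = np.asarray(x, float), np.asarray(y, float)

    # Initial heading: was the movement launched toward the wrong response box?
    init_dx = x[min(init_window, len(x) - 1)] - x[0]
    if init_dx != 0 and np.sign(init_dx) != np.sign(correct_side):
        return "initiation_error"

    # Otherwise, quantify online adjustment as the maximum perpendicular
    # deviation from the straight line connecting start and end points.
    sx, sy, ex, ey = x[0], y[0], x[-1], y[-1]
    line_len = np.hypot(ex - sx, ey - sy)
    if line_len == 0:
        return "direct"
    dev = np.abs((ex - sx) * (y - sy) - (ey - sy) * (x - sx)) / line_len
    return "online_curvature" if dev.max() > curvature_cutoff else "direct"
```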
{"title":"Disentangling decision errors from action execution in mouse-tracking studies: The case of effect-based action control.","authors":"Solveig Tonn, Moritz Schaaf, Wilfried Kunde, Roland Pfister","doi":"10.3758/s13414-024-02974-8","DOIUrl":"https://doi.org/10.3758/s13414-024-02974-8","url":null,"abstract":"<p><p>Mouse-tracking is regarded as a powerful technique to investigate latent cognitive and emotional states. However, drawing inferences from this manifold data source carries the risk of several pitfalls, especially when using aggregated data rather than single-trial trajectories. Researchers might reach wrong conclusions because averages lump together two distinct contributions that speak towards fundamentally different mechanisms underlying between-condition differences: influences from online-processing during action execution and influences from incomplete decision processes. Here, we propose a simple method to assess these factors, thus allowing us to probe whether process-pure interpretations are appropriate. By applying this method to data from 12 published experiments on ideomotor action control, we show that the interpretation of previous results changes when dissociating online processing from decision and initiation errors. Researchers using mouse-tracking to investigate cognition and emotion are therefore well advised to conduct detailed trial-by-trial analyses, particularly when they test for direct leakage of ongoing processing into movement trajectories.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142683681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction to: On the relationship between spatial attention and semantics in the context of a Stroop paradigm.
Pub Date: 2024-11-19 | DOI: 10.3758/s13414-024-02987-3
Derek Besner, Torin Young
{"title":"Correction to: On the relationship between spatial attention and semantics in the context of a Stroop paradigm.","authors":"Derek Besner, Torin Young","doi":"10.3758/s13414-024-02987-3","DOIUrl":"10.3758/s13414-024-02987-3","url":null,"abstract":"","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can the left hand benefit from being right? The influence of body side on perceived grasping ability.
Pub Date: 2024-11-18 | DOI: 10.3758/s13414-024-02983-7
Rachael L Taylor, Neil McLatchie, Sally A Linkenauger
Right-handed individuals (RHIs) demonstrate perceptual biases towards their right hand, estimating it to be larger and longer than their left. In addition, RHIs estimate that they can grasp larger objects with their right hand than their left. This study investigated whether visual information specifying handedness enhances biases in RHIs' perceptions of their action capabilities. Twenty-two participants were placed in an immersive virtual environment in which self-animated, virtual hands were either presented congruently to their physical hand or mirrored. Following a calibration task, participants estimated their maximum grasp size by adjusting the size of a virtual block until it reached the largest size they thought they could grasp. The results showed that, consistent with research outside of virtual reality, RHIs gave larger estimates of maximum grasp when using their right physical hand than their left. However, this difference remained regardless of how the hand was virtually presented. This finding suggests that proprioceptive feedback may be more important than visual feedback when estimating maximum grasp. In addition, visual feedback on handedness does not appear to enhance biases in perceptions of maximum grasp with the right hand. Considerations for further research into the embodiment of mirrored virtual limbs are discussed.
{"title":"Can the left hand benefit from being right? The influence of body side on perceived grasping ability.","authors":"Rachael L Taylor, Neil McLatchie, Sally A Linkenauger","doi":"10.3758/s13414-024-02983-7","DOIUrl":"10.3758/s13414-024-02983-7","url":null,"abstract":"<p><p>Right-handed individuals (RHIs) demonstrate perceptual biases towards their right hand, estimating it to be larger and longer than their left. In addition, RHIs estimate that they can grasp larger objects with their right hand than their left. This study investigated whether visual information specifying handedness enhances biases in RHIs' perceptions of their action capabilities. Twenty-two participants were placed in an immersive virtual environment in which self-animated, virtual hands were either presented congruently to their physical hand or mirrored. Following a calibration task, participants estimated their maximum grasp size by adjusting the size of a virtual block until it reached the largest size they thought they could grasp. The results showed that, consistent with research outside of virtual reality, RHIs gave larger estimates of maximum grasp when using their right physical hand than their left. However, this difference remained regardless of how the hand was virtually presented. This finding suggests that proprioceptive feedback may be more important than visual feedback when estimating maximum grasp. In addition, visual feedback on handedness does not appear to enhance biases in perceptions of maximum grasp with the right hand. Considerations for further research into the embodiment of mirrored virtual limbs are discussed.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions.
Pub Date: 2024-11-18 | DOI: 10.3758/s13414-024-02978-4
Roy S Hessels, Peitong Li, Sofia Balali, Martin K Teunisse, Ronald Poppe, Diederick C Niehorster, Marcus Nyström, Jeroen S Benjamins, Atsushi Senju, Albert A Salah, Ignace T C Hooge
In human interactions, gaze may be used to acquire information for goal-directed actions, to acquire information related to the interacting partner's actions, and in the context of multimodal communication. At present, there are no models of gaze behavior in the context of vision that adequately incorporate these three components. In this study, we aimed to uncover and quantify patterns of within-person gaze-action coupling, gaze-gesture and gaze-speech coupling, and coupling between one person's gaze and another person's manual actions, gestures, or speech (or exogenous attraction of gaze) during dyadic collaboration. We showed that in the context of a collaborative Lego Duplo-model copying task, within-person gaze-action coupling is strongest, followed by within-person gaze-gesture coupling, and coupling between gaze and another person's actions. When trying to infer gaze location from one's own manual actions, gestures, or speech or that of the other person, only one's own manual actions were found to lead to better inference compared to a baseline model. The improvement in inferring gaze location was limited, contrary to what might be expected based on previous research. We suggest that inferring gaze location may be most effective for constrained tasks in which different manual actions follow in a quick sequence, while gaze-gesture and gaze-speech coupling may be stronger in unconstrained conversational settings or when the collaboration requires more negotiation. Our findings may serve as an empirical foundation for future theory and model development, and may further be relevant in the context of action/intention prediction for (social) robotics and effective human-robot interaction.
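Coupling of this kind could, under simple assumptions, be summarized as a lagged cross-correlation between two event time series (e.g., "gaze is on a given brick" and "a hand is on that brick"), with the lag of the peak indicating whether gaze leads or follows the action. This is only an illustrative sketch, not the analysis used in the study.

```python
import numpy as np

def coupling_profile(gaze_on_target, hand_on_target, max_lag):
    """Cross-correlation between two binary time series at lags -max_lag..+max_lag
    (in samples). A peak at a negative lag means gaze tends to arrive at the
    object before the hand does; a positive-lag peak means gaze follows."""
    g = np.asarray(gaze_on_target, float)
    h = np.asarray(hand_on_target, float)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.empty(len(lags))
    for i, lag in enumerate(lags):
        if lag < 0:                       # gaze at t vs. hand at t + |lag|
            a, b = g[:lag], h[-lag:]
        elif lag > 0:                     # gaze at t + lag vs. hand at t
            a, b = g[lag:], h[:-lag]
        else:
            a, b = g, h
        corr[i] = np.corrcoef(a, b)[0, 1]
    return lags, corr
```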
{"title":"Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions.","authors":"Roy S Hessels, Peitong Li, Sofia Balali, Martin K Teunisse, Ronald Poppe, Diederick C Niehorster, Marcus Nyström, Jeroen S Benjamins, Atsushi Senju, Albert A Salah, Ignace T C Hooge","doi":"10.3758/s13414-024-02978-4","DOIUrl":"10.3758/s13414-024-02978-4","url":null,"abstract":"<p><p>In human interactions, gaze may be used to acquire information for goal-directed actions, to acquire information related to the interacting partner's actions, and in the context of multimodal communication. At present, there are no models of gaze behavior in the context of vision that adequately incorporate these three components. In this study, we aimed to uncover and quantify patterns of within-person gaze-action coupling, gaze-gesture and gaze-speech coupling, and coupling between one person's gaze and another person's manual actions, gestures, or speech (or exogenous attraction of gaze) during dyadic collaboration. We showed that in the context of a collaborative Lego Duplo-model copying task, within-person gaze-action coupling is strongest, followed by within-person gaze-gesture coupling, and coupling between gaze and another person's actions. When trying to infer gaze location from one's own manual actions, gestures, or speech or that of the other person, only one's own manual actions were found to lead to better inference compared to a baseline model. The improvement in inferring gaze location was limited, contrary to what might be expected based on previous research. We suggest that inferring gaze location may be most effective for constrained tasks in which different manual actions follow in a quick sequence, while gaze-gesture and gaze-speech coupling may be stronger in unconstrained conversational settings or when the collaboration requires more negotiation. Our findings may serve as an empirical foundation for future theory and model development, and may further be relevant in the context of action/intention prediction for (social) robotics and effective human-robot interaction.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monkeys overestimate connected arrays in a relative quantity task: A reverse connectedness illusion.
Pub Date: 2024-11-18 | DOI: 10.3758/s13414-024-02977-5
Michael J Beran, Maisy D Englund, Elizabeth L Haseltine, Christian Agrillo, Audrey E Parrish
Humans and many other species show consistent patterns of responding when making relative quantity ("more or less") judgments of stimuli. This includes the well-established ratio effect that determines the degree of discriminability among sets of items according to Weber's Law. However, humans and other species also are susceptible to some errors in accurately representing quantity, and these illusions reflect important aspects of the relation of perception to quantity representation. One newly described illusion in humans is the connectedness illusion, in which arrays with items that are connected to each other tend to be underestimated relative to arrays without such connection. In this pre-registered report, we assessed whether this illusion occurred in other species, testing rhesus macaque monkeys and capuchin monkeys. Contrary to our pre-registered predictions, monkeys showed an opposite bias to humans, preferring to select arrays with connected items as being more numerous. Thus, monkeys do not show this illusion to the same extent as humans.
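The ratio effect mentioned above is commonly captured by assuming that each numerosity is represented with scalar (Weber) noise, so accuracy depends on the ratio between the two sets and a single Weber fraction w. The sketch below uses one standard formulation of that model; the formula choice and the value of w are illustrative, not taken from this study.

```python
import math

def p_correct(n1, n2, w):
    """Predicted accuracy for judging which of two arrays is more numerous,
    assuming linear representations with scalar (Weber) noise.
    w is the Weber fraction; smaller w means sharper discrimination."""
    d = abs(n1 - n2) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    return 1.0 - 0.5 * math.erfc(d / math.sqrt(2.0))

# Accuracy depends on the ratio between the sets, not their absolute sizes:
for n1, n2 in [(10, 11), (10, 15), (10, 20), (20, 40)]:
    print(n1, n2, round(p_correct(n1, n2, w=0.2), 3))
```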
{"title":"Monkeys overestimate connected arrays in a relative quantity task: A reverse connectedness illusion.","authors":"Michael J Beran, Maisy D Englund, Elizabeth L Haseltine, Christian Agrillo, Audrey E Parrish","doi":"10.3758/s13414-024-02977-5","DOIUrl":"10.3758/s13414-024-02977-5","url":null,"abstract":"<p><p>Humans and many other species show consistent patterns of responding when making relative quantity (\"more or less\") judgments of stimuli. This includes the well-established ratio effect that determines the degree of discriminability among sets of items according to Weber's Law. However, humans and other species also are susceptible to some errors in accurately representing quantity, and these illusions reflect important aspects of the relation of perception to quantity representation. One newly described illusion in humans is the connectedness illusion, in which arrays with items that are connected to each other tend to be underestimated relative to arrays without such connection. In this pre-registered report, we assessed whether this illusion occurred in other species, testing rhesus macaque monkeys and capuchin monkeys. Contrary to our pre-registered predictions, monkeys showed an opposite bias to humans, preferring to select arrays with connected items as being more numerous. Thus, monkeys do not show this illusion to the same extent as humans.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Where does the processing of size meet the processing of space?
Pub Date: 2024-11-12 | DOI: 10.3758/s13414-024-02979-3
Peter Wühr, Herbert Heuer
Previous studies revealed an S-R compatibility effect between physical stimulus size and response location, with faster left (right) responses to small (large) stimuli, respectively, as compared to the reverse assignments. Here, we investigated the locus of interactions between the processing of size and spatial locations. In Experiment 1, we explored whether stimulus size and stimulus location interact at a perceptual level of processing when responses lack spatiality. The stimuli varied on three feature dimensions (color, size, location), and participants responded vocally to each feature in a separate task. Most importantly, we failed to observe a size-location congruency effect in the color-naming task where S-R compatibility effects were excluded. In Experiment 2, responses to color were spatial, that is, key-presses with the left and right hand. With these responses there was a congruency effect. In addition, we tested the interaction of the size-location compatibility effect with the Simon effect, which is known to originate at the stage of response selection. We observed an interaction between the two effects only with a subsample of participants with slower reaction times (RTs) and a larger size-location compatibility effect in a control condition. Together, the results suggest that the size-location compatibility effect arises at the response selection stage. An extended leaky, competing accumulator model with independent staggered impacts of stimulus size and stimulus location on response selection fits the data of Experiment 2 and specifies how the size-location compatibility effect and the Simon effect can arise during response selection.
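The model referenced above extends the leaky, competing accumulator (LCA) of Usher and McClelland (2001), in which each response alternative accumulates its input while leaking activation and inhibiting its competitor. The sketch below implements only the basic two-choice LCA with illustrative parameters; the paper's extension with staggered impacts of stimulus size and location is not reproduced here.

```python
import numpy as np

def lca_trial(inputs, leak=0.2, inhibition=0.3, noise=0.1,
              threshold=1.0, dt=0.01, max_time=3.0, rng=None):
    """One simulated two-choice trial of a basic leaky competing accumulator.
    inputs: drift toward each response. Returns (choice index, decision time in s)."""
    rng = rng or np.random.default_rng()
    x = np.zeros(2)                                     # accumulator activations
    for step in range(int(max_time / dt)):
        dx = (inputs - leak * x - inhibition * x[::-1]) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal(2)
        x = np.maximum(x + dx, 0.0)                     # activations stay non-negative
        if x.max() >= threshold:
            return int(x.argmax()), (step + 1) * dt
    return int(x.argmax()), max_time                    # no crossing: respond at deadline

# Example: stimulus evidence slightly favors the left response (index 0).
print(lca_trial(np.array([1.2, 0.8]), rng=np.random.default_rng(1)))
```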
{"title":"Where does the processing of size meet the processing of space?","authors":"Peter Wühr, Herbert Heuer","doi":"10.3758/s13414-024-02979-3","DOIUrl":"https://doi.org/10.3758/s13414-024-02979-3","url":null,"abstract":"<p><p>Previous studies revealed an S-R compatibility effect between physical stimulus size and response location, with faster left (right) responses to small (large) stimuli, respectively, as compared to the reverse assignments. Here, we investigated the locus of interactions between the processing of size and spatial locations. In Experiment 1, we explored whether stimulus size and stimulus location interact at a perceptual level of processing when responses lack spatiality. The stimuli varied on three feature dimensions (color, size, location), and participants responded vocally to each feature in a separate task. Most importantly, we failed to observe a size-location congruency effect in the color-naming task where S-R compatibility effects were excluded. In Experiment 2, responses to color were spatial, that is, key-presses with the left and right hand. With these responses there was a congruency effect. In addition, we tested the interaction of the size-location compatibility effect with the Simon effect, which is known to originate at the stage of response selection. We observed an interaction between the two effects only with a subsample of participants with slower reaction times (RTs) and a larger size-location compatibility effect in a control condition. Together, the results suggest that the size-location compatibility effect arises at the response selection stage. An extended leaky, competing accumulator model with independent staggered impacts of stimulus size and stimulus location on response selection fits the data of Experiment 2 and specifies how the size-location compatibility effect and the Simon effect can arise during response selection.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142633289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Viewed touch influences tactile detection by altering decision criterion.
Pub Date: 2024-11-05 | DOI: 10.3758/s13414-024-02959-7
Anupama Nair, Jared Medina
Our tactile perception is shaped not only by somatosensory input but also by visual information. Prior research on the effect of viewing touch on tactile processing has found higher tactile detection rates when paired with viewed touch versus a control visual stimulus. Therefore, some have proposed a vicarious tactile system that activates somatosensory areas when viewing touch, resulting in enhanced tactile perception. However, we propose an alternative explanation: Viewing touch makes the observer more liberal in their decision to report a tactile stimulus relative to not viewing touch, also resulting in higher tactile detection rates. To disambiguate between the two explanations, we examined the effect of viewed touch on tactile sensitivity and decision criterion using signal detection theory. In three experiments, participants engaged in a tactile detection task while viewing a hand being touched or approached by a finger, a red dot, or no stimulus. We found that viewing touch led to a consistent, liberal criterion shift but inconsistent enhancement in tactile sensitivity relative to not viewing touch. Moreover, observing a finger approach the hand was sufficient to bias the criterion. These findings suggest that viewing touch influences tactile performance by altering tactile decision mechanisms rather than the tactile perceptual signal.
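Signal detection theory separates the two accounts by estimating sensitivity (d') and criterion (c) from hit and false-alarm rates under the equal-variance Gaussian model: d' = z(H) - z(F) and c = -0.5[z(H) + z(F)]. The sketch below applies a simple add-0.5 correction for extreme rates, which is one common convention and an assumption here, not necessarily the authors' choice.

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: returns (d_prime, criterion_c).
    Adding 0.5 to each cell avoids infinite z-scores when a rate is 0 or 1
    (one common convention; an assumption in this sketch)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Example: the first (hypothetical) observer responds "touch present" more
# readily than the second, yielding a lower (more liberal) criterion c.
print(sdt_indices(hits=45, misses=5, false_alarms=20, correct_rejections=30))
print(sdt_indices(hits=40, misses=10, false_alarms=5, correct_rejections=45))
```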
{"title":"Viewed touch influences tactile detection by altering decision criterion.","authors":"Anupama Nair, Jared Medina","doi":"10.3758/s13414-024-02959-7","DOIUrl":"https://doi.org/10.3758/s13414-024-02959-7","url":null,"abstract":"<p><p>Our tactile perception is shaped not only by somatosensory input but also by visual information. Prior research on the effect of viewing touch on tactile processing has found higher tactile detection rates when paired with viewed touch versus a control visual stimulus. Therefore, some have proposed a vicarious tactile system that activates somatosensory areas when viewing touch, resulting in enhanced tactile perception. However, we propose an alternative explanation: Viewing touch makes the observer more liberal in their decision to report a tactile stimulus relative to not viewing touch, also resulting in higher tactile detection rates. To disambiguate between the two explanations, we examined the effect of viewed touch on tactile sensitivity and decision criterion using signal detection theory. In three experiments, participants engaged in a tactile detection task while viewing a hand being touched or approached by a finger, a red dot, or no stimulus. We found that viewing touch led to a consistent, liberal criterion shift but inconsistent enhancement in tactile sensitivity relative to not viewing touch. Moreover, observing a finger approach the hand was sufficient to bias the criterion. These findings suggest that viewing touch influences tactile performance by altering tactile decision mechanisms rather than the tactile perceptual signal.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inhibition of return in a 3D scene depends on the direction of depth switch between cue and target.
Pub Date: 2024-10-31 | DOI: 10.3758/s13414-024-02969-5
Hanna Haponenko, Noah Britt, Brett Cochrane, Hong-Jin Sun
Inhibition of return (IOR) is a phenomenon that reflects slower target detection when the target appears at a previously cued rather than uncued location. In the present study, we investigated the extent to which IOR occurs in three-dimensional (3D) scenes comprising pictorial depth information. Peripheral cues and targets appeared on top of 3D rectangular boxes placed on the surface of a textured ground plane in virtual space. When the target appeared at a farther location than the cue, the magnitude of the IOR effect in the 3D condition remained similar to the values found in the two-dimensional (2D) control condition (IOR was depth-blind). When the target appeared at a nearer location than the cue, the magnitude of the IOR effect was significantly attenuated (IOR was depth-specific). The present findings address inconsistencies in the literature on the effect of depth on IOR and support the notion that visuospatial attention exhibits a near-space advantage even in 3D scenes consisting entirely of pictorial depth information.
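In designs like this, the IOR effect is simply the mean reaction-time difference between cued and uncued target locations, computed within each cue-to-target depth relation. The sketch below shows that computation on hypothetical trial-level data; the column names and values are made up for illustration and are not the study's results.

```python
import pandas as pd

# Hypothetical trial-level data; labels and RTs are illustrative only.
trials = pd.DataFrame({
    "depth":  ["near_to_far"] * 4 + ["far_to_near"] * 4,
    "cueing": ["cued", "cued", "uncued", "uncued"] * 2,
    "rt_ms":  [420, 430, 390, 395, 405, 410, 398, 400],
})

# IOR effect = mean RT at the cued location minus mean RT at the uncued location,
# computed separately for each cue-to-target depth relation.
means = trials.groupby(["depth", "cueing"])["rt_ms"].mean().unstack()
ior = means["cued"] - means["uncued"]
print(ior)  # an attenuated value for far-to-near would mirror the pattern reported above
```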
{"title":"Inhibition of return in a 3D scene depends on the direction of depth switch between cue and target.","authors":"Hanna Haponenko, Noah Britt, Brett Cochrane, Hong-Jin Sun","doi":"10.3758/s13414-024-02969-5","DOIUrl":"https://doi.org/10.3758/s13414-024-02969-5","url":null,"abstract":"<p><p>Inhibition of return (IOR) is a phenomenon that reflects slower target detection when the target appears at a previously cued rather than uncued location. In the present study, we investigated the extent to which IOR occurs in three-dimensional (3D) scenes comprising pictorial depth information. Peripheral cues and targets appeared on top of 3D rectangular boxes placed on the surface of a textured ground plane in virtual space. When the target appeared at a farther location than the cue, the magnitude of the IOR effect in the 3D condition remained similar to the values found in the two-dimensional (2D) control condition (IOR was depth-blind). When the target appeared at a nearer location than the cue, the magnitude of the IOR effect was significantly attenuated (IOR was depth-specific). The present findings address inconsistencies in the literature on the effect of depth on IOR and support the notion that visuospatial attention exhibits a near-space advantage even in 3D scenes consisting entirely of pictorial depth information.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142559550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}