Effects of cortical distance on the Ebbinghaus and Delboeuf illusions
Pub Date: 2023-07-01 | DOI: 10.1177/03010066231175014
Poutasi W B Urale, Dietrich Samuel Schwarzkopf
The Ebbinghaus and Delboeuf illusions affect the perceived size of a target circle depending on the size and proximity of circular inducers or a ring. Converging evidence suggests that these illusions are driven by interactions between contours mediated by their cortical distance in primary visual cortex. We tested the effect of cortical distance on these illusions in two ways. First, we manipulated the retinal distance between target and inducers in a two-interval forced-choice design, finding that targets appeared larger with a closer surround. Next, we predicted that targets presented peripherally should appear larger due to cortical magnification, and we therefore tested illusion strength with the stimuli positioned at various eccentricities; the results supported this hypothesis. We calculated estimated cortical distances between illusion elements in each experiment and used these estimates to compare the relationship between cortical distance and illusion strength across experiments. In a final experiment, we modified the Delboeuf illusion to test whether the effect of the inducers/annuli in this illusion is modulated by an inhibitory surround. We found evidence that an additional outer ring makes targets appear smaller compared to a single-ring condition, suggesting that near and distal contours have antagonistic effects on perceived target size.
{"title":"Effects of cortical distance on the Ebbinghaus and Delboeuf illusions.","authors":"Poutasi W B Urale, Dietrich Samuel Schwarzkopf","doi":"10.1177/03010066231175014","DOIUrl":"https://doi.org/10.1177/03010066231175014","url":null,"abstract":"<p><p>The Ebbinghaus and Delboeuf illusions affect the perceived size of a target circle depending on the size and proximity of circular inducers or a ring. Converging evidence suggests that these illusions are driven by interactions between contours mediated by their cortical distance in primary visual cortex. We tested the effect of cortical distance on these illusions using two methods: First, we manipulated retinal distance between target and inducers in a two-interval forced choice design, finding that targets appeared larger with a closer surround. Next, we predicted that targets presented peripherally should appear larger due to cortical magnification. Hence, we tested the illusion strength when positioning the stimuli at various eccentricities, with results supporting this hypothesis. We calculated estimated cortical distances between illusion elements in each experiment and used these estimates to compare the relationship between cortical distance and illusion strength across our experiments. In a final experiment, we modified the Delboeuf illusion to test whether the influence of the inducers/annuli in this illusion is influenced by an inhibitory surround. We found evidence that an additional outer ring makes targets appear smaller compared to a single-ring condition, suggesting that near and distal contours have antagonistic effects on perceived target size.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":"52 7","pages":"459-483"},"PeriodicalIF":1.7,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10291393/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10084947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of short-term -30° HDT on contrast sensitivity
Pub Date: 2023-07-01 | DOI: 10.1177/03010066231175829
Jing Li, Shijie Shang, Man Zhang, Pinqing Yue, Weicong Ren, Pan Zhang, Zeng Wang, Di Wu
Potential cognitive and physiological alterations due to space environments have been investigated in long-term space flight and in various microgravity-like conditions, for example, head-down tilt (HDT), confinement, isolation, and immobilization. However, little is known about the influence of simulated microgravity environments on visual function. Contrast sensitivity (CS), which indicates how much contrast a person requires to see a target, is a fundamental feature of human vision. Here, we investigated how CS changed after 1 h of -30° HDT and determined the corresponding mechanisms with a perceptual template model. A quick contrast sensitivity function procedure was used to assess CS at ten spatial frequencies and three external noise levels. We found that (1) relative to the +30° head-up tilt (HUT) position, 1 h of -30° HDT significantly impaired CS at intermediate frequencies when external noise was present; (2) CS loss was not detected in the zero- or high-noise conditions; (3) HDT-induced CS loss was characterized by an impaired perceptual template; and (4) self-report questionnaires indicated that, after HDT, subjects felt less pleasure and more excitement, less comfort and more fatigue from screen light, less comfort in the area around the eyes, and serious symptoms such as piercing pain, blurring, soreness, strain, eye burning, and dizziness. These findings improve our understanding of the negative effects of simulated microgravity on visual function and elucidate the potential risks to astronauts during space flight.
{"title":"Effects of short-term -30° HDT on contrast sensitivity.","authors":"Jing Li, Shijie Shang, Man Zhang, Pinqing Yue, Weicong Ren, Pan Zhang, Zeng Wang, Di Wu","doi":"10.1177/03010066231175829","DOIUrl":"https://doi.org/10.1177/03010066231175829","url":null,"abstract":"<p><p>Potential cognitive and physiological alterations due to space environments have been investigated in long-term space flight and various microgravity-like conditions, for example, head-down tilt (HDT), confinement, isolation, and immobilization. However, little is known about the influence of simulated microgravity environments on visual function. Contrast sensitivity (CS), which indicates how much contrast a person requires to see a target, is a fundamental feature of human vision. Here, we investigated how the CS changed by 1-h -30° HDT and determined the corresponding mechanisms with a perceptual template model. A quick contrast sensitivity function procedure was used to assess the CS at ten spatial frequencies and three external noise levels. We found that (1) relative to the + 30° head-up tilt (HUT) position, 1-h -30° HDT significantly deteriorated the CS at intermediate frequencies when external noise was present; (2) CS loss was not detected in zero- or high-noise conditions; (3) HDT-induced CS loss was characterized by impaired perceptual template; and (4) self-reported questionnaires indicated that subjects felt less pleasure and more excitement, less comfort and more fatigued by screen light, less comfort in the area around the eye, and serious symptoms such as piercing pain, blur acid, strain, eye burning, and dizziness after HDT. These findings improve our understanding of the negative effects of simulated microgravity on visual function and elucidate the potential risks of astronauts during space flight.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":"52 7","pages":"502-513"},"PeriodicalIF":1.7,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10058427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gone Fishin': Perceiving the length of one object that is non-rigidly attached to a wielded object
Pub Date: 2023-07-01 | DOI: 10.1177/03010066231175599
Jeffrey B Wagman, Tyler Duffrin, Christopher C Pagano, Brian M Day
We performed four experiments to investigate whether people can perceive the length of a target object (a "fish") that is attached by a length of string to a freely wielded object (the "fishing pole"), and if so, whether this ability is grounded in the sensitivity of the touch system to invariant mechanical parameters describing the forces and torques required to move the target object. In particular, we investigated sensitivity to mass, static moment, and rotational inertia: respectively, the force required to keep an object from falling due to gravity, the torque required to keep an object from rotating due to gravity, and the torques required to actively rotate an object in different directions. We manipulated the length of the target object (Experiment 1), the mass of the target object (Experiment 2), and the mass distribution of the target object (Experiments 3 and 4). Overall, the results of the four experiments showed that participants can perform this task. Moreover, when the task is configured so that it more closely approximates a wielding-at-a-distance task, the ability to do so is grounded in sensitivity to such forces and torques.
{"title":"Gone Fishin': Perceiving the length of one object that is non-rigidly attached to a wielded object.","authors":"Jeffrey B Wagman, Tyler Duffrin, Christopher C Pagano, Brian M Day","doi":"10.1177/03010066231175599","DOIUrl":"https://doi.org/10.1177/03010066231175599","url":null,"abstract":"<p><p>We performed four experiments to investigate whether people can perceive the length of a target object (a \"fish\") that is attached to a freely wielded object (the \"fishing pole\") by a length of string, and if so, whether this ability is grounded in the sensitivity of the touch system to invariant mechanical parameters that describe the forces and torques required to move the target object. In particular, we investigated sensitivity to mass, static moment, and rotational inertia-the forces required to keep an object from falling due to gravity, the torque required to keep an object from rotating due to gravity, and the torques required to actively rotate an object in different directions, respectively. We manipulated the length of the target object (Experiment 1), the mass of the target object (Experiment 2), and the mass distribution of the target object (Experiments 3 and 4). Overall, the results of the four experiments showed that participants can perform this task. Moreover, when the task is configured such that it more closely approximates a wielding at a distance task, the ability to do so is grounded in sensitivity to such forces and torques.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":"52 7","pages":"484-501"},"PeriodicalIF":1.7,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9681899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Foveal to peripheral extrapolation of facial emotion
Pub Date: 2023-07-01 | DOI: 10.1177/03010066231172087
Feriel Zoghlami, Matteo Toscani
Peripheral vision is characterized by poor resolution. Recent evidence from brightness perception suggests that missing information is filled out with information at fixation. Here we show a novel filling-out mechanism: when participants are presented with a crowd of faces, the perceived emotion of faces in peripheral vision is biased towards the emotion of the face at fixation. This mechanism is particularly important in social situations where people often need to perceive the overall mood of a crowd. Some faces in the crowd are more likely to catch people's attention and be looked at directly, while others are only seen peripherally. Our findings suggest that the perceived emotion of these peripheral faces, and the overall perceived mood of the crowd, is biased by the emotions of the faces that people look at directly.
{"title":"Foveal to peripheral extrapolation of facial emotion.","authors":"Feriel Zoghlami, Matteo Toscani","doi":"10.1177/03010066231172087","DOIUrl":"https://doi.org/10.1177/03010066231172087","url":null,"abstract":"<p><p>Peripheral vision is characterized by poor resolution. Recent evidence from brightness perception suggests that missing information is filled out with information at fixation. Here we show a novel filling-out mechanism: when participants are presented with a crowd of faces, the perceived emotion of faces in peripheral vision is biased towards the emotion of the face at fixation. This mechanism is particularly important in social situations where people often need to perceive the overall mood of a crowd. Some faces in the crowd are more likely to catch people's attention and be looked at directly, while others are only seen peripherally. Our findings suggest that the perceived emotion of these peripheral faces, and the overall perceived mood of the crowd, is biased by the emotions of the faces that people look at directly.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":"52 7","pages":"514-523"},"PeriodicalIF":1.7,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/f3/75/10.1177_03010066231172087.PMC10291354.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9707292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the human binocular visual system using multi-modal magnetic resonance imaging
Pub Date: 2023-07-01 | DOI: 10.1177/03010066231178664
Holly Bridge, Ifan Betina Ip, Andrew J Parker
Having two forward-facing eyes with slightly different viewpoints enables animals, including humans, to discriminate fine differences in depth (disparities), which can facilitate interaction with the world. The binocular visual system starts in the primary visual cortex because that is where information from the two eyes is integrated for the first time. Magnetic resonance imaging (MRI) is an ideal tool to non-invasively investigate this system since it can provide a range of detailed measures about the structure, function, neurochemistry and connectivity of the human brain. Since binocular disparity is used for both action and object recognition, the binocular visual system is a valuable model system in neuroscience for understanding how basic sensory cues are transformed into behaviourally relevant signals. In this review, we consider how MRI has contributed to the understanding of binocular vision and depth perception in the human brain. Firstly, MRI provides the ability to image the entire brain simultaneously, allowing the contributions of specific visual areas to depth perception to be compared. A large body of work using functional MRI has led to an understanding not only of the extensive networks of brain areas involved in depth perception but also of the fine-scale macro-organisation for binocular processing within individual visual areas. Secondly, MRI can uncover mechanistic information underlying binocular combination with the use of MR spectroscopy. This method can quantify neurotransmitters, including GABA and glutamate, within restricted regions of the brain and evaluate the role of these inhibitory and excitatory neurochemicals in binocular vision. Thirdly, it is possible to measure the nature and microstructure of the pathways underlying depth perception using diffusion MRI. Understanding these pathways provides insight into the importance of the connections between areas implicated in depth perception. Finally, MRI can help to understand changes in the visual system resulting from amblyopia, a neural condition in which binocular vision does not develop correctly in childhood.
{"title":"Investigating the human binocular visual system using multi-modal magnetic resonance imaging.","authors":"Holly Bridge, Ifan Betina Ip, Andrew J Parker","doi":"10.1177/03010066231178664","DOIUrl":"https://doi.org/10.1177/03010066231178664","url":null,"abstract":"<p><p>Having two forward-facing eyes with slightly different viewpoints enables animals, including humans, to discriminate fine differences in depth (disparities), which can facilitate interaction with the world. The binocular visual system starts in the primary visual cortex because that is where information from the eyes is integrated for the first time. Magnetic resonance imaging (MRI) is an ideal tool to non-invasively investigate this system since it can provide a range of detailed measures about structure, function, neurochemistry and connectivity of the human brain. Since binocular disparity is used for both action and object recognition, the binocular visual system is a valuable model system in neuroscience for understanding how basic sensory cues are transformed into behaviourally relevant signals. In this review, we consider how MRI has contributed to the understanding of binocular vision and depth perception in the human brain. Firstly, MRI provides the ability to image the entire brain simultaneously to compare the contribution of specific visual areas to depth perception. A large body of work using functional MRI has led to an understanding of the extensive networks of brain areas involved in depth perception, but also the fine-scale macro-organisation for binocular processing within individual visual areas. Secondly, MRI can uncover mechanistic information underlying binocular combination with the use of MR spectroscopy. This method can quantify neurotransmitters including GABA and glutamate within restricted regions of the brain, and evaluate the role of these inhibitory and excitatory neurochemicals in binocular vision. Thirdly, it is possible to measure the nature and microstructure of pathways underlying depth perception using diffusion MRI. Understanding these pathways provides insight into the importance of the connections between areas implicated in depth perception. Finally, MRI can help to understand changes in the visual system resulting from amblyopia, a neural condition where binocular vision does not develop correctly in childhood.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":"52 7","pages":"441-458"},"PeriodicalIF":1.7,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9682352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Book Review: Performing Deception: Learning, Skill and the Art of Conjuring by B. Rappert
Pub Date: 2023-06-13 | DOI: 10.1177/03010066231181324
Vebjørn Ekroll
{"title":"Book Review: Performing Deception: Learning, Skill and the Art of Conjuring by B. Rappert","authors":"Vebjørn Ekroll","doi":"10.1177/03010066231181324","DOIUrl":"https://doi.org/10.1177/03010066231181324","url":null,"abstract":"","PeriodicalId":49708,"journal":{"name":"Perception","volume":"52 1","pages":"608 - 609"},"PeriodicalIF":1.7,"publicationDate":"2023-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46517193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial intelligence's interpretation of the neuroanatomical aspect of Peter Paul Rubens's copy of "The Battle of Anghiari" by Leonardo da Vinci
Pub Date: 2023-06-01 | DOI: 10.1177/03010066231165915
Grigol Keshelava
We tested how Peter Paul Rubens's copy of "The Battle of Anghiari" by Leonardo da Vinci would be interpreted by artificial intelligence (AI) from a neuroanatomical perspective. We used WOMBO Dream, an AI-based algorithm that creates images based on words and figures. The keyword we provided for the algorithm was "brain" and the reference image was Rubens's drawing. The AI interpreted the whole drawing as a representation of the brain. The image generated by the algorithm was similar to our interpretation of the same painting.
{"title":"Artificial intelligence's interpretation of the neuroanatomical aspect of Peter Paul Rubens's copy of \"The Battle of Anghiari\" by Leonardo da Vinci.","authors":"Grigol Keshelava","doi":"10.1177/03010066231165915","DOIUrl":"https://doi.org/10.1177/03010066231165915","url":null,"abstract":"<p><p>We tested to see how Ruben's copy of \"The Battle of Anghiari\" by Leonardo da Vinci would be interpreted by AI in a neuroanatomical aspect. We used WOMBO Dream, an artificial intelligence (AI)-based algorithm that creates images based on words and figures. The keyword we provided for the algorithm was \"brain\" and the reference image was Ruben's drawing. AI interpreted the whole drawing as a representation of the brain. The image generated by the algorithm was similar to our interpretation of the same painting.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":"52 6","pages":"432-435"},"PeriodicalIF":1.7,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9501586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cascaded acquisition of spatial contextual cueing
Pub Date: 2023-06-01 | DOI: 10.1177/03010066231171357
Kin-Pou Lie
Spatial contextual cueing refers to the facilitation of visual search when invariant spatial configurations of the target and distractors are learned. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of spatial contextual cueing. The findings support the reverse hierarchy theory, which predicts that the acquisition of spatial contextual cueing progresses in an easy-to-difficult cascading manner. However, these findings are inconsistent with instance theory, which predicts that the acquisition of spatial contextual cueing in easy-half-repeated trials would keep pace with that in difficult-half-repeated trials. This study concludes that compared with instance theory, reverse hierarchy theory more plausibly explains the acquisition of spatial contextual cueing.
{"title":"Cascaded acquisition of spatial contextual cueing.","authors":"Kin-Pou Lie","doi":"10.1177/03010066231171357","DOIUrl":"https://doi.org/10.1177/03010066231171357","url":null,"abstract":"<p><p>Spatial contextual cueing refers to the facilitation of visual search when invariant spatial configurations of the target and distractors are learned. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of spatial contextual cueing. The findings support the reverse hierarchy theory, which predicts that the acquisition of spatial contextual cueing progresses in an easy-to-difficult cascading manner. However, these findings are inconsistent with instance theory, which predicts that the acquisition of spatial contextual cueing in easy-half-repeated trials would keep pace with that in difficult-half-repeated trials. This study concludes that compared with instance theory, reverse hierarchy theory more plausibly explains the acquisition of spatial contextual cueing.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":"52 6","pages":"423-431"},"PeriodicalIF":1.7,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9856791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Piaget's 3-mountains task with impossible options: Sighted, blindfolded, early and late blind participants
Pub Date: 2023-06-01 | DOI: 10.1177/03010066231170071
Hsin-Yi Chao, Marta Wnuczko, John M Kennedy
In Piaget's 3-mountains task, 3D objects (a cube, a cone and a sphere) sit on a square tabletop. They are portrayed in 2D pictures as elevations (projections to the sides), such as one with a square on the left, a triangle in the middle and a circle on the right. Three objects offer six elevations, of which four are possible and two impossible. The possibles are elevations from the sides of the table: front, left, right and rear. In the impossibles, an object in the corner of the table is shown in the middle of an elevation. Sighted, sighted-blindfolded, early-blind and late-blind adults judged whether each elevation corresponded to a side of the table or was impossible. The results suggest similar spatial abilities across groups. The impossible options yielded mid-range accuracy in all groups, with reaction times similar to those for possible options. The sighted and blind participants may consider possible and impossible options sequentially, one item at a time.
{"title":"Piaget's 3-mountains task with impossible options: Sighted, blindfolded, early and late blind participants.","authors":"Hsin-Yi Chao, Marta Wnuczko, John M Kennedy","doi":"10.1177/03010066231170071","DOIUrl":"https://doi.org/10.1177/03010066231170071","url":null,"abstract":"<p><p>In Piaget's 3-mountains task, 3D objects - a cube, cone and sphere - sit on a square tabletop. They are portrayed in 2D pictures as <i>elevations</i> (projections to the sides) such as one with a square on the left, a triangle in the middle and a circle on the right. Three objects offer six elevations, of which four are possible and two impossible. The possibles are elevations from the sides of the table - front, left, right and rear. In the impossibles, an object in the corner of the table is shown in the middle of an elevation. Sighted, sighted-blindfolded, early- and late-blind adults judged the elevations as to side of the table or impossible. The results suggest similar spatial abilities across groups. The impossible options had mid-range accuracy for all groups, with reaction times like possible options. The sighted and blind participants may consider possible and impossible options sequentially, one item at a time.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":"52 6","pages":"385-399"},"PeriodicalIF":1.7,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9856793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rewards weaken cross-modal inhibition of return with visual targets
Pub Date: 2023-06-01 | DOI: 10.1177/03010066231175016
Qinyue Qian, Meihua Lu, Delin Sun, Aijun Wang, Ming Zhang
Previous studies have shown that rewards weaken visual inhibition of return (IOR). However, the specific mechanisms underlying the influence of rewards on cross-modal IOR remain unclear. Based on the Posner exogenous cue-target paradigm, the present study was conducted to investigate the effect of rewards on exogenous spatial cross-modal IOR in both visual cue with auditory target (VA) and auditory cue with visual target (AV) conditions. The results showed the following: in the AV condition, the IOR effect size in the high-reward condition was significantly lower than that in the low-reward condition. However, in the VA condition, there was no significant IOR in either the high- or low-reward condition and there was no significant difference between the two conditions. In other words, the use of rewards modulated exogenous spatial cross-modal IOR with visual targets; specifically, high rewards may have weakened IOR in the AV condition. Taken together, our study extended the effect of rewards on IOR to cross-modal attention conditions and demonstrated for the first time that higher motivation among individuals under high-reward conditions weakened the cross-modal IOR with visual targets. Moreover, the present study provided evidence for future research on the relationship between reward and attention.
{"title":"Rewards weaken cross-modal inhibition of return with visual targets.","authors":"Qinyue Qian, Meihua Lu, Delin Sun, Aijun Wang, Ming Zhang","doi":"10.1177/03010066231175016","DOIUrl":"https://doi.org/10.1177/03010066231175016","url":null,"abstract":"<p><p>Previous studies have shown that rewards weaken visual inhibition of return (IOR). However, the specific mechanisms underlying the influence of rewards on cross-modal IOR remain unclear. Based on the Posner exogenous cue-target paradigm, the present study was conducted to investigate the effect of rewards on exogenous spatial cross-modal IOR in both visual cue with auditory target (VA) and auditory cue with visual target (AV) conditions. The results showed the following: in the AV condition, the IOR effect size in the high-reward condition was significantly lower than that in the low-reward condition. However, in the VA condition, there was no significant IOR in either the high- or low-reward condition and there was no significant difference between the two conditions. In other words, the use of rewards modulated exogenous spatial cross-modal IOR with visual targets; specifically, high rewards may have weakened IOR in the AV condition. Taken together, our study extended the effect of rewards on IOR to cross-modal attention conditions and demonstrated for the first time that higher motivation among individuals under high-reward conditions weakened the cross-modal IOR with visual targets. Moreover, the present study provided evidence for future research on the relationship between reward and attention.</p>","PeriodicalId":49708,"journal":{"name":"Perception","volume":"52 6","pages":"400-411"},"PeriodicalIF":1.7,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9856827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}