Recent studies have suggested that autistic perception can be attributed to atypical Bayesian inference; however, it remains unclear whether this atypical inference originates in the perceptual stage, the post-perceptual stage, or both. This study examined serial dependence in orientation at the perceptual and response stages in autistic and neurotypical adult groups. Participants comprised 17 autistic and 23 neurotypical adults. They reproduced the orientation of a Gabor stimulus on every odd trial or its mirror image on every even trial. In the similar-stimulus session, a right-tilted Gabor stimulus was always presented; hence, serial dependence at the perceptual stage was presumed to occur because the perceived orientation was similar throughout the session. In the similar-response session, right- and left-tilted Gabor patches were presented alternately; thus, serial dependence was presumed to occur because the response orientations were similar. Significant serial dependence was observed only in neurotypical adults in the similar-stimulus session, whereas it was observed in both groups in the similar-response session. Moreover, no significant correlation was observed between serial dependence and sensory profile. These findings suggest that autistic individuals show atypical Bayesian inference at the perceptual stage and that their everyday sensory experiences are not attributable solely to atypical Bayesian inference.
{"title":"Serial dependence in orientation is weak at the perceptual stage but intact at the response stage in autistic adults.","authors":"Masaki Tsujita, Naoko Inada, Ayako H Saneyoshi, Tomoe Hayakawa, Shin-Ichiro Kumagaya","doi":"10.1167/jov.25.1.13","DOIUrl":"10.1167/jov.25.1.13","url":null,"abstract":"<p><p>Recent studies have suggested that autistic perception can be attributed to atypical Bayesian inference; however, it remains unclear whether the atypical Bayesian inference originates in the perceptual or post-perceptual stage or both. This study examined serial dependence in orientation at the perceptual and response stages in autistic and neurotypical adult groups. Participants comprised 17 autistic and 23 neurotypical adults. They reproduced the orientation of a Gabor stimulus in every odd trial or its mirror in every even trial. In the similar-stimulus session, a right-tilted Gabor stimulus was always presented; hence, serial dependence at the perceptual stage was presumed to occur because the perceived orientation was similar throughout the session. In the similar-response session, right- and left-tilted Gabor patches were alternately presented; thus serial dependence was presumed to occur because the response orientations were similar. Significant serial dependence was observed only in neurotypical adults for the similar-stimulus session, whereas it was observed in both groups for the similar-response session. Moreover, no significant correlation was observed between serial dependence and sensory profile. These findings suggest that autistic individuals possess atypical Bayesian inference at the perceptual stage and that sensory experiences in their daily lives are not attributable only to atypical Bayesian inference.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"13"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11745202/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual perception has been described as a dynamic process in which incoming visual information is combined with what has been seen before to form the current percept. Such a process can result in multiple visual aftereffects that can be attractive toward or repulsive away from past visual stimulation. Much research has addressed the functional role that the mechanisms producing these aftereffects may play, but the role of stimulus uncertainty in these aftereffects remains poorly understood. In this study, we investigate how the contrast of a stimulus affects the serial aftereffects it induces and how the stimulus itself is affected by these effects depending on its contrast. We presented human observers with a series of Gabor patches and monitored how the perceived orientation of stimuli changed over time under systematic manipulation of the orientation and contrast of the presented stimuli. We hypothesized that repulsive serial effects would be stronger for judgments of high-contrast than low-contrast stimuli, but the other way around for attractive serial effects. Our experimental findings confirm such a strong interaction between contrast and the sign of the aftereffects. We present a Bayesian model observer that can explain this interaction based on two principles: dynamic changes of orientation-tuned channels over short timescales and slow integration of prior information over long timescales. Our findings have strong implications for our understanding of orientation perception and can inspire further work on identifying its neural mechanisms.
{"title":"Attractive and repulsive visual aftereffects depend on stimulus contrast.","authors":"Nikos Gekas, Pascal Mamassian","doi":"10.1167/jov.25.1.10","DOIUrl":"10.1167/jov.25.1.10","url":null,"abstract":"<p><p>Visual perception has been described as a dynamic process where incoming visual information is combined with what has been seen before to form the current percept. Such a process can result in multiple visual aftereffects that can be attractive toward or repulsive away from past visual stimulation. A lot of research has been conducted on what functional role the mechanisms that produce these aftereffects may play. However, there is a lack of understanding of the role of stimulus uncertainty on these aftereffects. In this study, we investigate how the contrast of a stimulus affects the serial aftereffects it induces and how the stimulus itself is affected by these effects depending on its contrast. We presented human observers with a series of Gabor patches and monitored how the perceived orientation of stimuli changed over time with the systematic manipulation of orientation and contrast of presented stimuli. We hypothesized that repulsive serial effects would be stronger for the judgment of high-contrast than low-contrast stimuli, but the other way around for attractive serial effects. Our experimental findings confirm such a strong interaction between contrast and sign of aftereffects. We present a Bayesian model observer that can explain this interaction based on two principles, the dynamic changes of orientation-tuned channels in short timescales and the slow integration of prior information over long timescales. Our findings have strong implications for our understanding of orientation perception and can inspire further work on the identification of its neural mechanisms.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"10"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11725992/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Active object recognition, fundamental to tasks like reading and driving, relies on the ability to make time-sensitive decisions. People exhibit a flexible tradeoff between speed and accuracy, a crucial human skill. However, current computational models struggle to incorporate time. To address this gap, we present the first dataset (with 148 observers) exploring the speed-accuracy tradeoff (SAT) in ImageNet object recognition. Participants performed a 16-way ImageNet categorization task where their responses counted only if they occurred near the time of a fixed-delay beep. Each block of trials allowed one reaction time. As expected, human accuracy increases with reaction time. We compare human performance with that of dynamic neural networks that adapt their computation to the available inference time. Time is a scarce resource for human object recognition, and finding an appropriate analog in neural networks is challenging. Networks can repeat operations by using layers, recurrent cycles, or early exits. We use the repetition count as a network's analog for time. In our analysis, the number of layers, recurrent cycles, and early exits correlates strongly with floating-point operations, making them suitable time analogs. Comparing networks and humans on SAT-fit error, category-wise correlation, and SAT-curve steepness, we find cascaded dynamic neural networks most promising in modeling human speed and accuracy. Surprisingly, convolutional recurrent networks, typically favored in human object recognition modeling, perform the worst on our benchmark.
{"title":"Benchmarking the speed-accuracy tradeoff in object recognition by humans and neural networks.","authors":"Ajay Subramanian, Sara Price, Omkar Kumbhar, Elena Sizikova, Najib J Majaj, Denis G Pelli","doi":"10.1167/jov.25.1.4","DOIUrl":"10.1167/jov.25.1.4","url":null,"abstract":"<p><p>Active object recognition, fundamental to tasks like reading and driving, relies on the ability to make time-sensitive decisions. People exhibit a flexible tradeoff between speed and accuracy, a crucial human skill. However, current computational models struggle to incorporate time. To address this gap, we present the first dataset (with 148 observers) exploring the speed-accuracy tradeoff (SAT) in ImageNet object recognition. Participants performed a 16-way ImageNet categorization task where their responses counted only if they occurred near the time of a fixed-delay beep. Each block of trials allowed one reaction time. As expected, human accuracy increases with reaction time. We compare human performance with that of dynamic neural networks that adapt their computation to the available inference time. Time is a scarce resource for human object recognition, and finding an appropriate analog in neural networks is challenging. Networks can repeat operations by using layers, recurrent cycles, or early exits. We use the repetition count as a network's analog for time. In our analysis, the number of layers, recurrent cycles, and early exits correlates strongly with floating-point operations, making them suitable time analogs. Comparing networks and humans on SAT-fit error, category-wise correlation, and SAT-curve steepness, we find cascaded dynamic neural networks most promising in modeling human speed and accuracy. Surprisingly, convolutional recurrent networks, typically favored in human object recognition modeling, perform the worst on our benchmark.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"4"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11706240/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When rendering the visual scene for near-eye head-mounted displays, accurate knowledge of the geometry of the displays, scene objects, and eyes is required to generate the binocular images correctly. Despite design and calibration efforts, these quantities are subject to positional and measurement errors, resulting in some misalignment of the images projected to each eye. Previous research investigated the effects of such misalignment in virtual reality (VR) setups, where it triggered symptoms such as eye strain and nausea. This work investigated the effects of binocular vertical misalignment (BVM) in see-through augmented reality (AR). In such devices, two conflicting environments coexist. One is the real world, which lies in the background and forms geometrically aligned images on the retinas. The other is the augmented content, which stands out as foreground and might be subject to misalignment. We simulated a see-through AR environment using a standard three-dimensional (3D) stereoscopic display to have full control and high accuracy over the real and augmented contents. Participants performed a visual search task that forced them to alternately interact with the real and the augmented contents while being exposed to different amounts of BVM. The measured eye posture indicated that compensation for vertical misalignment is shared equally by the sensory (binocular fusion) and motor (vertical vergence) components of binocular vision. Sensitivity varied across participants, both in perceived discomfort and in misalignment tolerance, suggesting that per-user calibration might be useful for a comfortable visual experience.
{"title":"Eye posture and screen alignment with simulated see-through head-mounted displays.","authors":"Agostino Gibaldi, Yinghua Liu, Christos Kaspiris-Rousellis, Madhumitha S Mahadevan, Jenny C A Read, Björn N S Vlaskamp, Gerrit W Maus","doi":"10.1167/jov.25.1.9","DOIUrl":"10.1167/jov.25.1.9","url":null,"abstract":"<p><p>When rendering the visual scene for near-eye head-mounted displays, accurate knowledge of the geometry of the displays, scene objects, and eyes is required for the correct generation of the binocular images. Despite possible design and calibration efforts, these quantities are subject to positional and measurement errors, resulting in some misalignment of the images projected to each eye. Previous research investigated the effects in virtual reality (VR) setups that triggered such symptoms as eye strain and nausea. This work aimed at investigating the effects of binocular vertical misalignment (BVM) in see-through augmented reality (AR). In such devices, two conflicting environments coexist. One environment corresponds to the real world, which lies in the background and forms geometrically aligned images on the retinas. The other environment corresponds to the augmented content, which stands out as foreground and might be subject to misalignment. We simulated a see-through AR environment using a standard three-dimensional (3D) stereoscopic display to have full control and high accuracy of the real and augmented contents. Participants were involved in a visual search task that forced them to alternatively interact with the real and the augmented contents while being exposed to different amounts of BVM. The measured eye posture indicated that the compensation for vertical misalignment is equally shared by the sensory (binocular fusion) and the motor (vertical vergence) components of binocular vision. The sensitivity of each participant varied, both in terms of perceived discomfort and misalignment tolerance, suggesting that a per-user calibration might be useful for a comfortable visual experience.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"9"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11725991/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Previous research has shown that, when multiple similar items are maintained in working memory, recall precision declines. Less is known about how heterogeneous sets of items across different features within and between modalities impact recall precision. In two experiments, we investigated modality-specific (Experiment 1, n = 79) and feature-specific (Experiment 2, n = 154) load effects on working memory performance. First, we found a cross-modal advantage in continuous recall: Orientations that are memorized together with a pitch are recalled more precisely than orientations that are memorized together with another orientation. The results of our second experiment, however, suggest that this is not a pure effect of sensory modality but rather a feature-dependent effect. We combined orientations, pitches, and colors in pairs. We found that memorizing orientations together with a color benefits orientation recall to a similar extent as the cross-modal benefit. To investigate this absence of interference between orientations and colors held in working memory, we analyzed subjective reports of the strategies used for the different features. We found that, although orientations and pitches rely almost exclusively on sensory strategies, colors are memorized not only visually but also with abstract and verbal strategies. Thus, although color stimuli are also presented visually, they might be represented by independent neural circuits. Our results suggest that working memory storage is organized in a modality-, feature-, and strategy-dependent way.
{"title":"Modality-, feature-, and strategy-dependent organization of low-level working memory.","authors":"Vivien Chopurian, Anni Kienke, Christoph Bledowski, Thomas B Christophel","doi":"10.1167/jov.25.1.16","DOIUrl":"10.1167/jov.25.1.16","url":null,"abstract":"<p><p>Previous research has shown that, when multiple similar items are maintained in working memory, recall precision declines. Less is known about how heterogeneous sets of items across different features within and between modalities impact recall precision. In two experiments, we investigated modality (Experiment 1, n = 79) and feature-specific (Experiment 2, n = 154) load effects on working memory performance. First, we found a cross-modal advantage in continuous recall: Orientations that are memorized together with a pitch are recalled more precisely than orientations that are memorized together with another orientation. The results of our second experiment, however, suggest that this is not a pure effect of sensory modality but rather a feature-dependent effect. We combined orientations, pitches, and colors in pairs. We found that memorizing orientations together with a color benefits orientation recall to a similar extent as the cross-modal benefit. To investigate this absence of interference between orientations and colors held in working memory, we analyzed subjective reports of strategies used for the different features. We found that, although orientations and pitches rely almost exclusively on sensory strategies, colors are memorized not only visually but also with abstract and verbal strategies. Thus, although color stimuli are also visually presented, they might be represented by independent neural circuits. Our results suggest that working memory storage is organized in a modality-, feature-, and strategy-dependent way.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"16"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11781326/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143054133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Humans can estimate the time and position of a moving object's arrival. However, numerous studies have demonstrated superior position-estimation accuracy for descending objects compared with ascending objects. We tested whether the accuracy of position estimation for ascending and descending objects differs between the upper and lower visual fields. Using a head-mounted display, participants observed a target object ascending or descending toward a goal located 8.7° or 17.1° above or below the center of the display, in the upper and lower visual fields, respectively. Participants pressed a key to match the time of the target's arrival at the goal while keeping their gaze at the center. For goals close to the center (8.7°), ascending and descending objects were estimated equally accurately, whereas for goals far from the center (17.1°), position estimation for the ascending target in the upper visual field was inferior to the other conditions. Targets moved away from the center for goals farther from the center and toward the center for goals nearer to the center. Because positional accuracy for ascending and descending objects was not assessed separately for each of the four goals, it remains unclear which factor mattered more for the impaired accuracy: the proximity of the target position or the direction of the upward or downward motion. Taken together with previous studies, however, we suggest that estimating the position of objects moving away from the central fovea in the upper visual field may have contributed to the asymmetry in position estimation for ascending and descending objects.
{"title":"Impaired visual perceptual accuracy in the upper visual field induces asymmetric performance in position estimation for falling and rising objects.","authors":"Takashi Hirata, Nobuyuki Kawai","doi":"10.1167/jov.25.1.1","DOIUrl":"https://doi.org/10.1167/jov.25.1.1","url":null,"abstract":"<p><p>Humans can estimate the time and position of a moving object's arrival. However, numerous studies have demonstrated superior position estimation accuracy for descending objects compared with ascending objects. We tested whether the accuracy of position estimation for ascending and descending objects differs between the upper and lower visual fields. Using a head-mounted display, participants observed a target object ascending or descending toward a goal located at 8.7° or 17.1° above or below from the center of the monitor in the upper and lower visual fields, respectively. Participants pressed a key to match the time of the target's arrival at the goal, with the gaze kept centered. For goals (8.7°) close to the center, ascending and descending objects were equally accurate, whereas for goals (17.1°) far from the center, the ascending target's position estimation in the upper visual field was inferior to the others. Targets moved away from the center for goals further from the center and closer to the center for goals nearer to the center. As the positional accuracy of ascending and descending objects was not assessed for each of the four goals, it remains unclear which was more important for impaired accuracy: the proximity of the target position or direction of the upward or downward motion. However, taken together with previous studies, we suggest that estimating the position of objects moving further away from the central fovea of the upper visual field may have contributed to the asymmetry in position estimation for ascending and descending objects.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"1"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142916008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A salience map is a topographic map that receives inputs at each x,y location from many different feature maps and summarizes the combined salience of all those inputs as a single real number, salience, which is represented in the map. Of the more than 1 million Google references to salience maps, nearly all use the map for computing the relative priority of visual image components for subsequent processing. We observe that salience processing is an instance of substance-invariant processing, analogous to household measuring cups, weight scales, and measuring tapes, all of which make single-number, substance-invariant measurements. Like these devices, the brain also collects material for substance-invariant measurements, but by a different mechanism: salience maps that collect visual substances for subsequent measurement. Each salience map can be used by many different measurements. The instruction to attend is implemented by increasing the salience of the to-be-attended items so they can be collected in a salience map and then further processed. Here we show that, beyond processing priority, the following measurement tasks are substance invariant and therefore use salience maps: computing distance in the frontal plane, computing centroids (the center of a cluster of items), computing the numerosity of a collection of items, and identifying alphabetic letters. We painstakingly demonstrate that defining items exclusively by color or texture is sufficient for these tasks and that light-dark luminance information significantly improves performance only for letter recognition. Obviously, visual features are represented in the brain, but their salience alone is sufficient for these four judgments.
{"title":"Salience maps for judgments of frontal plane distance, centroids, numerosity, and letter identity inferred from substance-invariant processing.","authors":"Lingyu Gan, George Sperling","doi":"10.1167/jov.25.1.8","DOIUrl":"10.1167/jov.25.1.8","url":null,"abstract":"<p><p>A salience map is a topographic map that has inputs at each x,y location from many different feature maps and summarizes the combined salience of all those inputs as a real number, salience, which is represented in the map. Of the more than 1 million Google references to salience maps, nearly all use the map for computing the relative priority of visual image components for subsequent processing. We observe that salience processing is an instance of substance-invariant processing, analogous to household measuring cups, weight scales, and measuring tapes, all of which make single-number substance-invariant measurements. Like these devices, the brain also collects material for substance-invariant measurements but by a different mechanism: salience maps that collect visual substances for subsequent measurement. Each salience map can be used by many different measurements. The instruction to attend is implemented by increasing the salience of the to-be-attended items so they can be collected in a salience map and then further processed. Here we show that, beyond processing priority, the following measurement tasks are substance invariant and therefore use salience maps: computing distance in the frontal plane, computing centroids (center of a cluster of items), computing the numerosity of a collection of items, and identifying alphabetic letters. We painstakingly demonstrate that defining items exclusively by color or texture not only is sufficient for these tasks, but that light-dark luminance information significantly improves performance only for letter recognition. Obviously, visual features are represented in the brain but their salience alone is sufficient for these four judgments.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"8"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11724370/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Here, we investigate the shift in eye balance in response to monocular cueing in adults with amblyopia. In normally sighted adults, biasing attention toward one eye by presenting a monocular visual stimulus to it can shift eye balance toward the stimulated eye, as measured by binocular rivalry. We investigated whether eye balance can be modulated by directing monocular stimulation/attention in adults with the clinical binocular deficits and larger eye imbalances associated with amblyopia. In a dual-task paradigm, eight participants continuously reported ongoing rivalry percepts while simultaneously performing a task related to the cueing stimulus. Time series of eye balance dynamics, aligned to cue onset, were averaged across trials and participants. In separate time series, we tested the effect of monocular cueing of the amblyopic and fellow eyes (compared with a binocular control condition) and the effect of an active versus passive task. Overall, we found a significant shift in eye balance toward the monocularly cued eye, whether the fellow eye or the amblyopic eye was cued, F(2, 14) = 27.649, p < 0.01, ω2 = 0.590. This was independent of whether, during binocular rivalry, the cue stimulus was presented to the perceiving eye or the non-perceiving eye. Performing an active task tended to produce a larger eye-balance change, but this effect did not reach significance. Our results suggest that the eye imbalance in adults with binocular deficits such as amblyopia can be transiently reduced by monocularly directed stimulation, at least through activation of bottom-up attentional processes.
{"title":"Monocular eye-cueing shifts eye balance in amblyopia.","authors":"Sandy P Wong, Robert F Hess, Kathy T Mullen","doi":"10.1167/jov.25.1.6","DOIUrl":"10.1167/jov.25.1.6","url":null,"abstract":"<p><p>Here, we investigate the shift in eye balance in response to monocular cueing in adults with amblyopia. In normally sighted adults, biasing attention toward one eye, by presenting a monocular visual stimulus to it, can shift eye balance toward the stimulated eye, as measured by binocular rivalry. We investigated whether we can modulate eye balance by directing monocular stimulation/attention in adults with clinical binocular deficits associated with amblyopia and larger eye imbalances. In a dual-task paradigm, eight participants continuously reported ongoing rivalry percepts and simultaneously performed a task related to the cueing stimulus. Time series of eye balance dynamics, aligned to cue onset, are averaged across trials and participants. In different time series, we tested the effect of monocular cueing on the amblyopic and fellow eyes (compared to a binocular control condition) and the effect of an active versus passive task. Overall, we found a significant shift in eye balance toward the monocularly cued eye, when both the fellow eye or the amblyopic eye were cued, F(2, 14) = 27.649, p < 0.01, ω2 = 0.590. This was independent of whether, during the binocular rivalry, the cue stimulus was presented to the perceiving eye or the non-perceiving eye. Performing an active task tended to produce a larger eye balance change, but this effect did not reach significance. Our results suggest that the eye imbalance in adults with binocular deficits, such as amblyopia, can be transiently reduced by monocularly directed stimulation, at least through activation of bottom-up attentional processes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"6"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11724371/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intentional binding (IB) refers to the compression of subjective timing between a voluntary action and its outcome. In this study, we investigate the IB of a multimodal (audiovisual) outcome. We used a modified Libet clock while depicting a dynamic physical event (collision). Experiment 1 examined whether IB for the unimodal (auditory) event could be generalized to the multimodal (audiovisual) event, compared their magnitudes, and assessed whether the level of integration between modalities could affect IB. Planned contrasts (n = 42) showed significant IB effects for all types of events; the magnitude of IB was significantly weaker in both audiovisual integrated and audiovisual irrelevant conditions compared with auditory, with no difference between the integrated and irrelevant conditions. Experiment 2 separated the components of the audiovisual event to test the appropriate model describing the magnitude of IB in multimodal contexts. Planned contrasts (n = 42) showed the magnitude of IB was significantly weaker in both the audiovisual and visual conditions compared with the auditory condition, with no difference between the audiovisual and visual conditions. Additional Bayesian analysis provided moderate evidence supporting the equivalence between the two conditions. In conclusion, this study demonstrated that the IB phenomenon can be generalized to multimodal (audiovisual) sensory outcomes, and visual information shows dominance in determining the magnitude of IB for audiovisual events.
{"title":"Visual information shows dominance in determining the magnitude of intentional binding for audiovisual outcomes.","authors":"De-Wei Dai, Po-Jang Brown Hsieh","doi":"10.1167/jov.25.1.7","DOIUrl":"10.1167/jov.25.1.7","url":null,"abstract":"<p><p>Intentional binding (IB) refers to the compression of subjective timing between a voluntary action and its outcome. In this study, we investigate the IB of a multimodal (audiovisual) outcome. We used a modified Libet clock while depicting a dynamic physical event (collision). Experiment 1 examined whether IB for the unimodal (auditory) event could be generalized to the multimodal (audiovisual) event, compared their magnitudes, and assessed whether the level of integration between modalities could affect IB. Planned contrasts (n = 42) showed significant IB effects for all types of events; the magnitude of IB was significantly weaker in both audiovisual integrated and audiovisual irrelevant conditions compared with auditory, with no difference between the integrated and irrelevant conditions. Experiment 2 separated the components of the audiovisual event to test the appropriate model describing the magnitude of IB in multimodal contexts. Planned contrasts (n = 42) showed the magnitude of IB was significantly weaker in both the audiovisual and visual conditions compared with the auditory condition, with no difference between the audiovisual and visual conditions. Additional Bayesian analysis provided moderate evidence supporting the equivalence between the two conditions. In conclusion, this study demonstrated that the IB phenomenon can be generalized to multimodal (audiovisual) sensory outcomes, and visual information shows dominance in determining the magnitude of IB for audiovisual events.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"7"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11721482/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The population receptive field (pRF) method, which measures the region in visual space that elicits a blood-oxygen-level-dependent (BOLD) signal in a voxel in retinotopic cortex, is a powerful tool for investigating the functional organization of human visual cortex with fMRI (Dumoulin & Wandell, 2008). However, recent work has shown that pRF estimates for early retinotopic visual areas can be biased and unreliable, especially for voxels representing the fovea. Here, we show that a log-bar stimulus that is logarithmically warped along the eccentricity dimension produces more reliable estimates of pRF size and location than the traditional moving-bar stimulus. The log-bar stimulus was better able to identify pRFs near the foveal representation, and the estimated pRFs were smaller, consistent with simulation-based estimates of receptive field sizes in the fovea.
{"title":"Improving the reliability and accuracy of population receptive field measures using a logarithmically warped stimulus.","authors":"Kelly Chang, Ione Fine, Geoffrey M Boynton","doi":"10.1167/jov.25.1.5","DOIUrl":"10.1167/jov.25.1.5","url":null,"abstract":"<p><p>The population receptive field (pRF) method, which measures the region in visual space that elicits a blood-oxygen-level-dependent (BOLD) signal in a voxel in retinotopic cortex, is a powerful tool for investigating the functional organization of human visual cortex with fMRI (Dumoulin & Wandell, 2008). However, recent work has shown that pRF estimates for early retinotopic visual areas can be biased and unreliable, especially for voxels representing the fovea. Here, we show that a log-bar stimulus that is logarithmically warped along the eccentricity dimension produces more reliable estimates of pRF size and location than the traditional moving bar stimulus. The log-bar stimulus was better able to identify pRFs near the foveal representation, and pRFs were smaller in size, consistent with simulation estimates of receptive field sizes in the fovea.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"5"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}