Anticipatory smooth pursuit eye movements scale with the probability of visual motion: The role of target speed and acceleration.
Vanessa Carneiro Morita, David Souto, Guillaume S Masson, Anna Montagnini
Sensory-motor systems can extract statistical regularities in dynamic, uncertain environments, enabling quicker responses and anticipatory behavior for expected events. Anticipatory smooth pursuit eye movements (aSP) have been observed in primates when the temporal and kinematic properties of a forthcoming visual moving target are fully or partially predictable. To investigate the nature of the internal model of target kinematics underlying aSP, we tested the effect of varying the target kinematics and its predictability. Participants tracked a small visual target that moved in a constant direction with either constant, accelerating, or decelerating speed. Across experimental blocks, we manipulated the probability of each kinematic condition, varying either speed or acceleration across trials: blocks presented either a single kinematic condition (providing certainty) or a mixture of conditions with fixed probabilities within a block. We show that aSP is robustly modulated by target kinematics. With constant-velocity targets, aSP velocity scales linearly with target velocity in blocked sessions and matches the probability-weighted average in mixture sessions. Predictable target acceleration also influences aSP, suggesting that the internal model of motion driving anticipation carries some information about the changing target kinematics beyond the initial target speed. However, participants vary widely in the precision and consistency with which this information is used to control anticipatory behavior.
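As an illustrative sketch (not the authors' analysis code), the probability-weighted prediction described above can be written as follows; the speeds, probabilities, and gain parameter are hypothetical values, not fitted ones:

```python
import numpy as np

# Hypothetical mixture block: possible target speeds (deg/s) and their
# fixed within-block probabilities.
speeds = np.array([5.0, 20.0])
probs = np.array([0.7, 0.3])

# Probability-weighted average of target speed, the quantity that
# anticipatory pursuit velocity was found to track in mixture sessions.
expected_speed = probs @ speeds

# Anticipatory eye velocity is typically a scaled-down version of target
# velocity; 'gain' is an illustrative free parameter, not a reported value.
gain = 0.2
predicted_asp = gain * expected_speed
print(f"expected target speed: {expected_speed:.1f} deg/s, "
      f"predicted aSP velocity: {predicted_asp:.1f} deg/s")
```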
{"title":"Anticipatory smooth pursuit eye movements scale with the probability of visual motion: The role of target speed and acceleration.","authors":"Vanessa Carneiro Morita, David Souto, Guillaume S Masson, Anna Montagnini","doi":"10.1167/jov.25.1.2","DOIUrl":"10.1167/jov.25.1.2","url":null,"abstract":"<p><p>Sensory-motor systems can extract statistical regularities in dynamic uncertain environments, enabling quicker responses and anticipatory behavior for expected events. Anticipatory smooth pursuit eye movements (aSP) have been observed in primates when the temporal and kinematic properties of a forthcoming visual moving target are fully or partially predictable. To investigate the nature of the internal model of target kinematics underlying aSP, we tested the effect of varying the target kinematics and its predictability. Participants tracked a small visual target in a constant direction with either constant, accelerating, or decelerating speed. Across experimental blocks, we manipulated the probability of each kinematic condition varying either speed or acceleration across trials; with either one kinematic condition (providing certainty) or with a mixture of conditions with a fixed probability within a block. We show that aSP is robustly modulated by target kinematics. With constant-velocity targets, aSP velocity scales linearly with target velocity in blocked sessions, and matches the probability-weighted average in the mixture sessions. Predictable target acceleration does also have an influence on aSP, suggesting that the internal model of motion that drives anticipation contains some information about the changing target kinematics, beyond the initial target speed. However, there is a large variability across participants in the precision and consistency with which this information is taken into account to control anticipatory behavior.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"2"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pupil responds spontaneously to visuospatial regularity.
Zhiming Kong, Chen Chen, Jianrong Jia
Beyond the light reflex, the pupil responds to various high-level cognitive processes. Multiple statistical regularities of stimuli have been found to modulate the pupillary response. However, most studies have used auditory or visual temporal sequences as stimuli, and it is unknown whether pupil size is modulated by statistical regularity in the spatial arrangement of stimuli. In three experiments, we created stimuli that were perceived as regular or irregular while matched in physical regularity to investigate the effect of spatial regularity on pupillary responses during passive viewing. Experiments using orientation (Experiments 1 and 2) and size (Experiment 3) stimuli consistently showed that perceived irregular stimuli elicited more pupil constriction than regular stimuli. Furthermore, this effect was independent of stimulus luminance. In conclusion, our study reveals that the pupil responds spontaneously to perceived visuospatial regularity, extending the range of stimulus regularities known to influence the pupillary response into the visuospatial domain.
{"title":"Pupil responds spontaneously to visuospatial regularity.","authors":"Zhiming Kong, Chen Chen, Jianrong Jia","doi":"10.1167/jov.25.1.14","DOIUrl":"10.1167/jov.25.1.14","url":null,"abstract":"<p><p>Beyond the light reflex, the pupil responds to various high-level cognitive processes. Multiple statistical regularities of stimuli have been found to modulate the pupillary response. However, most studies have used auditory or visual temporal sequences as stimuli, and it is unknown whether the pupil size is modulated by statistical regularity in the spatial arrangement of stimuli. In three experiments, we created perceived regular and irregular stimuli, matching physical regularity, to investigate the effect of spatial regularity on pupillary responses during passive viewing. Experiments using orientation (Experiments 1 and 2) and size (Experiment 3) as stimuli consistently showed that perceived irregular stimuli elicited more pupil constriction than regular stimuli. Furthermore, this effect was independent of the luminance of the stimuli. In conclusion, our study revealed that the pupil responds spontaneously to perceived visuospatial regularity, extending the stimulus regularity that influences the pupillary response into the visuospatial domain.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"14"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11756609/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of sign language learning on temporal resolution of visual attention.
Serpil Karabüklü, Sandra Wood, Chuck Bradley, Ronnie B Wilbur, Evie A Malaia
The visual environment of sign language users is markedly distinct in its spatiotemporal parameters compared to that of non-signers. Although the importance of temporal and spectral resolution in the auditory modality for language development is well established, the spectrotemporal parameters of visual attention necessary for sign language comprehension remain less understood. This study investigates visual temporal resolution in learners of American Sign Language (ASL) at various stages of acquisition to determine how experience with sign language affects perceptual sampling. Using a flicker paradigm, we assessed the accuracy of identifying out-of-phase visual flicker objects at frequencies up to 60 Hz. Our findings reveal that third-semester ASL learners show increased accuracy in detecting high-frequency flicker, indicating enhanced temporal resolution. Interestingly, as learners achieve higher proficiency in ASL, their perceptual sampling reverts to typical levels, likely because of a shift toward predictive processing mechanisms in sign language comprehension. These results suggest that the temporal resolution of visual attention is malleable and can be influenced by the process of learning a visual language.
{"title":"Effect of sign language learning on temporal resolution of visual attention.","authors":"Serpil Karabüklü, Sandra Wood, Chuck Bradley, Ronnie B Wilbur, Evie A Malaia","doi":"10.1167/jov.25.1.3","DOIUrl":"10.1167/jov.25.1.3","url":null,"abstract":"<p><p>The visual environment of sign language users is markedly distinct in its spatiotemporal parameters compared to that of non-signers. Although the importance of temporal and spectral resolution in the auditory modality for language development is well established, the spectrotemporal parameters of visual attention necessary for sign language comprehension remain less understood. This study investigates visual temporal resolution in learners of American Sign Language (ASL) at various stages of acquisition to determine how experience with sign language affects perceptual sampling. Using a flicker paradigm, we assessed the accuracy of identifying out-of-phase visual flicker objects at frequencies up to 60 Hz. Our findings reveal that third-semester ASL learners show increased accuracy in detecting high-frequency flicker, indicating enhanced temporal resolution. Interestingly, as learners achieve higher proficiency in ASL, their perceptual sampling reverts to typical levels, likely because of a shift toward predictive processing mechanisms in sign language comprehension. These results suggest that the temporal resolution of visual attention is malleable and can be influenced by the process of learning a visual language.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"3"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11706239/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Target interception in virtual reality is better for natural versus unnatural trajectory shapes and orientations.
Sofia Varon, Karsten Babin, Miriam Spering, Jody C Culham
Human performance in perceptual and visuomotor tasks is enhanced when stimulus motion follows the laws of gravitational physics, including acceleration consistent with Earth's gravity, g. Here we used a manual interception task in virtual reality to investigate the effects of trajectory shape and orientation on interception timing and accuracy. Participants punched to intercept a ball moving along one of four trajectories that varied in shape (parabola or tent) and orientation (upright or inverted). We also varied the location of visual fixation such that trajectories fell entirely within the lower or upper visual field. Reaction times were faster for more natural shapes and orientations, regardless of visual field. Overall accuracy was poorer and movement time was longer for the inverted tent condition than the other three conditions, perhaps because it was imperfectly reminiscent of a bouncing ball. A detailed analysis of spatial errors revealed that interception endpoints were more likely to fall along the path of the final trajectory in upright vs. inverted conditions, suggesting stronger expectations regarding the final trajectory direction for these conditions. Taken together, these results suggest that the naturalness of the shape and orientation of a trajectory contributes to performance in a virtual interception task.
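A minimal sketch of the two trajectory shapes contrasted above (assuming illustrative launch parameters; this is not the authors' stimulus code): a parabolic arc under Earth's gravity versus a "tent" path matched in start, apex, and end points, with inverted conditions obtained by flipping vertically.

```python
import numpy as np

G = 9.81                                  # m/s^2, Earth's gravity
v0 = 5.0                                  # hypothetical launch speed, m/s
theta = np.deg2rad(60)                    # hypothetical launch angle

t_flight = 2 * v0 * np.sin(theta) / G     # projectile time of flight
t = np.linspace(0.0, t_flight, 100)

x = v0 * np.cos(theta) * t                # horizontal position
y_parabola = v0 * np.sin(theta) * t - 0.5 * G * t**2

# Tent: piecewise-linear path through the same apex.
apex_x, apex_y = x[-1] / 2.0, y_parabola.max()
y_tent = np.interp(x, [0.0, apex_x, x[-1]], [0.0, apex_y, 0.0])

# Inverted conditions simply flip the paths vertically.
y_parabola_inv, y_tent_inv = -y_parabola, -y_tent
```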
{"title":"Target interception in virtual reality is better for natural versus unnatural trajectory shapes and orientations.","authors":"Sofia Varon, Karsten Babin, Miriam Spering, Jody C Culham","doi":"10.1167/jov.25.1.11","DOIUrl":"10.1167/jov.25.1.11","url":null,"abstract":"<p><p>Human performance in perceptual and visuomotor tasks is enhanced when stimulus motion follows the laws of gravitational physics, including acceleration consistent with Earth's gravity, g. Here we used a manual interception task in virtual reality to investigate the effects of trajectory shape and orientation on interception timing and accuracy. Participants punched to intercept a ball moving along one of four trajectories that varied in shape (parabola or tent) and orientation (upright or inverted). We also varied the location of visual fixation such that trajectories fell entirely within the lower or upper visual field. Reaction times were faster for more natural shapes and orientations, regardless of visual field. Overall accuracy was poorer and movement time was longer for the inverted tent condition than the other three conditions, perhaps because it was imperfectly reminiscent of a bouncing ball. A detailed analysis of spatial errors revealed that interception endpoints were more likely to fall along the path of the final trajectory in upright vs. inverted conditions, suggesting stronger expectations regarding the final trajectory direction for these conditions. Taken together, these results suggest that the naturalness of the shape and orientation of a trajectory contributes to performance in a virtual interception task.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"11"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11725989/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Serial dependence in orientation is weak at the perceptual stage but intact at the response stage in autistic adults.
Masaki Tsujita, Naoko Inada, Ayako H Saneyoshi, Tomoe Hayakawa, Shin-Ichiro Kumagaya
Recent studies have suggested that autistic perception can be attributed to atypical Bayesian inference; however, it remains unclear whether the atypical Bayesian inference originates at the perceptual stage, the post-perceptual stage, or both. This study examined serial dependence in orientation at the perceptual and response stages in autistic and neurotypical adults. Participants comprised 17 autistic and 23 neurotypical adults. They reproduced the orientation of a Gabor stimulus on every odd trial or its mirror image on every even trial. In the similar-stimulus session, a right-tilted Gabor stimulus was always presented, so serial dependence at the perceptual stage was presumed to occur because the perceived orientation was similar throughout the session. In the similar-response session, right- and left-tilted Gabor patches were presented alternately, so serial dependence was presumed to occur because the response orientations were similar. Significant serial dependence was observed only in neurotypical adults in the similar-stimulus session, whereas it was observed in both groups in the similar-response session. Moreover, no significant correlation was observed between serial dependence and sensory profiles. These findings suggest that autistic individuals show atypical Bayesian inference at the perceptual stage and that their daily sensory experiences are not attributable solely to atypical Bayesian inference.
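The abstract does not state the authors' exact model, but serial dependence in orientation is commonly quantified by fitting a derivative-of-Gaussian (DoG) curve to response errors as a function of the previous-minus-current orientation difference; a hedged sketch with synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def dog(delta, amplitude, width):
    """First-derivative-of-Gaussian curve; scaled so the peak equals 'amplitude'."""
    c = np.sqrt(2) / np.exp(-0.5)
    return amplitude * c * width * delta * np.exp(-(width * delta) ** 2)

rng = np.random.default_rng(0)
delta = rng.uniform(-60, 60, 500)        # simulated orientation differences (deg)
errors = dog(delta, 2.0, 0.03) + rng.normal(0, 3, delta.size)  # synthetic errors

# Fitted amplitude > 0 indicates attraction toward the previous orientation.
(amp, wid), _ = curve_fit(dog, delta, errors, p0=[1.0, 0.05])
print(f"fitted serial-dependence amplitude: {amp:.2f} deg")
```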
{"title":"Serial dependence in orientation is weak at the perceptual stage but intact at the response stage in autistic adults.","authors":"Masaki Tsujita, Naoko Inada, Ayako H Saneyoshi, Tomoe Hayakawa, Shin-Ichiro Kumagaya","doi":"10.1167/jov.25.1.13","DOIUrl":"10.1167/jov.25.1.13","url":null,"abstract":"<p><p>Recent studies have suggested that autistic perception can be attributed to atypical Bayesian inference; however, it remains unclear whether the atypical Bayesian inference originates in the perceptual or post-perceptual stage or both. This study examined serial dependence in orientation at the perceptual and response stages in autistic and neurotypical adult groups. Participants comprised 17 autistic and 23 neurotypical adults. They reproduced the orientation of a Gabor stimulus in every odd trial or its mirror in every even trial. In the similar-stimulus session, a right-tilted Gabor stimulus was always presented; hence, serial dependence at the perceptual stage was presumed to occur because the perceived orientation was similar throughout the session. In the similar-response session, right- and left-tilted Gabor patches were alternately presented; thus serial dependence was presumed to occur because the response orientations were similar. Significant serial dependence was observed only in neurotypical adults for the similar-stimulus session, whereas it was observed in both groups for the similar-response session. Moreover, no significant correlation was observed between serial dependence and sensory profile. These findings suggest that autistic individuals possess atypical Bayesian inference at the perceptual stage and that sensory experiences in their daily lives are not attributable only to atypical Bayesian inference.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"13"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11745202/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attractive and repulsive visual aftereffects depend on stimulus contrast.
Nikos Gekas, Pascal Mamassian
Visual perception has been described as a dynamic process in which incoming visual information is combined with what has been seen before to form the current percept. Such a process can produce multiple visual aftereffects, attractive toward or repulsive away from past visual stimulation. Much research has addressed the functional role of the mechanisms that produce these aftereffects; however, the role of stimulus uncertainty in these aftereffects remains poorly understood. In this study, we investigate how the contrast of a stimulus affects the serial aftereffects it induces and how the stimulus itself is affected by these effects depending on its contrast. We presented human observers with a series of Gabor patches and monitored how the perceived orientation of stimuli changed over time under systematic manipulation of the orientation and contrast of the presented stimuli. We hypothesized that repulsive serial effects would be stronger for judgments of high-contrast than low-contrast stimuli, and the reverse for attractive serial effects. Our experimental findings confirm this strong interaction between contrast and the sign of the aftereffect. We present a Bayesian model observer that explains the interaction through two principles: dynamic changes of orientation-tuned channels over short timescales and slow integration of prior information over long timescales. Our findings have strong implications for our understanding of orientation perception and can inspire further work on identifying its neural mechanisms.
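A minimal Gaussian sketch of the Bayesian intuition behind contrast-dependent attraction (not the authors' full model; all numbers are assumed): the posterior mean is a reliability-weighted average of the current measurement and a prior centered on recent history, so lower contrast (noisier measurements) yields stronger attraction toward the past.

```python
import numpy as np

def posterior_mean(measurement, sigma_sensory, prior_mean, sigma_prior):
    # Standard Gaussian cue combination: weight on the measurement grows
    # with its reliability (inverse variance).
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)
    return w * measurement + (1 - w) * prior_mean

prior = 10.0                 # deg, orientation of recent history (assumed)
stimulus = 0.0               # deg, current orientation (assumed)
for label, sigma in [("high contrast", 2.0), ("low contrast", 8.0)]:
    est = posterior_mean(stimulus, sigma, prior, sigma_prior=5.0)
    print(f"{label}: estimate pulled to {est:.2f} deg (toward the prior)")
```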
{"title":"Attractive and repulsive visual aftereffects depend on stimulus contrast.","authors":"Nikos Gekas, Pascal Mamassian","doi":"10.1167/jov.25.1.10","DOIUrl":"10.1167/jov.25.1.10","url":null,"abstract":"<p><p>Visual perception has been described as a dynamic process where incoming visual information is combined with what has been seen before to form the current percept. Such a process can result in multiple visual aftereffects that can be attractive toward or repulsive away from past visual stimulation. A lot of research has been conducted on what functional role the mechanisms that produce these aftereffects may play. However, there is a lack of understanding of the role of stimulus uncertainty on these aftereffects. In this study, we investigate how the contrast of a stimulus affects the serial aftereffects it induces and how the stimulus itself is affected by these effects depending on its contrast. We presented human observers with a series of Gabor patches and monitored how the perceived orientation of stimuli changed over time with the systematic manipulation of orientation and contrast of presented stimuli. We hypothesized that repulsive serial effects would be stronger for the judgment of high-contrast than low-contrast stimuli, but the other way around for attractive serial effects. Our experimental findings confirm such a strong interaction between contrast and sign of aftereffects. We present a Bayesian model observer that can explain this interaction based on two principles, the dynamic changes of orientation-tuned channels in short timescales and the slow integration of prior information over long timescales. Our findings have strong implications for our understanding of orientation perception and can inspire further work on the identification of its neural mechanisms.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"10"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11725992/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benchmarking the speed-accuracy tradeoff in object recognition by humans and neural networks.
Ajay Subramanian, Sara Price, Omkar Kumbhar, Elena Sizikova, Najib J Majaj, Denis G Pelli
Active object recognition, fundamental to tasks like reading and driving, relies on the ability to make time-sensitive decisions. People exhibit a flexible tradeoff between speed and accuracy, a crucial human skill. However, current computational models struggle to incorporate time. To address this gap, we present the first dataset (with 148 observers) exploring the speed-accuracy tradeoff (SAT) in ImageNet object recognition. Participants performed a 16-way ImageNet categorization task in which responses counted only if they occurred near the time of a fixed-delay beep, so each block of trials enforced a single reaction time. As expected, human accuracy increases with reaction time. We compare human performance with that of dynamic neural networks that adapt their computation to the available inference time. Time is a scarce resource for human object recognition, and finding an appropriate analog in neural networks is challenging. Networks can repeat operations across layers, recurrent cycles, or early exits, so we use the repetition count as a network's analog for time. In our analysis, the number of layers, recurrent cycles, and early exits correlates strongly with floating-point operations, making them suitable time analogs. Comparing networks and humans on SAT-fit error, category-wise correlation, and SAT-curve steepness, we find cascaded dynamic neural networks most promising for modeling human speed and accuracy. Surprisingly, convolutional recurrent networks, typically favored in modeling human object recognition, perform worst on our benchmark.
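A common SAT parameterization (hedged: the paper's exact fitting function is not given in the abstract) has accuracy rising from chance toward an asymptote as a shifted exponential in reaction time; for a 16-way task, chance is 1/16:

```python
import numpy as np

def sat_curve(rt, asymptote, rate, intercept, chance=1/16):
    """Shifted-exponential SAT curve: chance below the intercept,
    exponential approach to the asymptote above it."""
    rt = np.asarray(rt, dtype=float)
    growth = 1.0 - np.exp(-np.maximum(rt - intercept, 0.0) / rate)
    return chance + (asymptote - chance) * growth

rts = np.array([0.2, 0.4, 0.8, 1.6])   # seconds; illustrative values
print(sat_curve(rts, asymptote=0.9, rate=0.3, intercept=0.15))
```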
{"title":"Benchmarking the speed-accuracy tradeoff in object recognition by humans and neural networks.","authors":"Ajay Subramanian, Sara Price, Omkar Kumbhar, Elena Sizikova, Najib J Majaj, Denis G Pelli","doi":"10.1167/jov.25.1.4","DOIUrl":"10.1167/jov.25.1.4","url":null,"abstract":"<p><p>Active object recognition, fundamental to tasks like reading and driving, relies on the ability to make time-sensitive decisions. People exhibit a flexible tradeoff between speed and accuracy, a crucial human skill. However, current computational models struggle to incorporate time. To address this gap, we present the first dataset (with 148 observers) exploring the speed-accuracy tradeoff (SAT) in ImageNet object recognition. Participants performed a 16-way ImageNet categorization task where their responses counted only if they occurred near the time of a fixed-delay beep. Each block of trials allowed one reaction time. As expected, human accuracy increases with reaction time. We compare human performance with that of dynamic neural networks that adapt their computation to the available inference time. Time is a scarce resource for human object recognition, and finding an appropriate analog in neural networks is challenging. Networks can repeat operations by using layers, recurrent cycles, or early exits. We use the repetition count as a network's analog for time. In our analysis, the number of layers, recurrent cycles, and early exits correlates strongly with floating-point operations, making them suitable time analogs. Comparing networks and humans on SAT-fit error, category-wise correlation, and SAT-curve steepness, we find cascaded dynamic neural networks most promising in modeling human speed and accuracy. Surprisingly, convolutional recurrent networks, typically favored in human object recognition modeling, perform the worst on our benchmark.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"4"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11706240/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eye posture and screen alignment with simulated see-through head-mounted displays.
Agostino Gibaldi, Yinghua Liu, Christos Kaspiris-Rousellis, Madhumitha S Mahadevan, Jenny C A Read, Björn N S Vlaskamp, Gerrit W Maus
When rendering the visual scene for near-eye head-mounted displays, accurate knowledge of the geometry of the displays, scene objects, and eyes is required to generate the binocular images correctly. Despite careful design and calibration, these quantities are subject to positional and measurement errors, resulting in some misalignment of the images projected to each eye. Previous research investigated such misalignments in virtual reality (VR) setups, where they triggered symptoms such as eye strain and nausea. This work investigated the effects of binocular vertical misalignment (BVM) in see-through augmented reality (AR). In such devices, two conflicting environments coexist: the real world, which lies in the background and forms geometrically aligned images on the retinas, and the augmented content, which stands out as foreground and might be subject to misalignment. We simulated a see-through AR environment using a standard three-dimensional (3D) stereoscopic display to obtain full control and high accuracy over the real and augmented contents. Participants performed a visual search task that forced them to alternately interact with the real and the augmented contents while being exposed to different amounts of BVM. The measured eye posture indicated that compensation for vertical misalignment is shared equally between the sensory (binocular fusion) and motor (vertical vergence) components of binocular vision. Sensitivity varied across participants, both in perceived discomfort and in misalignment tolerance, suggesting that per-user calibration might be useful for a comfortable visual experience.
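For concreteness, a small geometry sketch (the offsets and viewing distance are illustrative assumptions, not values from the paper) converting an on-screen vertical offset between the two eyes' images into BVM expressed in minutes of arc:

```python
import numpy as np

def bvm_arcmin(offset_mm, viewing_distance_mm):
    """Vertical disparity angle subtended by an on-screen offset."""
    return np.degrees(np.arctan2(offset_mm, viewing_distance_mm)) * 60.0

for offset in (0.5, 1.0, 2.0):   # mm of vertical offset on the display
    print(f"{offset} mm at 600 mm -> {bvm_arcmin(offset, 600.0):.1f} arcmin")
```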
{"title":"Eye posture and screen alignment with simulated see-through head-mounted displays.","authors":"Agostino Gibaldi, Yinghua Liu, Christos Kaspiris-Rousellis, Madhumitha S Mahadevan, Jenny C A Read, Björn N S Vlaskamp, Gerrit W Maus","doi":"10.1167/jov.25.1.9","DOIUrl":"10.1167/jov.25.1.9","url":null,"abstract":"<p><p>When rendering the visual scene for near-eye head-mounted displays, accurate knowledge of the geometry of the displays, scene objects, and eyes is required for the correct generation of the binocular images. Despite possible design and calibration efforts, these quantities are subject to positional and measurement errors, resulting in some misalignment of the images projected to each eye. Previous research investigated the effects in virtual reality (VR) setups that triggered such symptoms as eye strain and nausea. This work aimed at investigating the effects of binocular vertical misalignment (BVM) in see-through augmented reality (AR). In such devices, two conflicting environments coexist. One environment corresponds to the real world, which lies in the background and forms geometrically aligned images on the retinas. The other environment corresponds to the augmented content, which stands out as foreground and might be subject to misalignment. We simulated a see-through AR environment using a standard three-dimensional (3D) stereoscopic display to have full control and high accuracy of the real and augmented contents. Participants were involved in a visual search task that forced them to alternatively interact with the real and the augmented contents while being exposed to different amounts of BVM. The measured eye posture indicated that the compensation for vertical misalignment is equally shared by the sensory (binocular fusion) and the motor (vertical vergence) components of binocular vision. The sensitivity of each participant varied, both in terms of perceived discomfort and misalignment tolerance, suggesting that a per-user calibration might be useful for a comfortable visual experience.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"9"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11725991/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modality-, feature-, and strategy-dependent organization of low-level working memory.
Vivien Chopurian, Anni Kienke, Christoph Bledowski, Thomas B Christophel
Previous research has shown that, when multiple similar items are maintained in working memory, recall precision declines. Less is known about how heterogeneous sets of items across different features within and between modalities impact recall precision. In two experiments, we investigated modality (Experiment 1, n = 79) and feature-specific (Experiment 2, n = 154) load effects on working memory performance. First, we found a cross-modal advantage in continuous recall: Orientations that are memorized together with a pitch are recalled more precisely than orientations that are memorized together with another orientation. The results of our second experiment, however, suggest that this is not a pure effect of sensory modality but rather a feature-dependent effect. We combined orientations, pitches, and colors in pairs. We found that memorizing orientations together with a color benefits orientation recall to a similar extent as the cross-modal benefit. To investigate this absence of interference between orientations and colors held in working memory, we analyzed subjective reports of strategies used for the different features. We found that, although orientations and pitches rely almost exclusively on sensory strategies, colors are memorized not only visually but also with abstract and verbal strategies. Thus, although color stimuli are also visually presented, they might be represented by independent neural circuits. Our results suggest that working memory storage is organized in a modality-, feature-, and strategy-dependent way.
{"title":"Modality-, feature-, and strategy-dependent organization of low-level working memory.","authors":"Vivien Chopurian, Anni Kienke, Christoph Bledowski, Thomas B Christophel","doi":"10.1167/jov.25.1.16","DOIUrl":"https://doi.org/10.1167/jov.25.1.16","url":null,"abstract":"<p><p>Previous research has shown that, when multiple similar items are maintained in working memory, recall precision declines. Less is known about how heterogeneous sets of items across different features within and between modalities impact recall precision. In two experiments, we investigated modality (Experiment 1, n = 79) and feature-specific (Experiment 2, n = 154) load effects on working memory performance. First, we found a cross-modal advantage in continuous recall: Orientations that are memorized together with a pitch are recalled more precisely than orientations that are memorized together with another orientation. The results of our second experiment, however, suggest that this is not a pure effect of sensory modality but rather a feature-dependent effect. We combined orientations, pitches, and colors in pairs. We found that memorizing orientations together with a color benefits orientation recall to a similar extent as the cross-modal benefit. To investigate this absence of interference between orientations and colors held in working memory, we analyzed subjective reports of strategies used for the different features. We found that, although orientations and pitches rely almost exclusively on sensory strategies, colors are memorized not only visually but also with abstract and verbal strategies. Thus, although color stimuli are also visually presented, they might be represented by independent neural circuits. Our results suggest that working memory storage is organized in a modality-, feature-, and strategy-dependent way.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"16"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143054133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impaired visual perceptual accuracy in the upper visual field induces asymmetric performance in position estimation for falling and rising objects.
Takashi Hirata, Nobuyuki Kawai
Humans can estimate the time and position of a moving object's arrival. However, numerous studies have demonstrated superior position-estimation accuracy for descending objects compared with ascending objects. We tested whether the accuracy of position estimation for ascending and descending objects differs between the upper and lower visual fields. Using a head-mounted display, participants observed a target object ascending or descending toward a goal located 8.7° or 17.1° above or below the center of the display, in the upper and lower visual fields, respectively. Participants pressed a key to match the time of the target's arrival at the goal while keeping their gaze centered. For goals close to the center (8.7°), estimates were equally accurate for ascending and descending objects, whereas for goals far from the center (17.1°), position estimation for ascending targets in the upper visual field was inferior to the other conditions. Targets moved away from the center toward the far goals and toward the center for the near goals. Because positional accuracy for ascending and descending objects was not assessed separately at each of the four goals, it remains unclear whether the impairment was driven more by the eccentricity of the target's position or by the direction of its upward or downward motion. Taken together with previous studies, however, we suggest that estimating the position of objects moving away from the fovea into the upper visual field may have contributed to the asymmetry in position estimation for ascending and descending objects.
{"title":"Impaired visual perceptual accuracy in the upper visual field induces asymmetric performance in position estimation for falling and rising objects.","authors":"Takashi Hirata, Nobuyuki Kawai","doi":"10.1167/jov.25.1.1","DOIUrl":"https://doi.org/10.1167/jov.25.1.1","url":null,"abstract":"<p><p>Humans can estimate the time and position of a moving object's arrival. However, numerous studies have demonstrated superior position estimation accuracy for descending objects compared with ascending objects. We tested whether the accuracy of position estimation for ascending and descending objects differs between the upper and lower visual fields. Using a head-mounted display, participants observed a target object ascending or descending toward a goal located at 8.7° or 17.1° above or below from the center of the monitor in the upper and lower visual fields, respectively. Participants pressed a key to match the time of the target's arrival at the goal, with the gaze kept centered. For goals (8.7°) close to the center, ascending and descending objects were equally accurate, whereas for goals (17.1°) far from the center, the ascending target's position estimation in the upper visual field was inferior to the others. Targets moved away from the center for goals further from the center and closer to the center for goals nearer to the center. As the positional accuracy of ascending and descending objects was not assessed for each of the four goals, it remains unclear which was more important for impaired accuracy: the proximity of the target position or direction of the upward or downward motion. However, taken together with previous studies, we suggest that estimating the position of objects moving further away from the central fovea of the upper visual field may have contributed to the asymmetry in position estimation for ascending and descending objects.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"1"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142916008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}