Predicting what and when across saccades: Evidence from the extrafoveal preview effect
David Melcher, Michele Deodato
Journal of Vision, 26(2), article 3 (2026). doi:10.1167/jov.26.2.3

In order to fully process items of interest, we use information from outside the fovea to plan the target of the next saccadic eye movement. There is growing evidence that our initial preview of the identity and features of the saccade target, prior to bringing it to the fovea with the saccade, makes our subsequent post-saccadic processing more efficient. However, the mechanisms underlying trans-saccadic previews remain unknown. We investigated this in a gaze-contingent preview paradigm in which a face stimulus either remained the same ("valid preview") or changed ("invalid preview") during the saccadic eye movement. On some trials, a brief blank gap was added at the beginning of the new fixation, before the face was presented at the fovea. Although the expected preview benefit was found when the face stimulus was present after the saccade, the addition of the blank period eliminated the preview effect. Our results suggest that the preview effect relies on a sensorimotor prediction about both "what" will be present at the fovea after the saccade and "when" the new fixation will begin. These findings provide further evidence for an active, predictive mechanism for trans-saccadic perception.
The visual perception of relative mass from object collisions
James T Todd, J Farley Norman
Journal of Vision, 26(2), article 1 (2026). doi:10.1167/jov.26.2.1

Previous research has shown that observers can make reliable judgments about the relative mass of moving objects that collide in animated displays. One popular explanation is that observers' judgments are based on an internal model of Newtonian dynamics. An alternative explanation is that these judgments are based on measurable optical properties that are correlated with relative mass. To better understand this issue, the present investigation reanalyzed the data from three previous studies by Mitko and Fischer (2023), Sanborn et al. (2013), and Todd and Warren (1982), and it replicated an additional study by Hamrick et al. (2016). These new analyses demonstrate that observers' judgments of relative mass are most likely based on the post-collision optical velocities of the objects, without having to invoke an implausible mental representation of Newtonian dynamics, as has been argued by several previous investigators.
Texture density discrimination is more precise than number discrimination
Frank H Durgin, Nichole Suero Gonzalez, Ping Wen, Alexander C Huk
Journal of Vision, 26(2), article 2 (2026). doi:10.1167/jov.26.2.2

Density information is a possible primitive for the perception of numerosity. It has been argued, however, that the perception of numerosity is more precise than density perception at low numbers, whereas density is more precise for high numbers. An interpretive problem with the stimuli used to make those claims is that the actual stimulus density was often misspecified, owing to an ambiguity regarding the idealized versus actual filled area. This ambiguity had the effect of underestimating density precision at low numerosities. Here we used a novel method of stimulus generation that allowed us to specify stimulus density accurately, independent of patch size and number, while varying patch size from trial to trial to dissociate numerosity and density. For both numerosity discrimination and density discrimination, we presented single stimuli in central vision for comparison with an internal standard. Feedback was given after each judgment. Using well-defined densities, density discrimination was more precise than numerosity discrimination at all densities and showed no evidence of varying as a function of density, as previously hypothesized. This was found with 8 practiced observers and then replicated in a pre-registered study with 32 observers. As expected, feedback nullified size biases on number judgments, showing that observers were adaptively combining density and size. Reanalysis of data from a recent investigation of downward-sloping Weber fractions for numerosity showed that the square-root-like effects in those sorts of studies were most likely owing to reductions in patch size variance that were correlated with increases in density.
Rapid ensemble encoding of average scene features
Vignash Tharmaratnam, Jason Haberman, Jonathan S Cant
Journal of Vision, 26(1), article 3 (2026). doi:10.1167/jov.26.1.3

Visual ensemble perception involves the rapid, global extraction of summary statistics (e.g., average features) from groups of items, without requiring single-item recognition or working memory resources. One theory that helps explain global visual perception is the principle of feature diagnosticity: bottom-up visual features that are informative for the task at hand are preferentially processed, consistent with one's top-down expectations. Past literature has studied ensemble perception using groups of objects and faces and has shown that both low-level (e.g., average color, orientation) and high-level visual statistics (e.g., average crowd animacy, object economic value) can be efficiently extracted. However, no study has explored whether summary statistics can be extracted from stimuli of higher visual complexity, which necessitate global, gist-based processing for perception. To investigate this, across five experiments we had participants extract various summary statistical features from ensembles of real-world scenes. We found that average scene content (i.e., the perceived naturalness or manufacturedness of scene ensembles) and average spatial boundary (i.e., the perceived openness or closedness of scene ensembles) could be rapidly extracted within 125 ms, without reliance on working memory. Interestingly, when we rotated the scenes, average scene orientation could not be extracted, likely because the perception of diagnostic edge information (i.e., cardinal edges for typically encountered upright scenes) was disrupted by the rotation. These results suggest that ensemble perception is a flexible resource that can be used to extract summary statistical information across multiple stimulus types, but that it also has limitations based on the principle of feature diagnosticity in global visual perception.
Perceptual resolution of ambiguity: A divisive normalization account for both interocular color grouping and difference enhancement
Jaelyn R Peiso, Stephanie E Palmer, Steven K Shevell
Journal of Vision, 26(1), article 8 (2026). doi:10.1167/jov.26.1.8

Our visual system usually provides a unique and functional representation of the external world. At times, however, there is more than one compelling interpretation of the same retinal stimulus; in this case, neural populations compete for perceptual dominance to resolve ambiguity. Spatial and temporal context can guide this perceptual experience. Recent evidence shows that ambiguous retinal stimuli are sometimes resolved by enhancing either similarities or differences among multiple ambiguous stimuli. Although rivalry has traditionally been attributed to differences in stimulus strength, color vision introduces nonlinearities that are difficult to reconcile with luminance-based models. Here, it is shown that a tuned, divisive normalization framework can explain how perceptual selection can flexibly yield either similarity-based "grouped" percepts or difference-enhanced percepts during binocular rivalry. Empirical and simulated results show that divisive normalization can account for perceptual representations of either similarity enhancement (so-called grouping) or difference enhancement, offering a unified framework for opposite perceptual outcomes.
Differential effects of attention and contrast on transition appearance during binocular rivalry
Cemre Yilmaz, Kerstin Maitz, Maximilian Gerschütz, Wilfried Grassegger, Anja Ischebeck, Andreas Bartels, Natalia Zaretskaya
Journal of Vision, 26(1), article 14 (2026). doi:10.1167/jov.26.1.14

Binocular rivalry occurs when the two eyes are presented with conflicting stimuli. Although the physical stimulation stays the same, the conscious percept changes over time. This property makes it a unique paradigm in both vision science and consciousness research. Two key parameters, contrast and attention, have repeatedly been shown to affect binocular rivalry dynamics in a similar manner. This has been taken as evidence that attention acts by enhancing effective stimulus contrast. Brief transition periods between the two clear percepts have so far been much less investigated. In a previous study, we demonstrated that transition periods can appear in different forms depending on the stimulus type and the observer. In the current study, we investigated how attention and contrast affect transition appearance. Observers viewed binocular rivalry and reported their perception of the four most common transition types by button press while either the stimulus contrast or the locus of exogenous attention was manipulated. We show that contrast and attention similarly affect the overall binocular rivalry dynamics, but that their effects on the appearance of transitions differ. These results suggest that the effect of attention is different from a simple enhancement of stimulus strength, a difference that becomes evident only when the different transition types are considered.
EasyEyes: Crowded dynamic fixation for online psychophysics
Fengping Hu, Joyce Y Chen, Denis G Pelli, Jonathan Winawer
Journal of Vision, 26(1), article 18 (2026). doi:10.1167/jov.26.1.18

Online vision testing enables efficient data collection from diverse participants, but often requires accurate fixation. When needed, fixation accuracy is traditionally ensured by using a camera to track gaze. That works well in the laboratory, but tracking during online testing with a built-in webcam is not yet sufficiently precise. Kurzawski, Pombo, et al. (2023) introduced a fixation task that improves fixation through hand-eye coordination, requiring participants to track a moving crosshair with a mouse-controlled cursor. This dynamic fixation task greatly reduces peeking at peripheral targets relative to a stationary fixation task, but does not eliminate it. Here, we introduce a crowded dynamic fixation task that further enhances fixation by adding clutter around the fixation mark. We assessed fixation accuracy during peripheral threshold measurement. Relative to the root mean square gaze error during the stationary fixation task, the dynamic fixation error was 55%, whereas the crowded dynamic fixation error was only 40%. With a 1.5° tolerance, peeking occurred on 7% of trials with stationary fixation, 1.5% with dynamic fixation, and 0% with crowded dynamic fixation. This improvement eliminated implausibly low peripheral thresholds, likely by preventing peeking. We conclude that crowded dynamic fixation provides accurate gaze control for online testing.
Dissociated temporal and spatial impairments of microsaccade dynamics in homonymous hemianopia following ischemic stroke
Ying Gao, Huiguang He, Bernhard A Sabel
Journal of Vision, 26(1), article 17 (2026). doi:10.1167/jov.26.1.17

This study examines the temporal and spatial components of microsaccade dynamics in homonymous hemianopia (HH) after ischemic stroke and their association with patients' visual impairments. Eye position data were recorded during visual field testing in 15 patients with HH and 15 controls. Microsaccade rate (temporal) and direction (spatial) dynamics in HH were analyzed across visual field sectors with varying defect depth and compared with controls. Support vector machines were trained to characterize the visual field defects in HH based on microsaccade dynamics. Compared with controls, patients exhibited stronger microsaccadic inhibition in the sighted areas and postponed, stronger microsaccadic inhibition in areas of residual vision (ARVs). Meanwhile, a rebound was evident in the sighted areas but absent in the ARVs and blind areas. In controls, microsaccades surviving the inhibition were more attracted toward the stimulus, whereas microsaccades after the inhibition were directed away from the stimulus; this pattern was not observed in HH. The dissociated temporal and spatial impairments of microsaccade dynamics suggest multiple impairments of the visual and oculomotor networks in HH. Based on the microsaccadic phase signature underlying microsaccade rate dynamics, we characterized patients' visual field defects and discovered regions with residual function inside both the blind and sighted hemifields. These findings suggest that monitoring microsaccade dynamics may provide valuable supplementary information beyond that captured by behavioral responses.
What affects the movement can be seen from the movement: Effects of optical information and dynamical constraints on movement production and perception
Huiyuan Zhang, Feifei Jiang, Yijing Mao, Xian Yang, Jing Samantha Pan
Journal of Vision, 26(1), article 6 (2026). doi:10.1167/jov.26.1.6

This study investigates how optical information and dynamical constraints influence movement production and perception. In Experiment 1, 16 volunteers walked or performed a Y-balance movement, with and without sight, on sturdy or foam-padded floors. The optical information and force environment affected the participants' kinematics: stride duration, stride length, stride width, gait speed, and joint ranges of motion for walking, and total movement duration and joint ranges of motion for the Y-balance. Naïve observers then watched these movements on a point-light display and distinguished movements executed under different optical information (Experiment 2) and force environment (Experiment 3) conditions. They were able to pick out movements performed without sight, especially those performed on a padded floor; they were also able to discriminate movements performed on different supporting surfaces, especially when the actors were blindfolded. Thus, discriminating movement conditions from point-light displays was possible, and it was better when kinematic variability was higher. Logistic regressions showed that discrimination relied on the kinematic measures that varied the most between conditions. This information was valid and useful regardless of viewing perspective; that is, whether the walking and Y-balance movements were displayed in the frontal or the side view, perceptual performance was equivalent. Thus, both optical information and dynamical constraints shape movement patterns in ways that are perceptible through kinematic variations.
Mapping the visual cortex with Zebra noise and wavelets
Sophie Skriabine, Maxwell Shinn, Samuel Picard, Kenneth D Harris, Matteo Carandini
Journal of Vision, 26(1), article 1 (2026). doi:10.1167/jov.26.1.1

Studies of the early visual system often require characterizing the visual preferences of large populations of neurons. This task typically requires multiple stimuli such as sparse noise and drifting gratings, each of which probes only a limited set of visual features. Here, we introduce a new dynamic stimulus with sharp-edged stripes that we term Zebra noise and a new analysis model based on wavelets, and we show that in combination they are highly efficient for mapping multiple aspects of the visual preferences of thousands of neurons. We used two-photon calcium imaging to record the activity of neurons in the mouse visual cortex. Zebra noise elicited strong responses that were more repeatable than those evoked by traditional stimuli. The wavelet-based model captured the repeatable aspects of the resulting responses, providing measures of neuronal tuning for multiple stimulus features: position, orientation, size, spatial frequency, drift rate, and direction. The method proved efficient, requiring only 5 minutes of stimulus (repeated three times) to characterize the tuning of thousands of neurons across visual areas. In combination, the Zebra noise stimulus and the wavelet-based model provide a broadly applicable toolkit for the rapid characterization of visual representations, promising to accelerate future studies of visual function.