Mohammed M Alnawmasi, Nawaf M Almutairi, Sieu K Khuu
The ability to maintain visual attention to track multiple moving objects has been reported to be impaired in individuals with mild traumatic brain injury (mTBI). We investigated whether deficits in multiple object tracking (MOT) following mTBI are associated with behavioral markers of attention, particularly cognitively driven pupillary dilation responses and eye movement patterns. Thirty-five adults were recruited. Pupillary responses and eye movements were tracked while participants performed an MOT task in which the duration of tracking (five and ten seconds), number of target dots (three, four, and five), and number of distractor dots (three, six, and nine) were the independent variables. Patients with mTBI had reduced pupil dilation when tracking a high number of target dots (four dots: mean difference [MD] = 0.79, p < 0.001; five dots: MD = 0.67, p < 0.001) compared to controls. Similarly, patients with mTBI had reduced pupil dilation when the number of distractor dots increased (six distractors: MD = 0.43, p < 0.001; nine distractors: MD = 0.46, p < 0.001) compared to controls. The reduced pupil dilation observed in patients with mTBI may reflect a limited mental capacity to meet increasing cognitive demands. Eye movement analysis showed that patients with mTBI made significantly more fixations, with shorter fixation durations, than controls, consistent with a local tracking strategy. In conclusion, measuring pupillary responses and eye movements while participants tracked multiple moving objects provided an indication of factors that may have contributed to the poorer performance of patients with mTBI.
"A pupillary and eye movement investigation of functional deficits in multiple object tracking following mild traumatic brain injury." Journal of Vision 25(12):7 (2025-10-01). doi:10.1167/jov.25.12.7
Object individuation, the process of endowing visual elements with objecthood, is known to have a limited capacity, as demonstrated by the subitizing phenomenon: the rapid and precise enumeration of small quantities (up to three or four items). Previous research has primarily focused on multiple object individuation when components defining each object are presented simultaneously. However, the impact of temporal factors remains understudied. This study investigates the role of temporal processing modes in subitizing. Specifically, we investigated whether subitizing remains feasible and maintains a comparable capacity when object-defining components are presented at different times and need to be either combined into a single object (temporal integration) or separated into distinct objects (temporal segregation). Across two experiments using paradigms based on the missing/odd element task, the impact of different temporal operations (integration vs. segregation) on subitizing was examined after task difficulty was equalized by individually adjusted inter-stimulus intervals. The results revealed that subitizing is a ubiquitous phenomenon even when target components are presented at different times. Critically, whether these components are temporally integrable or separable influences subitizing capacity. Temporal segregation exhibited a higher subitizing capacity and lower cognitive resource demands than temporal integration, likely because it prioritizes perceptual sensitivity to change over maintaining perceptual continuity and stability during the initial stage of object individuation. Additionally, temporal integration-based subitizing benefits more from an increased repetition of displays than temporal segregation-based subitizing. These findings demonstrate that task-dependent temporal processing modes modulate the efficiency and capacity of numerical individuation, underscoring the importance of temporal organization in multiple object individuation.
Yue Huang, Fengxiao Hao, Min Li, Hexing Zhong, Zhangjing Ma, Zhao Fan, Xianfeng Ding, Xiaorong Cheng. "Disentangling temporal integration and segregation in multiple object individuation." Journal of Vision 25(12):10 (2025-10-01). doi:10.1167/jov.25.12.10
Ultimately, human behavior needs to be understood in the context of natural everyday tasks. Over the last two decades, a number of observations of natural visually guided behavior have accumulated. These observations help define the functional demands placed on the visual system in a variety of tasks, but progress has been limited by the diversity of natural behavior and by the lack of unified theoretical structures to guide understanding of the underlying processes. In this article, we summarize some recent attempts that might provide a template for a more formal approach. This is possible because it has become clear that natural behavior has many regularities reflecting the underlying sensorimotor decisions. We first summarize these regularities and then show how simple visually guided behaviors can be well described by partially observable Markov decision processes. We give examples of how laboratory experiments can be designed to elicit the common elements of natural behavior and how such experiments afford control of the statistical structure of tasks, thereby allowing formal modeling. Finally, we suggest that an exciting new avenue using recently introduced inverse models may lead the way forward, as it recovers the intrinsic properties of human perception, cognition, and action, which are intertwined in natural behavior.
Constantin A Rothkopf, Mary M Hayhoe. "Computational elements of natural vision." Journal of Vision 25(12):4 (2025-10-01). doi:10.1167/jov.25.12.4
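The partially observable Markov decision process framing above can be made concrete with a toy example. The sketch below is purely illustrative (a hypothetical two-state world, not any model from the article): a hidden state is tracked through noisy observations with a Bayes-filter belief update, the core computation shared by POMDP accounts of visually guided behavior.

```python
import numpy as np

# Toy two-state POMDP belief update (a hypothetical sketch, not the authors'
# model): the hidden state is which of two regions contains a target, and
# the observer refines a belief over it from noisy observations.

T = np.array([[0.9, 0.1],   # T[s, s']: P(next state s' | current state s)
              [0.1, 0.9]])
O = np.array([[0.8, 0.2],   # O[s, o]: P(observation o | state s)
              [0.2, 0.8]])

def belief_update(belief, obs):
    """One Bayes-filter step: predict through T, then weight by likelihood."""
    predicted = T.T @ belief           # prediction through the dynamics
    posterior = O[:, obs] * predicted  # correction by the observation
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])
for _ in range(3):                     # three consistent observations of state 0
    belief = belief_update(belief, obs=0)
# belief now concentrates on state 0
```

A full POMDP additionally scores actions by expected reward over such beliefs; the belief update is the ingredient that makes the "partially observable" part tractable.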
Emily J A-Izzeddin, Thomas S A Wallis, Jason B Mattingley, William J Harrison
The mechanisms by which humans perceptually organize individual regions of a visual scene to generate a coherent scene representation remain largely unknown. Our perception of statistical regularities has been relatively well-studied in simple stimuli, and explicit computational mechanisms that use low-level image features (e.g., luminance, contrast energy) to explain these perceptions have been described. Here, we investigate to what extent observers can effectively use such low-level information present in isolated naturalistic scene regions to facilitate associations between said regions. Across two experiments, participants were shown an isolated reference patch, and then required to select which of two subsequently presented patches came from the same scene as the reference (two-alternative forced choice method). In Experiment 1, participants made their judgments based on unaltered image patches, and were consistently above chance when performing such association judgments. Additionally, participants' responses were well-predicted by a generalized linear multilevel model using predictors based on low-level feature similarity metrics (specifically, pixel-wise luminance and phase-invariant structure correlations). In Experiment 2, participants were presented with unaltered image regions, thresholded image regions, or regions reduced to only their edge content. Performance for thresholded and edge regions was significantly poorer than for unaltered image regions. Nonetheless, the model still correlated well with participants' judgments. Our findings suggest that image region associations can be accounted for by low-level feature correlations, indicating that such basic features are closely tied to those underlying judgments made for complex visual stimuli.
"Low-level features predict perceived similarity for naturalistic images." Journal of Vision 25(12):11 (2025-10-01). doi:10.1167/jov.25.12.11
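One of the similarity predictors named above, pixel-wise luminance correlation, is simple to compute. The sketch below uses synthetic patches (the data and thresholds are illustrative assumptions, not the study's stimuli or code) to show how a correlation-based predictor separates same-scene from different-scene patch pairs.

```python
import numpy as np

# Sketch of a pixel-wise luminance-correlation predictor, one of the
# low-level similarity metrics the study describes. Patches are synthetic.

def luminance_correlation(patch_a, patch_b):
    """Pearson correlation between the pixel luminances of two patches."""
    a = patch_a.ravel().astype(float)
    b = patch_b.ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
scene = rng.random((16, 16))                                # reference patch
same_scene = scene + 0.05 * rng.standard_normal((16, 16))   # related patch
other_scene = rng.random((16, 16))                          # unrelated patch

r_same = luminance_correlation(scene, same_scene)
r_other = luminance_correlation(scene, other_scene)
# r_same is high; r_other hovers near zero
```

In the study such metrics entered a generalized linear multilevel model as predictors of trial-by-trial choices; here they simply rank the two candidate patches.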
Kristian P Skoczek, Jennifer H Acton, John A Greenwood, Tony Redmond
Visual crowding is the disruptive effect of nearby details on the perception of a target. This influence is dependent on both spatial separation and perceived similarity between target and flanker elements. However, it is not clear how these simultaneous influences combine to produce the final "crowded" percept as flankers traverse the limits of the crowding zone. We investigated the reported appearance of a peripherally presented Landolt-C target flanked by a pair of simultaneously presented Landolt-Cs across different levels of target-flanker similarity (relative orientation), spatial separation, and target eccentricity. The distributions of errors in reported target orientation were fitted with a pooling model that simulated errors using a weighted combination of target and flanker orientation signals. The change in error distribution with target-flanker spacing (the "spatial profile") was fitted with a logistic function, estimating both the rate at which target- and flanker-signal weighting varies as target-flanker spatial separation decreases (slope) and the spatial separation at which signals were balanced (midpoint). We found that the slope of the spatial profile increases as target-flanker similarity decreases, with similar modulation patterns across target eccentricities. In contrast, spatial profile midpoints increased linearly with eccentricity, in line with Bouma's law, but were invariant to target-flanker similarity. This suggests that similarity-related modulation may operate within a fixed spatial extent at each eccentricity. Investigating the spatial profile of crowding disentangles effects related to the appearance of targets and flankers (i.e., similarity) from appearance-independent influences, which can be confounded when using other common measures to define crowding zone extent.
"Target-flanker similarity alters the spatial profile of visual crowding." Journal of Vision 25(12):17 (2025-10-01). doi:10.1167/jov.25.12.17
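The pooling-model idea described above can be sketched in a few lines: reported orientation is a weighted mixture of target and flanker signals, with the flanker weight following a logistic function of target-flanker spacing. All parameter values below (midpoint, slope, noise) are illustrative placeholders, not the fitted values from the study.

```python
import numpy as np

# Minimal sketch of a pooling model with a logistic spatial profile.
# Parameters are illustrative, not fitted to the study's data.

def flanker_weight(spacing, midpoint=2.0, slope=1.5):
    """Logistic spatial profile: weight on flanker signals vs. spacing (deg)."""
    return 1.0 / (1.0 + np.exp(slope * (spacing - midpoint)))

def simulate_reports(target_ori, flanker_ori, spacing, noise_sd=5.0, n=1000,
                     rng=np.random.default_rng(1)):
    """Reported orientations: weighted target/flanker mix plus response noise."""
    w = flanker_weight(spacing)
    pooled = (1 - w) * target_ori + w * flanker_ori
    return pooled + noise_sd * rng.standard_normal(n)

near = simulate_reports(0.0, 30.0, spacing=1.0)  # close flankers: strong pull
far = simulate_reports(0.0, 30.0, spacing=4.0)   # distant flankers: little pull
```

The study's analysis runs this logic in reverse: given the distributions of reported orientations at each spacing, it recovers the slope and midpoint of the logistic weighting function.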
Moshe Gur. "Seeing on the fly: No need for space-to-time encoding; saccade-generated transients enable fast, parallel representation of space." Journal of Vision 25(11):4 (2025-09-02). doi:10.1167/jov.25.11.4
Sumiya Sheikh Abdirashid, Tomas Knapen, Serge O Dumoulin
We alter our sampling of visual space not only by where we direct our gaze, but also by where and how we direct our attention. Attention attracts receptive fields toward the attended position, but our understanding of this process is limited. Here we show that the degree of this attraction toward the attended locus is dictated not just by the attended position, but also by the precision of attention. We manipulated attentional precision while using 7T functional magnetic resonance imaging to measure population receptive field (pRF) properties. Participants performed the same color-proportion detection task either focused at fixation (0.1° radius) or distributed across the entire display (>5° radius). We observed blood oxygenation level-dependent response amplitude increases as a function of the task, with selective increases in foveal pRFs for the focused attention task and vice versa for the distributed attention task. Furthermore, cortical spatial tuning changed as a function of attentional precision. Specifically, focused attention more strongly attracted pRFs toward the attended locus compared with distributed attention. This attraction also depended on the degree of overlap between a pRF and the attention field. A Gaussian attention field model with an offset on the attention field explained our results. Together, our observations indicate the spatial distribution of attention dictates the degree of its resampling of visual space.
"The precision of attention controls attraction of population receptive fields." Journal of Vision 25(11):3 (2025-09-02). doi:10.1167/jov.25.11.3
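The Gaussian attention-field account described above has a compact closed form in one dimension: if the measured pRF is the product of a stimulus-driven Gaussian pRF and a Gaussian attention field, its center is a precision-weighted average of the two means, so narrower (more precise) attention pulls the pRF further toward the attended locus. The numbers below are illustrative, not the paper's fitted parameters.

```python
# Sketch of multiplicative Gaussian attention-field attraction of a pRF
# (in the spirit of the model the abstract describes; values illustrative).

def attracted_center(prf_mu, prf_sigma, att_mu, att_sigma):
    """Center of the product of two 1-D Gaussians (precision-weighted mean)."""
    w_prf = 1.0 / prf_sigma**2
    w_att = 1.0 / att_sigma**2
    return (w_prf * prf_mu + w_att * att_mu) / (w_prf + w_att)

# A pRF at 6 deg eccentricity, attention directed at fixation (0 deg):
focused = attracted_center(6.0, 2.0, 0.0, att_sigma=0.5)      # precise attention
distributed = attracted_center(6.0, 2.0, 0.0, att_sigma=5.0)  # broad attention
# focused attention drags the pRF center much closer to fixation
```

This reproduces the qualitative result in the abstract: the same attended position yields much stronger attraction when the attention field is narrow than when it is distributed.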
Norick R Bowers, Karl R Gegenfurtner, Alexander Goettker
The contrast sensitivity function (CSF) has been studied extensively; however, most studies have focused on the central region of the visual field. The current study aims to address two gaps in previous measurements: first, it provides a detailed measurement of the CSF for both achromatic and, importantly, chromatic stimuli in the far periphery, up to 90 degrees of visual angle. Second, we describe visual sensitivity around the monocular/binocular boundary that is naturally present in the periphery. In the first experiment, the CSF was measured in three different conditions: stimuli were either achromatic (L + M), red-green (L - M), or yellow-violet (S - (L + M)) Gabor patches. Overall, results followed the expected patterns established in the near periphery. However, achromatic sensitivity in the far periphery was mostly underestimated by current models of visual perception, and the decay in sensitivity observed for red-green stimuli slows down in the periphery. The decay of sensitivity for yellow-violet stimuli roughly matches that of achromatic stimuli. For the second experiment, we compared binocular and monocular visual sensitivity at different locations in the visual field. We observed a consistent increase in visual sensitivity for binocular viewing in the central part of the visual field compared to monocular viewing, but this benefit already decreased within the binocular visual field in the periphery. Together, these data provide a detailed description of visual sensitivity in the far periphery. These measurements can help to improve current models of visual sensitivity and can be vital for applications in full-field visual displays in virtual and augmented reality.
"Chromatic and achromatic contrast sensitivity in the far periphery." Journal of Vision 25(11):7 (2025-09-02). doi:10.1167/jov.25.11.7
Frederick A A Kingdom, Xingao Clara Wang, Huayun Li, Yoel Yakobi
Numerous studies have shown that sensitivity to binocular targets is higher than to their monocular components, a phenomenon known as binocular summation. Binocular summation has been demonstrated with luminance contrast targets that are not only interocularly in-phase, that is, identical in both eyes, but also interocularly anti-phase, that is, of opposite polarity in the two eyes. Here we show that for the detection of anti-phase targets defined along the red-cyan and violet-lime axes of cardinal color space, two eyes are more often than not worse than one. We suggest this is because channels that detect interocular differences, or S- channels, are relatively insensitive to chromatic stimuli. We tested this idea by measuring binocular summation for chromatic anti-phase targets in the context of a chromatic surround that itself was either interocularly in-phase or anti-phase. The anti-phase surrounds further reduced binocular summation for the anti-phase targets, whereas the in-phase surrounds increased the level of summation. We show that a model that combines, via probability summation, the independent activities of adding S+ and differencing S- channels gave a good account of the data, especially for the anti-phase targets. We conclude that binocular adding and differencing channels play an important role in binocular color vision.
When two eyes are worse than one: Binocular summation for chromatic, interocular-anti-phase stimuli. Journal of Vision, 25(11):15. doi:10.1167/jov.25.11.15
Luke Huszar, Tair Vizel, Marisa Carrasco
The sensory recruitment hypothesis posits that visual working memory (VWM) maintenance uses the same cortical machinery as online perception, implying similarity between the two. Characterizing the similarities and differences between these representations is critical for understanding how perceptions are reformatted into durable working memories. It is unknown whether the changes in perceptual appearance brought about by attention are maintained in VWM. We investigated how VWM depends on attentional state by examining how transient modulations from reflexive (exogenous) attentional orienting affect the appearance of VWM representations; in particular, whether VWM takes a "snapshot" at the time of encoding, or whether transient attentional dynamics continue into VWM. Specifically, we assessed whether the transient modulation of perceived contrast caused by exogenous attention is preserved when attended stimuli are encoded and maintained in VWM. Observers performed a delayed contrast comparison task in which one stimulus had to be held in VWM across a delay and compared to a second stimulus. Exogenous attention was manipulated through transient pre-cues appearing above the location of the first, second, or both stimuli before their onset. Model comparisons revealed that the transient attentional boost to perceived contrast persisted in VWM across the delay. This result indicates that VWM maintains a "snapshot" of the attentionally modulated perceptual representation at the time of encoding, and suggests that attentional effects on vision enable us to select and protect in VWM visual information relevant to cognition and action.
Transient increases to apparent contrast by exogenous attention persist in visual working memory. Journal of Vision, 25(11):13. doi:10.1167/jov.25.11.13
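A minimal sketch of the logic behind the delayed contrast comparison: if exogenous attention applies a multiplicative gain to the remembered stimulus at encoding and that gain persists in VWM, the point of subjective equality (PSE) shifts accordingly. The logistic decision rule, the `boost` multiplier, and all parameter values here are hypothetical illustrations, not the authors' fitted model:

```python
import math

def p_choose_second(c_mem, c_test, boost=1.1, sigma=0.05):
    """Probability of judging the second (test) stimulus as higher in
    contrast, given the first stimulus was encoded into VWM with an
    attentional gain `boost` (assumed multiplicative, assumed to persist)."""
    d = c_test - boost * c_mem  # boost > 1: cued memory appears higher-contrast
    return 1.0 / (1.0 + math.exp(-d / sigma))  # logistic decision rule

def pse(c_mem, boost=1.1):
    """Point of subjective equality: the test contrast at which the
    observer is indifferent (p = 0.5) between memory and test."""
    return boost * c_mem
```

If the attentional modulation did not persist across the delay, the fitted `boost` would return to 1.0 and the PSE would sit at the veridical contrast; the model comparison in the abstract distinguishes exactly these alternatives.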