Relating visual and pictorial space: Integration of binocular disparity and motion parallax.
Xiaoye Michael Wang, Nikolaus F Troje
Journal of Vision, 24(13), 7. doi:10.1167/jov.24.13.7

Traditionally, perceptual spaces are defined by the medium through which the visual environment is conveyed (e.g., in a physical environment, through a picture, or on a screen). This approach overlooks the distinct contributions of different types of visual information, such as binocular disparity and motion parallax, that transform different visual environments to yield different perceptual spaces. The current study proposes a new approach to describe different perceptual spaces based on different visual information. A geometrical model was developed to delineate the transformations imposed by binocular disparity and motion parallax, including (a) a relief depth scaling along the observer's line of sight and (b) pictorial distortions that rotate the entire perceptual space, as well as the invariant properties after these transformations, including distance, three-dimensional shape, and allocentric direction. The model was fitted to the behavioral results from two experiments, wherein the participants rotated a human figure to point at different targets in virtual reality. The pointer was displayed on a virtual frame that could differentially manipulate the availability of binocular disparity and motion parallax. The model fitted the behavioral results well, and model comparisons validated the relief scaling in the form of depth expansion and the pictorial distortions in the form of an isotropic rotation. Fitted parameters showed that binocular disparity renders distance invariant but also introduces relief depth expansion to three-dimensional objects, whereas motion parallax keeps allocentric direction invariant. We discuss the implications of the mediating effects of binocular disparity and motion parallax when connecting different perceptual spaces.
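As a concrete reading of the model's two transformations, the sketch below applies a relief depth scaling along the line of sight followed by a rotation of the whole space to a 3D point. The axis conventions, function names, and parameter values are illustrative assumptions, not the authors' notation.

# A minimal sketch of the two transformations the abstract describes,
# assuming (hypothetically) that the line of sight is the z-axis and
# that the pictorial distortion is a rotation about the vertical (y) axis.
import numpy as np

def relief_scale(points, depth_gain):
    """Expand (gain > 1) or compress (gain < 1) depth along the z-axis."""
    scaled = points.copy()
    scaled[:, 2] *= depth_gain
    return scaled

def pictorial_rotation(points, rotation_deg):
    """Rotate the entire perceptual space about the vertical axis."""
    t = np.radians(rotation_deg)
    R = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0,       1.0, 0.0],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return points @ R.T

# Example: a physical target at (x, y, z) = (0.5, 0.0, 2.0) meters,
# expanded in depth and then rotated, gives its modeled perceived location.
target = np.array([[0.5, 0.0, 2.0]])
perceived = pictorial_rotation(relief_scale(target, depth_gain=1.3),
                               rotation_deg=5.0)
print(perceived)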
{"title":"Relating visual and pictorial space: Integration of binocular disparity and motion parallax.","authors":"Xiaoye Michael Wang, Nikolaus F Troje","doi":"10.1167/jov.24.13.7","DOIUrl":"10.1167/jov.24.13.7","url":null,"abstract":"<p><p>Traditionally, perceptual spaces are defined by the medium through which the visual environment is conveyed (e.g., in a physical environment, through a picture, or on a screen). This approach overlooks the distinct contributions of different types of visual information, such as binocular disparity and motion parallax, that transform different visual environments to yield different perceptual spaces. The current study proposes a new approach to describe different perceptual spaces based on different visual information. A geometrical model was developed to delineate the transformations imposed by binocular disparity and motion parallax, including (a) a relief depth scaling along the observer's line of sight and (b) pictorial distortions that rotate the entire perceptual space, as well as the invariant properties after these transformations, including distance, three-dimensional shape, and allocentric direction. The model was fitted to the behavioral results from two experiments, wherein the participants rotated a human figure to point at different targets in virtual reality. The pointer was displayed on a virtual frame that could differentially manipulate the availability of binocular disparity and motion parallax. The model fitted the behavioral results well, and model comparisons validated the relief scaling in the form of depth expansion and the pictorial distortions in the form of an isotropic rotation. Fitted parameters showed that binocular disparity renders distance invariant but also introduces relief depth expansion to three-dimensional objects, whereas motion parallax keeps allocentric direction invariant. We discuss the implications of the mediating effects of binocular disparity and motion parallax when connecting different perceptual spaces.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"7"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11640909/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142803030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corrections to: Mapping spatial frequency preferences across human primary visual cortex.","authors":"","doi":"10.1167/jov.24.13.8","DOIUrl":"10.1167/jov.24.13.8","url":null,"abstract":"","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"8"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11629902/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142803024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Increased light scatter in simulated cataracts degrades speed perception.
Samantha L Strong, Ayah I Al-Rababah, Leon N Davies
Journal of Vision, 24(13), 12. doi:10.1167/jov.24.13.12
Changes in contrast and blur affect speed perception, raising the question of whether natural changes in the eye (e.g., cataract) that induce light scatter may affect motion perception. This study investigated whether light scatter, similar to that present in a cataractous eye, could have deleterious effects on speed perception. Experiment 1: Participants (n = 14) completed a speed discrimination task using random dot kinematograms. The just-noticeable difference was calculated for two reference speeds (slow; fast) and two directions (translational; radial). Light scatter was induced with filters across four levels: baseline, mild, moderate, severe. Repeated measures analyses of variance (ANOVAs) found significant main effects of scatter on speed discrimination for radial motion (slow F(3, 39) = 7.33, p < 0.01; fast F(3, 39) = 4.80, p < 0.01). Discrimination was attenuated for moderate (slow p = 0.021) and severe (slow p = 0.024; fast p = 0.017) scatter. No effect was found for translational motion. Experiment 2: Participants (n = 14) completed a time-to-contact experiment for three speeds (slow, moderate, fast). Light scatter was induced as in Experiment 1. Results showed that increasing scatter led to perceptual slowing. Repeated measures ANOVAs revealed that moderate (F(3, 39) = 3.57, p = 0.023) and fast (F(1.42, 18.48) = 5.63, p = 0.020) speeds were affected by the increasing light scatter. Overall, speed discrimination is attenuated by increasing light scatter, which appears to be driven by perceptual slowing of the stimuli.
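For readers unfamiliar with the dependent measure, the sketch below shows one standard way a just-noticeable difference is extracted in a speed-discrimination task: fit a cumulative Gaussian to the proportion of "test faster" responses and take the 50%-to-75% span. The data values and fitting details are illustrative assumptions, not the paper's.

# A minimal sketch of JND estimation from a psychometric function.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

test_speeds = np.array([4.0, 4.5, 5.0, 5.5, 6.0])    # deg/s; reference = 5 deg/s
p_faster    = np.array([0.05, 0.20, 0.50, 0.80, 0.95])  # made-up response rates

def cum_gauss(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, test_speeds, p_faster, p0=[5.0, 0.5])
jnd = sigma * norm.ppf(0.75)   # distance from the 50% to the 75% point
print(f"PSE = {mu:.2f} deg/s, JND = {jnd:.2f} deg/s")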
{"title":"Increased light scatter in simulated cataracts degrades speed perception.","authors":"Samantha L Strong, Ayah I Al-Rababah, Leon N Davies","doi":"10.1167/jov.24.13.12","DOIUrl":"10.1167/jov.24.13.12","url":null,"abstract":"<p><p>Changes in contrast and blur affect speed perception, raising the question of whether natural changes in the eye (e.g., cataract) that induce light scatter may affect motion perception. This study investigated whether light scatter, similar to that present in a cataractous eye, could have deleterious effects on speed perception. Experiment 1: Participants (n = 14) completed a speed discrimination task using random dot kinematograms. The just-noticeable difference was calculated for two reference speeds (slow; fast) and two directions (translational; radial). Light scatter was induced with filters across four levels: baseline, mild, moderate, severe. Repeated measures analyses of variance (ANOVAs) found significant main effects of scatter on speed discrimination for radial motion (slow F(3, 39) = 7.33, p < 0.01; fast F(3, 39) = 4.80, p < 0.01). Discrimination was attenuated for moderate (slow p = 0.021) and severe (slow p = 0.024; fast p = 0.017) scatter. No effect was found for translational motion. Experiment 2: Participants (n = 14) completed a time-to-contact experiment for three speeds (slow, moderate, fast). Light scatter was induced as Experiment 1. Results show increasing scatter led to perceptual slowing. Repeated measures ANOVAs revealed that moderate (F(3, 39) = 3.57, p = 0.023) and fast (F(1.42, 18.48) = 5.63, p = 0.020) speeds were affected by the increasing light scatter. Overall, speed discrimination is attenuated by increasing light scatter, which seems to be driven by a perceptual slowing of stimuli.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"12"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668353/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142865864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low sensitivity for orientation in texture similarity ratings.
Hans-Christoph Nothdurft
Journal of Vision, 24(13), 14. doi:10.1167/jov.24.13.14

Research on visual texture perception over the last decades has often been devoted to segmentation and region segregation. In this report, I address a different aspect: texture identification and similarity ratings between texture fields with different texture properties superimposed. In a series of experiments, I noticed that certain feature dimensions were considered more important for similarity evaluation than others. A particularly low ranking is given to orientation. This paper reports data from two test series: a comparison of color and line orientation, and a comparison of two purely spatial properties, texture granularity (spatial frequency) and texture orientation. In both experiments, observers tended to ignore orientation when grouping texture patches for similarity and instead looked for similarities in the second dimension, color or spatial frequency, even across different orientations.
{"title":"Low sensitivity for orientation in texture similarity ratings.","authors":"Hans-Christoph Nothdurft","doi":"10.1167/jov.24.13.14","DOIUrl":"10.1167/jov.24.13.14","url":null,"abstract":"<p><p>Research on visual texture perception in the last decades was often devoted to segmentation and region segregation. In this report, I address a different aspect, that of texture identification and similarity ratings between texture fields with different texture properties superimposed. In a series of experiments, I noticed that certain feature dimensions were considered as more important for similarity evaluation than others. A particularly low ranking is given to orientation. This paper reports data from two test series: a comparison of color and line orientation and a comparison of two purely spatial properties, texture granularity (spatial frequency) and texture orientation. In both experiments, observers tended to ignore orientation when grouping texture patches for similarity and instead looked for similarities in the second dimension, color or spatial frequency, even across different orientations.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"14"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668350/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142865879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensory feedback modulates Weber's law of both perception and action.
Ailin Deng, Evan Cesanek, Fulvio Domini
Journal of Vision, 24(13), 10. doi:10.1167/jov.24.13.10

Weber's law states that estimation noise is proportional to stimulus intensity. Although this holds in perception, it appears absent in visually guided actions where response variability does not scale with object size. This discrepancy is often attributed to dissociated visual processing for perception and action. Here, we explore an alternative explanation: It is the influence of sensory feedback on motor output that causes this apparent violation. Our research investigated response variability across repeated grasps relative to object size and found that the variability pattern is contingent on sensory feedback. Pantomime grasps with neither online visual feedback nor final haptic feedback showed variability that scaled with object size, as expected by Weber's law. However, this scaling diminished when sensory feedback was available, either directly present in the movement (Experiment 1) or in adjacent movements in the same block (Experiment 2). Moreover, a simple visual cue indicating performance error similarly reduced the scaling of variability with object size in manual size estimates, the perceptual counterpart of grasping responses (Experiment 3). These results support the hypothesis that sensory feedback modulates motor responses and their associated variability across both action and perception tasks. Post hoc analyses indicated that the reduced scaling of response variability with object size could be due to changes in motor mapping, the process mapping visual size estimates to motor outputs. Consequently, the absence of Weber's law in action responses might not indicate distinct visual processing but rather adaptive changes in motor strategies based on sensory feedback.
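A small numerical illustration of the contrast the abstract draws, under assumed values: Weber's law predicts a response SD proportional to object size (a constant coefficient of variation), whereas feedback-rich grasping reportedly shows a roughly flat SD across sizes. The Weber fraction and SD values here are hypothetical, not the paper's data.

# Weber scaling vs. the flat variability pattern seen with feedback.
sizes = [20, 40, 60, 80]                  # object sizes, mm (assumed)
k = 0.05                                  # Weber fraction (assumed)
weber_sd = [k * s for s in sizes]         # scales with size: 1, 2, 3, 4 mm
flat_sd = [2.0 for _ in sizes]            # roughly constant, as with feedback
print(weber_sd, flat_sd)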
{"title":"Sensory feedback modulates Weber's law of both perception and action.","authors":"Ailin Deng, Evan Cesanek, Fulvio Domini","doi":"10.1167/jov.24.13.10","DOIUrl":"10.1167/jov.24.13.10","url":null,"abstract":"<p><p>Weber's law states that estimation noise is proportional to stimulus intensity. Although this holds in perception, it appears absent in visually guided actions where response variability does not scale with object size. This discrepancy is often attributed to dissociated visual processing for perception and action. Here, we explore an alternative explanation: It is the influence of sensory feedback on motor output that causes this apparent violation. Our research investigated response variability across repeated grasps relative to object size and found that the variability pattern is contingent on sensory feedback. Pantomime grasps with neither online visual feedback nor final haptic feedback showed variability that scaled with object size, as expected by Weber's law. However, this scaling diminished when sensory feedback was available, either directly present in the movement (Experiment 1) or in adjacent movements in the same block (Experiment 2). Moreover, a simple visual cue indicating performance error similarly reduced the scaling of variability with object size in manual size estimates, the perceptual counterpart of grasping responses (Experiment 3). These results support the hypothesis that sensory feedback modulates motor responses and their associated variability across both action and perception tasks. Post hoc analyses indicated that the reduced scaling of response variability with object size could be due to changes in motor mapping, the process mapping visual size estimates to motor outputs. Consequently, the absence of Weber's law in action responses might not indicate distinct visual processing but rather adaptive changes in motor strategies based on sensory feedback.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"10"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11654771/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142838998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impaired processing of spatiotemporal visual attention engagement deficits in Chinese children with developmental dyslexia.
Baojun Duan, Xiaoling Tang, Datao Wang, Yanjun Zhang, Guihua An, Huan Wang, Aibao Zhou
Journal of Vision, 24(13), 2. doi:10.1167/jov.24.13.2

Emerging evidence suggests that visuospatial attention plays an important role in reading among Chinese children with dyslexia. Numerous studies have shown that Chinese children with dyslexia have deficits in visuospatial attention orienting; however, their visual attention engagement deficits remain unclear. Therefore, we used a visual attention masking (AM) paradigm to characterize the spatiotemporal distribution of visual attention engagement in Chinese children with dyslexia. AM refers to impaired identification of the first (S1) of two rapidly and sequentially presented mask objects. In the present study, S1 was always centrally displayed, whereas the spatial position of S2 (left, middle, or right) and the S1-S2 interval were manipulated. The results revealed a specific temporal deficit of visual attentional masking in Chinese children with dyslexia: compared with chronological-age-matched controls (CA), the mean accuracy rate of the developmental dyslexia (DD) group in the middle spatial position was significantly lower than that in the left spatial position at a stimulus onset asynchrony (SOA) of 140 ms. Moreover, we observed spatial deficits of visual attentional masking across the three spatial positions: in the middle position, the AM effect in the DD group, relative to CA, was significantly larger at the 140-ms SOA than at the 250-ms and 600-ms SOAs. Our results suggest that Chinese children with dyslexia are significantly impaired in visual attentional engagement and that spatiotemporal visual attentional engagement may play a special role in Chinese reading.
{"title":"Impaired processing of spatiotemporal visual attention engagement deficits in Chinese children with developmental dyslexia.","authors":"Baojun Duan, Xiaoling Tang, Datao Wang, Yanjun Zhang, Guihua An, Huan Wang, Aibao Zhou","doi":"10.1167/jov.24.13.2","DOIUrl":"10.1167/jov.24.13.2","url":null,"abstract":"<p><p>Emerging evidence suggests that visuospatial attention plays an important role in reading among Chinese children with dyslexia. Additionally, numerous studies have shown that Chinese children with dyslexia have deficits in their visuospatial attention orienting; however, the visual attention engagement deficits in Chinese children with dyslexia remain unclear. Therefore, we used a visual attention masking (AM) paradigm to characterize the spatiotemporal distribution of visual attention engagement in Chinese children with dyslexia. AM refers to impaired identification of the first (S1) of two rapidly sequentially presented mask objects. In the present study, S1 was always centrally displayed, whereas the spatial position of S2 (left, middle, or right) and the S1-S2 interval were manipulated. The results revealed a specific temporal deficit of visual attentional masking in Chinese children with dyslexia. The mean accuracy rate for developmental dyslexia (DD) in the middle spatial position was significantly lower than that in the left spatial position at a stimulus onset asynchrony (SOA) of 140 ms, compared with chronological age (CA). Moreover, we further observed spatial deficits of visual attentional masking in the three different spatial positions. Specifically, in the middle spatial position, the AM effect of DD was significantly larger for the 140-ms SOA than for the 250-ms and 600-ms SOA compared with CA. Our results suggest that Chinese children with dyslexia are significantly impaired in visual attentional engagement and that spatiotemporal visual attentional engagement may play a special role in Chinese reading.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"2"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11620018/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Continuous temporal integration in the human visual system.
Michele Deodato, David Melcher
Journal of Vision, 24(13), 5. doi:10.1167/jov.24.13.5

The human visual system is continuously processing visual information to maintain a coherent perception of the environment. Temporal integration, a critical aspect of this process, allows for the combination of visual inputs over time, enhancing the signal-to-noise ratio and supporting high-level cognitive functions. Traditional methods for measuring temporal integration often require a large number of trials made up of a fixation period, stimuli separated by a blank interval, a single forced choice, and then a pause before the next trial. This trial structure potentially introduces fatigue and biases. Here, we introduce a novel continuous temporal integration (CTI) task designed to overcome these limitations by allowing free visual exploration and continuous mouse responses to dynamic stimuli. Fifty participants performed the CTI, which involved adjusting a red bar to indicate the point where a flickering sine wave grating became indistinguishable from noise. Our results, modeled by an exponential function, indicate a reliable temporal integration window of ∼100 ms. The CTI's design facilitates rapid and reliable measurement of temporal integration, demonstrating potential for broader applications across different populations and experimental settings. This task provides a more naturalistic and efficient approach to understanding this fundamental aspect of visual perception.
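The sketch below shows the kind of exponential fit the abstract mentions: performance saturates as the integration interval grows, and the fitted time constant estimates the integration window. The model form, data values, and units are assumptions for illustration, not the authors' analysis code.

# A minimal sketch: fit an exponential saturation and read off tau.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([20, 50, 100, 200, 400, 800])           # interval, ms
y = np.array([0.30, 0.55, 0.72, 0.85, 0.90, 0.91])   # illustrative sensitivity

def exp_saturation(t, y_max, tau):
    return y_max * (1.0 - np.exp(-t / tau))

(y_max, tau), _ = curve_fit(exp_saturation, t, y, p0=[1.0, 100.0])
print(f"integration window (tau) ≈ {tau:.0f} ms")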
{"title":"Continuous temporal integration in the human visual system.","authors":"Michele Deodato, David Melcher","doi":"10.1167/jov.24.13.5","DOIUrl":"10.1167/jov.24.13.5","url":null,"abstract":"<p><p>The human visual system is continuously processing visual information to maintain a coherent perception of the environment. Temporal integration, a critical aspect of this process, allows for the combination of visual inputs over time, enhancing the signal-to-noise ratio and supporting high-level cognitive functions. Traditional methods for measuring temporal integration often require a large number of trials made up of a fixation period, stimuli separated by a blank interval, a single forced choice, and then a pause before the next trial. This trial structure potentially introduces fatigue and biases. Here, we introduce a novel continuous temporal integration (CTI) task designed to overcome these limitations by allowing free visual exploration and continuous mouse responses to dynamic stimuli. Fifty participants performed the CTI, which involved adjusting a red bar to indicate the point where a flickering sine wave grating became indistinguishable from noise. Our results, modeled by an exponential function, indicate a reliable temporal integration window of ∼100 ms. The CTI's design facilitates rapid and reliable measurement of temporal integration, demonstrating potential for broader applications across different populations and experimental settings. This task provides a more naturalistic and efficient approach to understanding this fundamental aspect of visual perception.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"5"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11622157/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ocular-following responses to broadband visual stimuli of varying motion coherence.
Boris M Sheliga, Edmond J FitzGibbon, Christian Quaia, Richard J Krauzlis
Journal of Vision, 24(13), 4. doi:10.1167/jov.24.13.4
Manipulations of the strength of visual motion coherence have been widely used to study behavioral and neural mechanisms of visual motion processing. Here, we used a novel broadband visual stimulus to test how the strength of motion coherence in different spatial frequency (SF) bands impacts human ocular-following responses (OFRs). Synthesized broadband stimuli were used: a sum of one-dimensional vertical sine-wave gratings (SWs) whose SFs ranged from 0.0625 to 4 cpd in 0.05-log2(cpd) steps. Every 20 ms, a proportion of SWs (from 25% to 100%) shifted in the same direction by ¼ of their respective wavelengths (drifting), whereas the rest of the SWs were assigned a random phase (flicker), shifted by half of their respective wavelengths (counterphase), or remained stationary (static), yielding 25% to 100% motion coherence. As expected, the magnitude of the OFRs decreased as the proportion of non-drifting SWs and/or their contrast increased. The effects, however, were SF dependent. For flicker and static SWs, SFs in the range of 0.3 to 0.6 cpd were the most disruptive, whereas, with counterphase SWs, low SFs were the most disruptive. The data were well fit by a model in which an excitatory drive, determined by an SF-weighted sum of drifting components, was scaled by an SF-weighted contrast normalization term. Flicker, counterphase, and static SWs did not add to or directly impede the drive in the model, but they contributed to the contrast normalization process.
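The stimulus construction lends itself to a short sketch: sum vertical sine-wave gratings with SFs from 0.0625 to 4 cpd in 0.05-log2(cpd) steps, drift a chosen proportion by a quarter wavelength (a π/2 phase step) every frame, and re-randomize the phases of the rest (the flicker condition). Sampling, contrast, and field size below are illustrative assumptions.

# A minimal sketch of the broadband stimulus (flicker condition).
import numpy as np

rng = np.random.default_rng(0)
sfs = 2.0 ** np.arange(np.log2(0.0625), np.log2(4) + 1e-9, 0.05)  # cpd
x = np.linspace(0, 20, 1024)           # horizontal position, degrees (assumed)
phases = rng.uniform(0, 2 * np.pi, sfs.size)
coherence = 0.5                        # 50% of components drift
drifting = rng.random(sfs.size) < coherence

def frame(step):
    """Luminance profile at frame `step`; one step corresponds to 20 ms."""
    ph = phases.copy()
    ph[drifting] += step * (np.pi / 2)                    # 1/4-wavelength shift
    ph[~drifting] = rng.uniform(0, 2 * np.pi, (~drifting).sum())  # flicker
    return sum(np.sin(2 * np.pi * sf * x + p) for sf, p in zip(sfs, ph))

stimulus = frame(0)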
{"title":"Ocular-following responses to broadband visual stimuli of varying motion coherence.","authors":"Boris M Sheliga, Edmond J FitzGibbon, Christian Quaia, Richard J Krauzlis","doi":"10.1167/jov.24.13.4","DOIUrl":"10.1167/jov.24.13.4","url":null,"abstract":"<p><p>Manipulations of the strength of visual motion coherence have been widely used to study behavioral and neural mechanisms of visual motion processing. Here, we used a novel broadband visual stimulus to test how the strength of motion coherence in different spatial frequency (SF) bands impacts human ocular-following responses (OFRs). Synthesized broadband stimuli were used: a sum of one-dimensional vertical sine-wave gratings (SWs) whose SFs ranged from 0.0625 to 4 cpd in 0.05-log2(cpd) steps. Every 20 ms a proportion of SWs (from 25% to 100%) shifted in the same direction by ¼ of their respective wavelengths (drifting), whereas the rest of the SWs were assigned a random phase (flicker), shifted by half of their respective wavelengths (counterphase), or remained stationary (static): 25% to 100% motion coherence. As expected, the magnitude of the OFRs decreased as the proportion of non-drifting SWs and/or their contrast increased. The effects, however, were SF dependent. For flicker and static SWs, SFs in the range of 0.3 to 0.6 cpd were the most disruptive, whereas, with counterphase SWs, low SFs were the most disruptive. The data were well fit by a model that combined an excitatory drive determined by a SF-weighted sum of drifting components scaled by a SF-weighted contrast normalization term. Flicker, counterphase, or static SWs did not add to or directly impede the drive in the model, but they contributed to the contrast normalization process.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"4"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11627248/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Voluntary blinks and eye-widenings, but not spontaneous blinks, facilitate perceptual alternation during continuous flash suppression.
Ryoya Sato, Eiji Kimura
Journal of Vision, 24(13), 11. doi:10.1167/jov.24.13.11

The fact that blinks occur more often than necessary for ocular lubrication has led to the proposal that blinks are involved in altering some aspects of visual cognition. Previous studies have suggested that blinking can modulate the alternation of different visual interpretations of the same stimulus, that is, perceptual alternation in multistable perception. This study investigated whether and how different types of blinks, spontaneous and voluntary, interact with perceptual alternation in a multistable perception paradigm called continuous flash suppression. The results showed that voluntary blinking facilitated perceptual alternation, whereas spontaneous blinking did not. Moreover, voluntary eye-widening, as well as eyelid closing, facilitated perceptual alternation. Physical blackouts, which had timing and duration comparable to those of voluntary blinks, did not produce facilitatory effects. These findings suggest that the effects of voluntary eyelid movements are mediated by extraretinal processes and are consistent with previous findings that different types of blinks are at least partially mediated by different neurophysiological processes. Furthermore, perceptual alternation was also found to facilitate spontaneous blinking. These results indicate that eyelid movements and perceptual alternation interact reciprocally with each other.
{"title":"Voluntary blinks and eye-widenings, but not spontaneous blinks, facilitate perceptual alternation during continuous flash suppression.","authors":"Ryoya Sato, Eiji Kimura","doi":"10.1167/jov.24.13.11","DOIUrl":"10.1167/jov.24.13.11","url":null,"abstract":"<p><p>The fact that blinks occur more often than necessary for ocular lubrication has led to the proposal that blinks are involved in altering some aspects of visual cognition. Previous studies have suggested that blinking can modulate the alternation of different visual interpretations of the same stimulus, that is, perceptual alternation in multistable perception. This study investigated whether and how different types of blinks, spontaneous and voluntary, interact with perceptual alternation in a multistable perception paradigm called continuous flash suppression. The results showed that voluntary blinking facilitated perceptual alternation, whereas spontaneous blinking did not. Moreover, voluntary eye-widening, as well as eyelid closing, facilitated perceptual alternation. Physical blackouts, which had timing and duration comparable to those of voluntary blinks, did not produce facilitatory effects. These findings suggest that the effects of voluntary eyelid movements are mediated by extraretinal processes and are consistent with previous findings that different types of blinks are at least partially mediated by different neurophysiological processes. Furthermore, perceptual alternation was also found to facilitate spontaneous blinking. These results indicate that eyelid movements and perceptual alternation interact reciprocally with each other.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"11"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11684486/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142856580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Binocular integration of chromatic and luminance signals.
Daniel H Baker, Kirralise J Hansford, Federico G Segala, Anisa Y Morsi, Rowan J Huxley, Joel T Martin, Maya Rockman, Alex R Wade
Journal of Vision, 24(12), 7. doi:10.1167/jov.24.12.7
Much progress has been made in understanding how the brain combines signals from the two eyes. However, most of this work has involved achromatic (black and white) stimuli, and it is not clear if the same processes apply in color-sensitive pathways. In our first experiment, we measured contrast discrimination ("dipper") functions for four key ocular configurations (monocular, binocular, half-binocular, and dichoptic), for achromatic, isoluminant L-M and isoluminant S-(L+M) sine-wave grating stimuli (L: long-, M: medium-, S: short-wavelength). We find a similar pattern of results across stimuli, implying equivalently strong interocular suppression within each pathway. Our second experiment measured dichoptic masking within and between pathways using the method of constant stimuli. Masking was strongest within-pathway and weakest between S-(L+M) and achromatic mechanisms. Finally, we repeated the dipper experiment using temporal luminance modulations, which produced slightly weaker interocular suppression than for spatially modulated stimuli. We interpret our results in the context of a contemporary two-stage model of binocular contrast gain control, implemented here using a hierarchical Bayesian framework. Posterior distributions of the weight of interocular suppression overlapped with a value of 1 for all dipper data sets, and the model captured well the pattern of thresholds from all three experiments.
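For context, the sketch below implements a generic two-stage binocular contrast gain-control response of the kind the abstract refers to (in the spirit of Meese, Georgeson, & Baker, 2006): interocular suppression at the first stage, binocular summation, then a second nonlinearity. The parameter values, and whether this exact form matches the paper's implementation, are assumptions; w is the interocular suppression weight whose posterior the authors report overlapping 1.

# A minimal sketch of a two-stage binocular contrast gain-control model.
def two_stage_response(c_left, c_right, w=1.0, m=1.3, p=8.0, q=6.5,
                       S=1.0, Z=0.01):
    # Stage 1: each eye's signal, suppressed by the other eye (weight w).
    stage1_L = c_left ** m / (S + c_left + w * c_right)
    stage1_R = c_right ** m / (S + c_right + w * c_left)
    # Binocular summation, then a second accelerating/compressive stage.
    binsum = stage1_L + stage1_R
    return binsum ** p / (Z + binsum ** q)

# A fixed response increment to a contrast increment predicts the
# "dipper"-shaped discrimination functions measured in Experiment 1.
print(two_stage_response(10.0, 10.0))   # binocular presentation
print(two_stage_response(10.0, 0.0))    # monocular presentation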
{"title":"Binocular integration of chromatic and luminance signals.","authors":"Daniel H Baker, Kirralise J Hansford, Federico G Segala, Anisa Y Morsi, Rowan J Huxley, Joel T Martin, Maya Rockman, Alex R Wade","doi":"10.1167/jov.24.12.7","DOIUrl":"10.1167/jov.24.12.7","url":null,"abstract":"<p><p>Much progress has been made in understanding how the brain combines signals from the two eyes. However, most of this work has involved achromatic (black and white) stimuli, and it is not clear if the same processes apply in color-sensitive pathways. In our first experiment, we measured contrast discrimination (\"dipper\") functions for four key ocular configurations (monocular, binocular, half-binocular, and dichoptic), for achromatic, isoluminant L-M and isoluminant S-(L+M) sine-wave grating stimuli (L: long-, M: medium-, S: short-wavelength). We find a similar pattern of results across stimuli, implying equivalently strong interocular suppression within each pathway. Our second experiment measured dichoptic masking within and between pathways using the method of constant stimuli. Masking was strongest within-pathway and weakest between S-(L+M) and achromatic mechanisms. Finally, we repeated the dipper experiment using temporal luminance modulations, which produced slightly weaker interocular suppression than for spatially modulated stimuli. We interpret our results in the context of a contemporary two-stage model of binocular contrast gain control, implemented here using a hierarchical Bayesian framework. Posterior distributions of the weight of interocular suppression overlapped with a value of 1 for all dipper data sets, and the model captured well the pattern of thresholds from all three experiments.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 12","pages":"7"},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11556357/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142582746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}