Pub Date: 2024-05-20 | DOI: 10.1016/j.visres.2024.108433
Maria Dvoeglazova, Tadamasa Sawada
Rectangularity and perpendicularity of contours are important properties of 3D shape for the visual system, which can use them as a priori constraints for perceiving shape veridically. The present article provides a comprehensive review of prior studies of the perception of rectangularity and perpendicularity and discusses their effects on 3D shape perception from both theoretical and empirical approaches. It has been shown that the visual system is biased to perceive a rectangular 3D shape from a 2D image. We thought that this bias might be attributable to the likelihood of a rectangular interpretation, but this hypothesis was not supported by the results of our psychophysical experiment. Note that the perception of a rectangular shape cannot be explained solely on the basis of geometry: a rectangular shape is perceived even from an image that is inconsistent with a rectangular interpretation. To address this issue, we developed a computational model that can recover a rectangular shape from an image of a parallelepiped. The model allows the recovered shape to be slightly inconsistent with the image so that it satisfies the a priori constraints of maximum compactness and minimum surface area. This model captures some of the phenomena associated with the perception of rectangular shapes that were reported in prior studies. This finding suggests that rectangularity contributes to shape perception when it is combined with additional constraints.
{"title":"A role of rectangularity in perceiving a 3D shape of an object","authors":"Maria Dvoeglazova , Tadamasa Sawada","doi":"10.1016/j.visres.2024.108433","DOIUrl":"https://doi.org/10.1016/j.visres.2024.108433","url":null,"abstract":"<div><p>Rectangularity and perpendicularity of contours are important properties of 3D shape for the visual system and the visual system can use them as<!--> <em>a priori</em> <!-->constraints for perceiving<!--> <!-->shape veridically. The present<!--> <!-->article provides a comprehensive review of<!--> <!-->prior<!--> <!-->studies<!--> <!-->of<!--> <!-->the perception of rectangularity and perpendicularity and<!--> <!-->it<!--> <!-->discusses<!--> <!-->their effects on<!--> <!-->3D shape perception from both theoretical and empirical<!--> <!-->approaches. It has been shown that the visual system is biased to perceive a rectangular 3D shape from a 2D image. We thought that this bias might be attributable to the likelihood of a rectangular interpretation but this hypothesis is not supported by the results of our psychophysical experiment. Note that the perception of<!--> <!-->a rectangular shape cannot be explained solely on the basis of geometry. A rectangular shape is perceived from an image that is inconsistent with a rectangular interpretation. To address this<!--> <!-->issue, we developed a computational model that can recover a rectangular shape from an image of a parallelopiped. The model allows the recovered shape to be slightly inconsistent so that the recovered shape satisfies the <em>a priori</em> constraints of maximum compactness and minimal surface area. This model captures some<!--> <!-->of the<!--> <!-->phenomena<!--> <!-->associated with<!--> <!-->the perception of the rectangular shape that were reported in<!--> <!-->prior<!--> <!-->studies. This finding suggests that rectangularity works for shape perception by incorporating<!--> <!-->it<!--> <!-->with some<!--> <!-->additional<!--> <!-->constraints.</p></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"221 ","pages":"Article 108433"},"PeriodicalIF":1.8,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141072559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-13 | DOI: 10.1016/j.visres.2024.108424
Christof Elias Topfstedt, Luca Wollenberg, Thomas Schenk
Visual attention is typically shifted toward the targets of upcoming saccadic eye movements. This observation is commonly interpreted in terms of an obligatory coupling between attentional selection and oculomotor programming. Here, we investigated whether this coupling is facilitated by a habitual expectation of spatial congruence between visual and motor targets. To this end, we conducted a dual-task (i.e., concurrent saccade task and visual discrimination task) experiment in which male and female participants were trained to either anticipate spatial congruence or incongruence between a saccade target and an attention probe stimulus. To assess training-induced effects of expectation on premotor attention allocation, participants subsequently completed a test phase in which the attention probe position was randomized. Results revealed that discrimination performance was systematically biased toward the expected attention probe position, irrespective of whether this position matched the saccade target or not. Overall, our findings demonstrate that visual attention can be substantially decoupled from ongoing oculomotor programming and suggest an important role of habitual expectations in the attention-action coupling.
{"title":"Training enables substantial decoupling of visual attention and saccade preparation","authors":"Christof Elias Topfstedt , Luca Wollenberg , Thomas Schenk","doi":"10.1016/j.visres.2024.108424","DOIUrl":"10.1016/j.visres.2024.108424","url":null,"abstract":"<div><p>Visual attention is typically shifted toward the targets of upcoming saccadic eye movements. This observation is commonly interpreted in terms of an obligatory coupling between attentional selection and oculomotor programming. Here, we investigated whether this coupling is facilitated by a habitual expectation of spatial congruence between visual and motor targets. To this end, we conducted a dual-task (i.e., concurrent saccade task and visual discrimination task) experiment in which male and female participants were trained to either anticipate spatial congruence or incongruence between a saccade target and an attention probe stimulus. To assess training-induced effects of expectation on premotor attention allocation, participants subsequently completed a test phase in which the attention probe position was randomized. Results revealed that discrimination performance was systematically biased toward the expected attention probe position, irrespective of whether this position matched the saccade target or not. Overall, our findings demonstrate that visual attention can be substantially decoupled from ongoing oculomotor programming and suggest an important role of habitual expectations in the attention-action coupling.</p></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"221 ","pages":"Article 108424"},"PeriodicalIF":1.8,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0042698924000683/pdfft?md5=ef8d8e46b93a589da04a1a017591cff1&pid=1-s2.0-S0042698924000683-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140923268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-10 | DOI: 10.1016/j.visres.2024.108423
Charlotte Falkenberg, Franz Faul
The extent to which hue, saturation, and transmittance of thin light-transmitting layers are perceived as constant when the illumination changes (transparent layer constancy, TLC) has previously been investigated with simple stimuli in asymmetric matching tasks. In this task, a target filter is presented under one illumination and a second filter is matched under a second illumination. Although two different illuminations are applied in the stimulus generation, there is no guarantee that the stimulus will be interpreted appropriately by the visual system. In previous work, we found a higher degree of TLC when both illuminations were presented alternately than when they were presented simultaneously, which could be explained, for example, by an increased plausibility of an illumination change. In this work, we test whether TLC can also be increased with simultaneous presentation when additional cues make it more likely that the filter belongs to a particular illumination context. To this end, we presented filters in differently lit areas of complex, naturalistically rendered 3D scenes containing different types of cues to the prevailing illumination, such as scene geometry, object shading, and cast shadows. We found higher degrees of TLC in such complex scenes than in colorimetrically similar simple 2D color mosaics, which is consistent with the results of similar studies in the area of color constancy. To test which of the illumination cues available in the scenes are actually used, the different types of cues were successively removed from the naturalistically rendered complex scene. A total of eight levels of scene complexity were examined. As expected, TLC decreased the more cues were removed. Object shading and illumination gradients due to cast shadows were both found to have a positive effect on TLC. A second filter had a small positive effect on TLC when added in strongly reduced scenes, but not in the complex scenes that already provide many cues about the illumination context of the filter.
{"title":"Transparent layer constancy improves with increased naturalness of the scene","authors":"Charlotte Falkenberg, Franz Faul","doi":"10.1016/j.visres.2024.108423","DOIUrl":"https://doi.org/10.1016/j.visres.2024.108423","url":null,"abstract":"<div><p>The extent to which hue, saturation, and transmittance of thin light-transmitting layers are perceived as constant when the illumination changes (<em>transparent layer constancy</em>, TLC) has previously been investigated with simple stimuli in asymmetric matching tasks. In this task, a target filter is presented under one illumination and a second filter is matched under a second illumination. Although two different illuminations are applied in the stimulus generation, there is no guarantee that the stimulus will be interpreted appropriately by the visual system. In previous work, we found a higher degree of TLC when both illuminations were presented alternately than when they were presented simultaneously, which could be explained, for example, by an increased plausibility of an illumination change. In this work, we test whether TLC can also be increased in simultaneous presentation when the filter’s belonging to a particular illumination context is made more likely by additional cues. To this end, we presented filters in differently lit areas of complex, naturalistically rendered 3D scenes containing different types of cues to the prevailing illumination, such as scene geometry, object shading, and cast shadows. We found higher degrees of TLC in such complex scenes than in colorimetrically similar simple 2D color mosaics, which is consistent with the results of similar studies in the area of color constancy. To test which of the illumination cues available in the scenes are actually used, the different types of cues were successively removed from the naturalistically rendered complex scene. A total of eight levels of scene complexity were examined. As expected, TLC decreased the more cues were removed. Object shading and illumination gradients due to shadow cast were both found to have a positive effect on TLC. A second filter had a small positive effect on TLC when added in strongly reduced scenes, but not in the complex scenes that already provide many cues about the illumination context of the filter.</p></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"221 ","pages":"Article 108423"},"PeriodicalIF":1.8,"publicationDate":"2024-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0042698924000671/pdfft?md5=e8db80ddadb6f0b3b906a6eb1b041552&pid=1-s2.0-S0042698924000671-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140901204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-07 | DOI: 10.1016/j.visres.2024.108422
Joshua A. Solomon, Fintan Nagle, Christopher W. Tyler
We used the psychophysical summation paradigm to reveal some spatial characteristics of the mechanism responsible for detecting a motion-defined visual target in central vision. There has been much previous work on spatial summation for motion detection and direction discrimination, but none has assessed it in terms of the velocity threshold or used velocity noise to provide a measure of the efficiency of the velocity processing mechanism. Motion-defined targets were centered within square fields of randomly selected gray levels. The motion was produced within the disk-shaped target region by shifting the pixels rightwards for 0.2 s. The uniform target motion was perturbed by Gaussian motion noise in horizontal strips of 16 pixels. Independent variables were field size, the diameter of the disk target, and the variance of an independent perturbation added to the (signed) velocity of each 16-pixel strip. The dependent variable was the threshold velocity for target detection. Velocity thresholds formed swoosh-shaped (descending, then ascending) functions of target diameter. Minimum values were obtained when targets subtended approximately 2 degrees of visual angle. The data were fit with a continuum of models, extending from the theoretically ideal observer through various inefficient and noisy refinements thereof. In particular, we introduce the concept of sparse sampling to account for the relative inefficiency of the velocity thresholds. The best fits were obtained from a model observer whose responses were determined by comparing the velocity profile of each stimulus with a limited set of sparsely sampled “DoG” templates, each of which is the product of a random binary array and the difference between two 2-D Gaussian density functions.
{"title":"Spatial summation for motion detection","authors":"Joshua A. Solomon , Fintan Nagle , Christopher W. Tyler","doi":"10.1016/j.visres.2024.108422","DOIUrl":"https://doi.org/10.1016/j.visres.2024.108422","url":null,"abstract":"<div><p>We used the psychophysical summation paradigm to reveal some spatial characteristics of the mechanism responsible for detecting a motion-defined visual target in central vision. There has been much previous work on spatial summation for motion detection and direction discrimination, but none has assessed it in terms of the velocity threshold or used velocity noise to provide a measure of the efficiency of the velocity processing mechanism. Motion-defined targets were centered within square fields of randomly selected gray levels. The motion was produced within the disk-shaped target region by shifting the pixels rightwards for 0.2 s. The uniform target motion was perturbed by Gaussian motion noise in horizontal strips of 16 pixels. Independent variables were field size, the diameter of the disk target, and the variance of an independent perturbation added to the (signed) velocity of each 16-pixel strip. The dependent variable was the threshold velocity for target detection. Velocity thresholds formed swoosh-shaped (descending, then ascending) functions of target diameter. Minimum values were obtained when targets subtended approximately 2 degrees of visual angle. The data were fit with a continuum of models, extending from the theoretically ideal observer through various inefficient and noisy refinements thereof. In particular, we introduce the concept of sparse sampling to account for the relative inefficiency of the velocity thresholds. The best fits were obtained from a model observer whose responses were determined by comparing the velocity profile of each stimulus with a limited set of sparsely sampled “DoG” templates, each of which is the product of a random binary array and the difference between two 2-D Gaussian density functions.</p></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"221 ","pages":"Article 108422"},"PeriodicalIF":1.8,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S004269892400066X/pdfft?md5=4d383e7288048973388d21b75e0398c0&pid=1-s2.0-S004269892400066X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140844313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-04 | DOI: 10.1016/j.visres.2024.108402
Frank Schaeffel, Barbara Swiatczak
Studies in animal models and humans have shown that refractive state is optimized during postnatal development by a closed-loop negative feedback system that uses retinal image defocus as an error signal, a mechanism called emmetropization. The sensor that detects defocus and its sign resides in the retina itself. The retina and/or the retinal pigment epithelium (RPE) presumably releases biochemical messengers to change choroidal thickness and modulate the growth rates of the underlying sclera. A central question arises: if emmetropization operates as a closed-loop system, why does it not stop myopia development? Recent experiments in young human subjects have shown that (1) the emmetropic retina can perfectly distinguish between real positive defocus and simulated defocus, and trigger transient axial eye shortening or elongation, respectively. (2) Strikingly, the myopic retina has a reduced ability to inhibit eye growth when positive defocus is imposed. (3) The bi-directional response of the emmetropic retina is elicited by low spatial frequency information below 8 cyc/deg, which makes it unlikely that optical higher-order aberrations play a role. (4) The retinal mechanism for detecting the sign of defocus involves a comparison of defocus blur at the blue (S-cone) and red (L- and M-cone) ends of the spectrum but, again, the myopic retina is not responsive, at least not in short-term experiments. This suggests that the myopic retina cannot fully trigger the inhibitory arm of the emmetropization feedback loop. As a result, the feedback loop is effectively opened and myopia development becomes "open-loop".
{"title":"Mechanisms of emmetropization and what might go wrong in myopia","authors":"Frank Schaeffel , Barbara Swiatczak","doi":"10.1016/j.visres.2024.108402","DOIUrl":"https://doi.org/10.1016/j.visres.2024.108402","url":null,"abstract":"<div><p>Studies in animal models and humans have shown that refractive state is optimized during postnatal development by a closed-loop negative feedback system that uses retinal image defocus as an error signal, a mechanism called emmetropization. The sensor to detect defocus and its sign resides in the retina itself. The retina and/or the retinal pigment epithelium (RPE) presumably releases biochemical messengers to change choroidal thickness and modulate the growth rates of the underlying sclera. A central question arises: if emmetropization operates as a closed-loop system, why does it not stop myopia development? Recent experiments in young human subjects have shown that (1) the emmetropic retina can perfectly distinguish between real positive defocus and simulated defocus, and trigger transient axial eye shortening or elongation, respectively. (2) Strikingly, the myopic retina has reduced ability to inhibit eye growth when positive defocus is imposed. (3) The bi-directional response of the emmetropic retina is elicited with low spatial frequency information below 8 cyc/deg, which makes it unlikely that optical higher-order aberrations play a role. (4) The retinal mechanism for the detection of the sign of defocus involves a comparison of defocus blur in the blue (S-cone) and red end of the spectrum (L + M−cones) but, again, the myopic retina is not responsive, at least not in short-term experiments. This suggests that it cannot fully trigger the inhibitory arm of the emmetropization feedback loop. As a result, with an open feedback loop, myopia development becomes “open-loop”.</p></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"220 ","pages":"Article 108402"},"PeriodicalIF":1.8,"publicationDate":"2024-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0042698924000464/pdfft?md5=267b452a6f0c8cc848f39fad61988039&pid=1-s2.0-S0042698924000464-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140823836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-18 | DOI: 10.1016/j.visres.2024.108396
Rinku Sarkar, Kiana Zanetti, Alexandre Reynaud, Frederick A.A. Kingdom
Recent studies suggest that binocular adding S+ and differencing S- channels play an important role in binocular vision. To test for such a role in the context of binocular contrast detection and binocular summation, we employed a surround masking paradigm consisting of a central target disk surrounded by a mask annulus. All stimuli were horizontally oriented 0.5 c/deg sinusoidal gratings. Correlated stimuli were identical in interocular spatial phase while anticorrelated stimuli were opposite in interocular spatial phase. There were four target conditions: monocular left eye, monocular right eye, binocular correlated and binocular anticorrelated, and three surround mask conditions: no surround, binocularly correlated and binocularly anticorrelated. We observed consistent elevation of detection thresholds for monocular and binocular targets across the two binocular surround mask conditions. In addition, we found an interaction between the type of surround and the type of binocular target: both detection and summation were relatively enhanced by surround masks and targets with opposite interocular phase relationships and reduced by surround masks and targets with the same interocular phase relationships. The data were reasonably well accounted for by a model of binocular combination termed MAX (S+S-), in which the decision variable is the probability summation of modeled S+ and S- channel responses, with a free parameter determining the relative gains of the two channels. Our results support the existence of two channels involved in binocular combination, S+ and S-, whose relative gains are adjustable by surround context.
Title: "Surround masking reveals binocular adding and differencing channels" (Vision Research 219, Article 108396)
Pub Date: 2024-04-18 | DOI: 10.1016/j.visres.2024.108414
Pablo A. Barrionuevo, María L. Sandoval Salinas, José M. Fanchini
{"title":"Corrigendum to “Are ipRGCs involved in human color vision? Hints from physiology, psychophysics, and natural image statistics” [Vis. Res. 217 (2024) 108378]","authors":"Pablo A. Barrionuevo , María L. Sandoval Salinas , José M. Fanchini","doi":"10.1016/j.visres.2024.108414","DOIUrl":"https://doi.org/10.1016/j.visres.2024.108414","url":null,"abstract":"","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"219 ","pages":"Article 108414"},"PeriodicalIF":1.8,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0042698924000580/pdfft?md5=7ed065bf25d3391b55fb3ea1bc5d564b&pid=1-s2.0-S0042698924000580-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140605457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-15 | DOI: 10.1016/j.visres.2024.108406
Yannan Su, Zhuanghua Shi, Thomas Wachtler
Incorporating statistical characteristics of stimuli in perceptual processing can be highly beneficial for reliable estimation from noisy sensory measurements but may generate perceptual bias. According to Bayesian inference, perceptual biases arise from the integration of internal priors with noisy sensory inputs. In this study, we used a Bayesian observer model to derive biases and priors in hue perception based on discrimination data for hue ensembles with varying levels of chromatic noise. Our results showed that discrimination thresholds for isoluminant stimuli with hue defined by azimuth angle in cone-opponent color space exhibited a bimodal pattern, with lowest thresholds near a non-cardinal blue-yellow axis that aligns closely with the variation of natural daylights. Perceptual biases showed zero crossings around this axis, indicating repulsion away from yellow and attraction towards blue. These biases could be explained by the Bayesian observer model through a non-uniform prior with a preference for blue. Our findings suggest that visual processing takes advantage of knowledge of the distribution of colors in natural environments for hue perception.
{"title":"A Bayesian observer model reveals a prior for natural daylights in hue perception","authors":"Yannan Su , Zhuanghua Shi , Thomas Wachtler","doi":"10.1016/j.visres.2024.108406","DOIUrl":"https://doi.org/10.1016/j.visres.2024.108406","url":null,"abstract":"<div><p>Incorporating statistical characteristics of stimuli in perceptual processing can be highly beneficial for reliable estimation from noisy sensory measurements but may generate perceptual bias. According to Bayesian inference, perceptual biases arise from the integration of internal priors with noisy sensory inputs. In this study, we used a Bayesian observer model to derive biases and priors in hue perception based on discrimination data for hue ensembles with varying levels of chromatic noise. Our results showed that discrimination thresholds for isoluminant stimuli with hue defined by azimuth angle in cone-opponent color space exhibited a bimodal pattern, with lowest thresholds near a non-cardinal blue-yellow axis that aligns closely with the variation of natural daylights. Perceptual biases showed zero crossings around this axis, indicating repulsion away from yellow and attraction towards blue. These biases could be explained by the Bayesian observer model through a non-uniform prior with a preference for blue. Our findings suggest that visual processing takes advantage of knowledge of the distribution of colors in natural environments for hue perception.</p></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"220 ","pages":"Article 108406"},"PeriodicalIF":1.8,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0042698924000506/pdfft?md5=5ddb538c62f3f03af3f6f492638ca905&pid=1-s2.0-S0042698924000506-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140554435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}