Integration of auditory and visual cues in spatial navigation under normal and impaired viewing conditions.
Corey S Shayman, Maggie K McCracken, Hunter C Finney, Peter C Fino, Jeanine K Stefanucci, Sarah H Creem-Regehr
Journal of Vision, October 3, 2024. doi:10.1167/jov.24.11.7

Auditory landmarks can contribute to spatial updating during navigation with vision. Although large inter-individual differences have been identified in how navigators combine auditory and visual landmarks, it is still unclear under what circumstances audition is used. Further, whether individuals optimally combine auditory and visual cues to reduce perceptual uncertainty (variability) has not been well documented. Here, we test audiovisual integration during spatial updating in a virtual navigation task. In Experiment 1, 24 individuals with normal sensory acuity completed a triangular homing task with visual landmarks, auditory landmarks, or both. Participants also experienced a fourth condition with a covert spatial conflict in which the auditory landmarks were rotated relative to the visual ones. Participants generally relied more on visual landmarks than on auditory landmarks and were no more accurate with multisensory cues than with vision alone. In Experiment 2, a new group of 24 individuals completed the same task but with simulated low vision, in the form of a blur filter, to increase visual uncertainty. Again, participants relied more on visual landmarks than on auditory ones, and no multisensory benefit emerged. Participants navigating with blur did not rely more on their hearing than the group that navigated with normal vision. These results support previous research showing that a single sensory modality may be sufficient for spatial updating, even under impaired viewing conditions. Future research could investigate the task- and participant-specific factors that lead to different strategies for combining auditory and visual cues.

Color-binding errors induced by modulating effects of the preceding stimulus on onset rivalry.
Satoru Abe, Eiji Kimura
Journal of Vision, October 3, 2024. doi:10.1167/jov.24.11.10

Onset rivalry can be modulated by a preceding stimulus with features similar to the rivalrous test stimuli. In this study, we used this modulating effect to investigate the integration of color and orientation during onset rivalry using equiluminant chromatic gratings. Specifically, we explored whether the modulating effect decouples color and orientation in chromatic gratings, producing a percept distinct from either of the rivalrous gratings. The results demonstrated color-binding errors in which rivalrous green-gray clockwise and red-gray counterclockwise gratings yield the percept of a bichromatic, red-green grating with either clockwise or counterclockwise orientation. These errors were observed at a brief test duration (30 ms), with both monocular and binocular presentations of the preceding stimulus. The specific combination of color and orientation in the preceding stimulus was not critical for inducing color-binding errors, provided it was composed of the test color and orientation. We also found a notable covariation between the perception of color-binding errors and exclusive dominance: the perceived orientation in color-binding errors generally matched that in exclusive dominance. This finding suggests that the mechanisms underlying color-binding errors may be related to, or partially overlap with, those determining exclusive dominance. The errors can be explained by the decoupling of color and orientation in the representation of the suppressed grating, with the freed color binding to the dominant grating and producing an erroneously perceived bichromatic grating.

Deconstructing the frame effect.
Mohammad Shams, Peter J Kohler, Patrick Cavanagh
Journal of Vision, October 3, 2024. doi:10.1167/jov.24.11.8

The perception of an object's location is profoundly influenced by the surrounding dynamics. This is dramatically demonstrated by the frame effect, in which a moving frame induces substantial shifts in the perceived location of objects that flash within it. In this study, we examined the factors contributing to the large magnitude of this effect. Across three experiments, we manipulated the number of probes, the dynamics of the frame, and the spatiotemporal relationships between the probes and the frame. We found that the presence of multiple probes amplified the position shift, whereas the accumulation of the frame effect over repeated motion cycles was minimal. Notably, an oscillating frame generated more pronounced effects than a unidirectionally moving frame. Furthermore, the spatiotemporal distance between the frame and the probe played a pivotal role, with larger shifts observed near the leading edge of the frame. Interestingly, although larger frames produced stronger position shifts, the maximum shift occurred at almost the same distance relative to the frame's center across all tested sizes. Our findings suggest that the number of probes, the frame's size and dynamics, and the relative probe-frame distance collectively determine the magnitude of the position shift.

Implied occlusion and subset underestimation contribute to the weak-outnumber-strong numerosity illusion.
Eliana G Dellinger, Katelyn M Becker, Frank H Durgin
Journal of Vision, October 3, 2024. doi:10.1167/jov.24.11.14

Four experiments, with a total of 712 participants, investigated the basis of a recently reported numerosity illusion called weak-outnumber-strong (WOS): when equal numbers of white and gray dots (e.g., 50 of each) are intermixed against a darker gray background, the gray dots seem much more numerous than the white. The new results support two principles. First, subsets of mixtures are generally underestimated; in mixtures of red and green dots, both sets are underestimated (in a matching task), just as the white dots are in the WOS illusion. Second, the gray dots seem to be perceptually filled in as if partially occluded by the brighter white dots. The second principle is supported by manipulations of depth from both pictorial cues (partial occlusion) and binocular cues (stereopsis): the illusion is abolished when the gray dots are depicted as closer than the white dots but remains strong when they are depicted as lying behind them. Finally, an online test of a prior false-floor hypothesis suggests that manipulations of relative contrast may affect the segmentation process that produces the visual bias known as subset underestimation.

Serial dependencies for externally and self-generated stimuli.
Clara Fritz, Antonella Pomè, Eckart Zimmermann
Journal of Vision, October 3, 2024. doi:10.1167/jov.24.11.1

Our senses are constantly exposed to external stimulation, part of which is produced by our own movement, such as visual motion on the retina or tactile sensations from touch. Sensations caused by our own movements appear attenuated. In addition, the interpretation of current stimuli is influenced by previous experience, a phenomenon known as serial dependence. Here we investigated how sensory attenuation and serial dependencies interact. In Experiment 1, we showed that temporal predictability causes sensory attenuation. In Experiment 2, we isolated temporal predictability in a visuospatial localization task. Attenuated stimuli were subject to serial dependencies, but the magnitude of the effect varied: it was greater when the certainty of the previous trial was equal to or greater than that of the current one. Experiment 3 examined the influence of sensory attenuation on serial dependencies. Participants localized a briefly flashed stimulus after pressing a button (self-generated) or without a button press (externally generated). Serial dependencies were stronger in self-generated than in externally generated trials when the two were presented in alternation, but not when they were presented in blocks. We conclude that the relative uncertainty of stimulation across trials determines the strength of serial dependencies.

Ensemble percepts of colored targets among distractors are influenced by hue similarity, not categorical identity.
Lari S Virtanen, Toni P Saarela, Maria Olkkonen
Journal of Vision, October 3, 2024. doi:10.1167/jov.24.11.12

Color can be used to group similar elements, and ensemble percepts of color can be formed for such groups. In real-life settings, however, elements of similar color are often spatially interspersed among other elements and seen against a background. Forming an ensemble percept of these elements requires segmenting the correct color signals for integration. Can the human visual system do this? We examined whether observers can extract the mean hue of a target hue distribution presented among distractors, and whether a color-category boundary between target and distractor hues facilitates ensemble hue formation. Observers were able to selectively judge the target ensemble's mean hue, but the presence of distractor hues added noise to the ensemble estimates and caused perceptual biases. The more similar the distractor hues were to the target hues, the noisier the estimates became, possibly reflecting incomplete or inaccurate segmentation of the two hue ensembles. Asymmetries between nominally equidistant distractors and substantial individual variability, however, point to additional factors beyond simple mixing of the target and distractor distributions. Finally, we found no evidence for categorical facilitation in selective ensemble hue formation.

Embeddedness of Earth's gravity in visual perception.
Abdul-Rahim Deeb, Fulvio Domini
Journal of Vision, October 3, 2024. doi:10.1167/jov.24.11.4

Falling objects are commonplace in daily life, and intercepting or avoiding them requires precise perceptual judgments. We argue that human judgments of projectile motion arise from the interplay between sensory information and predictions constrained by Newtonian mechanics. Our study investigates how individuals perceive falling objects under various gravitational conditions, aiming to understand the role of internalized gravity in visual perception. By meticulously controlling the available information, we demonstrate that these phenomena cannot be explained by simple heuristics or by representational momentum alone. Instead, we find that perceptual judgments (n = 11, 13, 14, and 11 in Experiments 1-4, respectively) are influenced by a combination of sensory information and gravity predictions, highlighting the role of internalized physical constraints in the perception of projectile motion.

The dichoptic contrast ordering test: A method for measuring the depth of binocular imbalance.
Alex S Baldwin, Marie-Céline Lorenzini, Annabel Wing-Yan Fan, Robert F Hess, Alexandre Reynaud
Journal of Vision, October 3, 2024. doi:10.1167/jov.24.11.2

In binocular vision, the relative strength of the input from the two eyes can have a significant functional impact. These inputs are typically balanced, but in some conditions (e.g., amblyopia) one eye dominates the other. To quantify imbalances in binocular vision, we developed the Dichoptic Contrast Ordering Test (DiCOT). Implemented on a tablet device, the program uses rankings of the perceived contrast of dichoptically presented stimuli to find a scaling factor that balances the two eyes. We measured how physical interventions applied to one eye affect DiCOT measurements, including neutral-density (ND) filters, Bangerter filters, and optical blur introduced by a +3-diopter (D) lens, and compared the results to those from the Dichoptic Letter Test (DLT). Both the DiCOT and the DLT showed excellent test-retest reliability; however, the imbalances introduced by the interventions were larger in the DLT. Rescaling the DiCOT results from individual conditions brought the two methods into good agreement, although the adjustment required for the +3-D lens condition differed considerably from that for the ND and Bangerter filters. Our results indicate that the DiCOT and the DLT measure partially distinct aspects of binocular imbalance, supporting the simultaneous use of both measures in future studies.

Measurements of chromatic adaptation and luminous efficiency while wearing colored filters.
Andrew J Coia, Joseph M Arizpe, Peter A Smith, Thomas K Kuyk, Julie A Lovell
Journal of Vision, October 3, 2024. doi:10.1167/jov.24.11.9

The visual system adapts dynamically to stabilize perception across widely varying illuminations. Such adaptation allows the colors of objects to appear constant despite changes in the spectrum of the illumination. Wearing colored filters also alters spectral content, but the alteration can be more extreme than typically encountered in nature, presenting a unique challenge to color-constancy mechanisms. Although chromatic adaptation is known to be affected by surrounding spatial context, a recent study reported a gradual temporal adaptation to colored filters: colors initially appear strongly shifted but, over hours of wear, are perceived as closer to their unfiltered appearance. It is not yet clear whether the luminance system adapts spatially and temporally like the chromatic system. To address this, we measured spatial and temporal adaptation to a colored filter with tasks that assess chromatic and luminance adaptation separately. Before and for 1 hour after putting on a pair of colored filters, participants made achromatic settings and heterochromatic flicker photometry (HFP) settings to measure chromatic and luminance adaptation, respectively. The results showed significant chromatic adaptation, with achromatic settings moving closer to baseline over 1 hour of filter wear, and greater adaptation with spatial context. Conversely, there was no significant luminance adaptation: HFP matches fell close to what was predicted photometrically. The results are discussed in the context of prior studies of chromatic and luminance adaptation.

Seeing on the fly: Physiological and behavioral evidence show that space-to-space representation and processing enable fast and efficient performance by the visual system.
Moshe Gur
Journal of Vision, October 3, 2024. doi:10.1167/jov.24.11.11

When we view the world, our eyes saccade quickly between points of interest. Even when fixating a target, our eyes are not completely at rest but execute small fixational eye movements (FEMs). The fact that vision is not blurred by this ever-present jitter has seemingly motivated an increasingly popular theory that denies the visual system's reliance on purely spatial processing in favor of a space-to-time mechanism generated by the eye drifting across the image. On this view, FEMs are not detrimental but essential to good visibility. However, the space-to-time theory is incompatible with physiological data showing that all information is conveyed by the short neural volleys generated when the eyes land on a target, and with our faithful perception of briefly displayed objects, for which FEMs can have no effect. A further difficulty in rejecting image representation by the locations and nature of responding cells in favor of a time code is that somewhere, somehow, this code must be decoded into a parallel spatial one when it reaches perception. Thus, in addition to the implausibility of generating meaningful responses during retinal drift, the space-to-time hypothesis replaces efficient point-to-point parallel transmission with a cumbersome, delayed, space-to-time-to-space process. A novel physiological framework is presented here in which the visual system's ability to process information quickly is mediated by the short, powerful neural volleys generated by landing saccades. These volleys are necessary and sufficient for normal perception without any contribution from FEMs. This mechanism enables our excellent perception of brief stimuli and explains why vision is not blurred by FEMs: they do not generate useful information.