Dynamic versus static facial color changes: Evidence for terminal color dominance in expression recognition.
Miku Shibusawa, Yuya Hasegawa, Hideki Tamura, Shigeki Nakauchi, Tetsuto Minami
Journal of Vision, 25(12):8, October 2025. doi:10.1167/jov.25.12.8

Facial color is closely linked to the perception of emotion, with reddish tones often associated with anger. Although previous studies have shown that static reddish facial tones enhance the perception of anger, whether dynamic changes in facial color further amplify this effect remains unclear. This study investigated how differences in facial color influence the perception of expression using a judgment task with morphed facial stimuli (fearful to angry). Participants evaluated facial expressions under two conditions: faces with dynamic color changes and faces with static colors. Experiment 1 compared redder (CIELAB a*+) faces to original-colored faces, and Experiment 2 compared greener (CIELAB a*-) faces to original-colored faces. Experiment 3 compared redder faces to original-colored faces under rapid facial color change conditions. None of the experiments revealed significant differences between dynamic and static facial colors; however, faces with a final reddish color (higher a* value) were more likely to be perceived as angry. These findings suggest that the final facial color influences the perception of anger regardless of whether the color change is dynamic or static. Our findings support the idea that the recognition of anger is modulated by the relationship between angry expressions and the color red. This study provides a new perspective on the interaction between facial expression and facial color, suggesting that the final facial color plays a significant role in facial expression judgment.
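As an illustration of the color manipulation described above, here is a minimal sketch of shifting an image along the CIELAB a* (red-green) axis. It assumes scikit-image for the color-space conversions; the authors' actual stimulus pipeline is not specified in the abstract.

```python
# Sketch: shift a face image along CIELAB a*. Illustrative only.
import numpy as np
from skimage import color

def shift_a_star(rgb_img, delta_a):
    """Shift an RGB image (floats in [0, 1]) along CIELAB a*.

    Positive delta_a pushes the image toward red (a*+), negative toward
    green (a*-), as in Experiments 1 and 2.
    """
    lab = color.rgb2lab(rgb_img)   # L* in [0, 100]; a*, b* signed
    lab[..., 1] += delta_a         # channel 1 is the a* (red-green) axis
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)

# Example: push a mid-gray patch toward red; +12 is an arbitrary demo value
patch = np.full((64, 64, 3), 0.5)
redder = shift_a_star(patch, 12.0)
```

A dynamic color-change condition could be rendered by interpolating delta_a from 0 to its final value across frames, whereas a static condition would hold the final value throughout.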
Visual-tactile shape perception in Argus II participants: The impact of prolonged device use and blindness on performance.
Stephanie Saltzmann, Noelle Stiles
Journal of Vision, 25(12):19, October 2025. doi:10.1167/jov.25.12.19

In Stiles et al. (2022), we showed that experienced Argus II retinal prosthesis users could accurately match visual and tactile shape stimuli (n = 6; ≤42 months of use). In this follow-up paper, we studied participants with longer device use (n = 5; ≤121 months of use) to evaluate visual and multisensory performance over prolonged visual restoration. With the combined cohort from both studies (N = 11), we found a significant positive correlation in multisensory performance up to the median duration of use (42 months), and a positive slope fit, but no significant correlation, at and beyond the median duration of use. There is therefore evidence for an initial performance improvement with Argus II use. Nevertheless, there is also evidence for substantial individual differences with more extended device use, supported by a participant self-evaluation/questionnaire. Variations in the frequency of device usage, device functionality, or neurostructural plasticity could contribute to these individual differences. We also found a negative correlation in Argus II participants (N = 11) between task performance and the duration of blindness, potentially indicating the deleterious effects of atrophy and neurostructural changes during blindness on visual restoration functionality. Finally, a d' analysis showed that the Argus II participants differed significantly from controls in sensitivity and bias in all tasks (including tactile-tactile matching), highlighting variation in shape task strategy. Overall, these data highlight individual differences in performance over prolonged device use and the negative impact of prolonged blindness on visual restoration.
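The d' analysis mentioned above can be illustrated with the textbook equal-variance signal detection computation. This is a generic sketch, with a log-linear correction chosen here for illustration, not the authors' reported procedure.

```python
# Sketch: sensitivity (d') and criterion (c) from trial counts.
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance SDT sensitivity d' and criterion c."""
    z = NormalDist().inv_cdf
    # Log-linear correction keeps rates away from 0 and 1.
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(h) - z(f), -0.5 * (z(h) + z(f))

d_prime, criterion = sdt_measures(40, 10, 12, 38)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```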
Detecting unnaturalness in biological motion with altered playback speeds.
Kan Misumi, Hiroshi Ueda, Yuichiro Nishiura, Katsumi Watanabe
Journal of Vision, 25(12):9, October 2025. doi:10.1167/jov.25.12.9
Prior research has shown that observers can tolerate large speed alterations in real-world videos. The present study examined how sensitive the human visual system is to changes in the kinematic information of human actions under altered playback speeds. We recorded four people walking at various speeds and produced point-light walker stimuli (standard stimuli), from which we also created test stimuli by speeding up or slowing down playback. In the experiments, two point-light walkers were presented sequentially: one standard stimulus and one test stimulus. Importantly, in each trial, the expected translation speed was kept constant (e.g., one walker recorded at 5.40 km/h paired with one recorded at 2.70 km/h but played at double speed), so the pair differed only in gait kinematics. Participants reported which stimulus was played at normal speed. We also manipulated orientation (upright vs. inverted) and the spatial scrambling of the point-light dots. The results showed that unnaturalness detection was performed above chance levels, confirming that kinematic inconsistencies provide a discernible cue. However, detection was reliable only when the speed alteration in a test stimulus was fairly large. Interestingly, we found little difference in performance among the upright-intact, inverted, and scrambled conditions. The lack of large detriments from inversion or scrambling suggests that participants did not rely strongly on global form or orientation cues to detect unnaturalness and points to a greater contribution of local motion signals.
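The playback-speed manipulation lends itself to a simple temporal-resampling sketch. The array shapes and linear interpolation below are assumptions for illustration; the authors' stimulus-generation code is not described in the abstract.

```python
# Sketch: alter the playback speed of a point-light sequence by resampling.
import numpy as np

def resample_playback(frames, factor):
    """Temporally resample a point-light sequence.

    frames: array (n_frames, n_points, 2) of dot coordinates.
    factor > 1 speeds playback up; factor < 1 slows it down.
    """
    n = frames.shape[0]
    t_new = np.arange(0.0, n - 1, factor)   # same frame rate, scaled duration
    out = np.empty((len(t_new),) + frames.shape[1:])
    for i, t in enumerate(t_new):
        lo = int(np.floor(t))
        w = t - lo
        out[i] = (1 - w) * frames[lo] + w * frames[min(lo + 1, n - 1)]
    return out

# A 2.70 km/h recording played at double speed translates like 5.40 km/h,
# while its gait kinematics remain those of slow walking.
walk = np.random.rand(120, 13, 2)           # placeholder: 120 frames, 13 dots
doubled = resample_playback(walk, 2.0)      # 60 output frames
```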
Attention-induced perceptual traveling waves in binocular rivalry.
João V X Cardoso, Hsin-Hung Li, David J Heeger, Laura Dugué
Journal of Vision, 25(12):18, October 2025. doi:10.1167/jov.25.12.18
Cortical traveling waves, smooth changes of phase over time across the cortical surface, have been proposed to modulate perception periodically as they travel through retinotopic cortex, yet little is known about the underlying computational principles. Here, we make use of binocular rivalry, a perceptual phenomenon in which perceptual (illusory) waves are perceived when a shift in dominance occurs between two rival images. First, we assessed these perceptual waves using psychophysics. Participants viewed a stimulus restricted to an annulus around fixation, with orthogonal orientations presented to each eye. The stimulus presented to one eye was of greater contrast, thus generating perceptual dominance. When a patch of greater contrast was flashed briefly at one position in the other eye, it created a change in dominance that started at the location of the flash and expanded progressively, like a wave, as the previously suppressed stimulus became dominant. We found that the duration of the perceptual propagation increased with both the distance traveled and the eccentricity of the annulus. Diverting attention away from the annulus drastically reduced the occurrence and the speed of the wave. Second, we developed a computational model of traveling waves in which competition between the neural representations of the two stimuli is driven by both attentional modulation and mutual inhibition. We found that the model captured the key features of wave propagation dynamics. Together, these findings provide new insights into the functional relevance of cortical traveling waves and offer a framework for further experimental investigation into their role in perception.
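To illustrate how local coupling can turn a local dominance switch into a propagating wave, here is a deliberately reduced front-propagation sketch on a ring (standing in for the annulus). It is a toy model, not the authors' attention-plus-mutual-inhibition model: the competition is collapsed into a single bistable dominance variable per position, with a small bias toward the flashed image.

```python
# Sketch: dominance-switch wave as a bistable front on a ring.
import numpy as np

N, dt, steps = 200, 0.05, 4000
h, D = 0.2, 1.0               # bias toward the flashed image; lateral coupling

# d[i] ~ -1: the initially dominant image wins at annulus position i;
# d[i] ~ +1: the flashed (previously suppressed) image wins.
d = -np.ones(N)
d[:5] = 1.0                   # local high-contrast flash switches one patch

for _ in range(steps):
    lap = np.roll(d, 1) - 2.0 * d + np.roll(d, -1)   # Laplacian on a ring
    d += dt * (d - d**3 + h + D * lap)               # bistable + diffusion

switched = d > 0              # positions the dominance wave has swept through
print(switched.sum(), "of", N, "positions switched")
```

In this reduced form, the bias h plays the role of the competitive advantage of the flashed stimulus; setting h toward zero slows the front, loosely analogous to the slower waves observed when attention is diverted.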
Facial feature representations in visual working memory: A reverse correlation study.
Crista Kuuramo, Ilmari Kurki
Journal of Vision, 25(12):23, October 2025. doi:10.1167/jov.25.12.23

For humans, storing facial identities in visual working memory (VWM) is crucial. Despite vast research on VWM, it is not well known how face identity and physical features (e.g., the eyes) are encoded in VWM representations. Moreover, although it is widely assumed that VWM face representations efficiently encode the subtle individual differences in facial features, this assumption has been difficult to investigate directly. Finally, it is not known how facial representations are forgotten: some facial features could be more susceptible to forgetting than others, or, conversely, all features could decay randomly. Here, we use a novel application of psychophysical reverse correlation that enables us to estimate how various facial features are weighted in VWM representations, how statistically efficient these representations are, and how they decay with time. We employed a same-different task with two retention times (1 s and 4 s) and morphed face stimuli, which allowed us to control the appearance of each facial feature independently. We found that only a few features, most prominently the eyes, had high weightings, suggesting that face VWM representations are based on storing a few key features. A classifier that used the stimulus information near-optimally showed weightings markedly similar to those of human participants, albeit weighting the eyes less and other features more, suggesting that human VWM face representations are surprisingly close to statistically optimal encoding. There was no difference in weightings between retention times; instead, internal noise increased, suggesting that forgetting in face VWM works as a random process rather than as a change in the remembered facial features.
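The reverse-correlation logic can be sketched as follows, with hypothetical feature names, a simulated observer, and a simple classification-image-style estimator; the authors' actual feature set and estimator are not given in the abstract.

```python
# Sketch: estimating per-feature weights by reverse correlation in a
# same-different task. Illustrative simulation only.
import numpy as np

rng = np.random.default_rng(0)
features = ["eyes", "eyebrows", "nose", "mouth", "jaw"]   # hypothetical set
n_trials = 2000

# Per-trial morph offsets applied to each feature of the probe face
offsets = rng.normal(0.0, 1.0, (n_trials, len(features)))
true_w = np.array([1.0, 0.5, 0.2, 0.4, 0.1])   # simulated observer's weights
evidence = offsets @ true_w + rng.normal(0.0, 1.0, n_trials)  # internal noise
responded_different = np.abs(evidence) > 1.0                  # decision rule

# Classification-image-style estimate: features whose large offsets drive
# "different" responses receive large estimated weights.
w_hat = (np.abs(offsets[responded_different]).mean(axis=0)
         - np.abs(offsets[~responded_different]).mean(axis=0))
w_hat /= np.abs(w_hat).sum()
print(dict(zip(features, np.round(w_hat, 3))))
```

Increasing the internal-noise standard deviation while holding true_w fixed mimics the paper's account of forgetting: the recovered weight profile stays the same while responses become less reliable.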
Eye movements during gaze perception.
Gernot Horstmann
Journal of Vision, 25(12):3, October 2025. doi:10.1167/jov.25.12.3

The gaze of other people is of interest to human observers, particularly in cases of direct gaze, that is, when it targets the observer. Gaze direction research has successfully clarified some of the mechanisms underlying gaze perception, but little is known about the active perception of direct gaze. Three eye-tracking experiments were conducted in which fixations and scan paths were recorded during a direct-gaze judgment task. Somewhat surprisingly, judgments were issued after a single eye fixation in only a minority of trials. In most cases, observers fixated both eyes of a looker model, sometimes even scanning them repeatedly. Fixation duration showed a consistent pattern: first fixations were longer when the task response followed immediately, and second fixations were shorter just before the response. A direct-gaze bias was tested but not found: visiting the second eye was even more likely when the first fixation was on a straight-gazing rather than an averted eye. There was no systematic pattern in the final fixation, contradicting the expectation that it would fall on the abducting (leading) eye. It is argued that overt looking behavior during direct gaze judgments reflects a cumulative decision process that spans consecutive fixations. Several factors may contribute to the high incidence of multiple-eye scans, including vergence and angle kappa. Vergence, in particular, is an important candidate, because the depth of fixation is ambiguous when only one eye is visible but can be constrained by probing the gaze direction of both eyes.
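A typical first step in this kind of scan-path analysis is assigning fixations to eye regions of interest. The ROI coordinates and radius below are hypothetical; the abstract does not describe the author's scoring procedure.

```python
# Sketch: label fixations as landing on the looker's left eye, right eye,
# or elsewhere, to summarize the per-trial scan pattern.
import numpy as np

# Hypothetical eye ROIs in screen pixels (centers and a common radius)
LEFT_EYE = np.array([420.0, 310.0])
RIGHT_EYE = np.array([560.0, 310.0])
RADIUS = 45.0

def label_fixations(fix_xy):
    """Label each fixation (x, y) as 'L', 'R', or 'other'."""
    labels = []
    for xy in np.asarray(fix_xy, dtype=float):
        if np.linalg.norm(xy - LEFT_EYE) < RADIUS:
            labels.append("L")
        elif np.linalg.norm(xy - RIGHT_EYE) < RADIUS:
            labels.append("R")
        else:
            labels.append("other")
    return labels

# A trial in which both eyes are scanned, with a return to the first eye
print(label_fixations([(430, 300), (555, 315), (425, 305)]))  # ['L', 'R', 'L']
```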
Relationships among lightness illusions uncovered by analyses of individual differences.
Yuki Kobayashi, Arthur G Shapiro
Journal of Vision, 25(12):14, October 2025. doi:10.1167/jov.25.12.14

Computational models that explain lightness/brightness illusions have been proposed. These models have typically been assessed using a simplistic criterion: the number of illusions each model correctly predicts from a test set. This method of evaluation assumes that each illusion is independent; however, because the independence of and similarity among lightness illusions have not been well established, potential interdependencies among the illusions in the test set could distort the evaluation of models. Moreover, evaluating a model with a single value obscures where its strengths and weaknesses lie. We collected the magnitudes of various lightness illusions in two online experiments and applied exploratory factor analyses. Both experiments identified underlying factors in these illusions, suggesting that they can be classified into a few distinct groups. Experiment 1 identified three common factors: assimilation, contrast, and White's effect. Experiment 2, with a different illusion set, identified two factors: assimilation and contrast. We then examined three well-known models based on early visual processes using the outcomes of the experiments. The examination revealed biases in the models toward specific factors or sets of illusions, suggesting their limitations. This study clarifies that correlations among illusion magnitudes provide valuable insights into both illusions and models, and it highlights the need to assess models on their ability to account for underlying factors rather than for individual illusions.
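An exploratory factor analysis of this kind can be sketched with scikit-learn, where rows are observers and columns are illusion magnitudes, and the rotated loadings group illusions that covary across individuals. The estimator, rotation, and data below are illustrative assumptions; the authors' EFA procedure is not detailed in the abstract.

```python
# Sketch: exploratory factor analysis of per-observer illusion magnitudes.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 12))            # placeholder: 150 observers x 12 illusions
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each illusion's magnitudes

fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(X)
loadings = fa.components_.T               # (n_illusions, n_factors)
print(np.round(loadings, 2))              # which illusions load on which factor
```

With real data, illusions loading on a common factor (e.g., assimilation-type displays) would show large loadings in the same column; the placeholder random matrix here will show no such structure.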
Local motion governs visibility and suppression of biological motion in continuous flash suppression.
William Swann, Matthew Davidson, Gabriel Clouston, David Alais
Journal of Vision, 25(12):25, October 2025. doi:10.1167/jov.25.12.25
Presenting a unique visual stimulus to each eye induces a dynamic perceptual state in which only one image is perceived at a time while the other is suppressed from awareness. This phenomenon, known as interocular suppression, has allowed researchers to probe the dynamics of visual awareness and unconscious processing in the visual system. A key result is that different categories of visual stimuli may not be suppressed equally, but there is still wide debate as to whether low- or high-level visual features modulate interocular suppression. Here we quantify and compare the strength of suppression for various motion stimuli against biological motion stimuli that are rich in high-level semantic information. We employ the tracking continuous flash suppression method, which recently demonstrated uniform suppression depth for a variety of static images varying in semantic content. The cumulative findings of our three experiments show that suppression depth varies not with the strength of the suppressor alone but with low-level visual motion features, in contrast to the uniform suppression depth previously shown for static images. Notably, disrupting high-level semantic information by inverting or rotating the biological motion did not alter suppression depth. Ultimately, our data support the dependence of suppression depth on local motion information, further supporting the low-level, local-precedence account of interocular suppression.
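In tracking continuous flash suppression, suppression depth is commonly computed as the gap between the contrast at which the target breaks into awareness and the contrast at which it re-suppresses. The sketch below assumes that convention, expressed in dB; the authors' exact computation is not stated in the abstract.

```python
# Sketch: suppression depth from breakthrough/re-suppression contrasts.
import numpy as np

def suppression_depth_db(breakthrough, resuppression):
    """Mean dB gap between breakthrough and re-suppression contrasts."""
    b = 20.0 * np.log10(np.asarray(breakthrough, dtype=float))
    r = 20.0 * np.log10(np.asarray(resuppression, dtype=float))
    return b.mean() - r.mean()

# Breakthrough near 20% contrast vs. re-suppression near 5% -> ~12 dB depth
print(suppression_depth_db([0.22, 0.18, 0.20], [0.05, 0.06, 0.05]))
```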
Dynamics of vision: Grouping takes longer than crowding.
Martina Morea, Michael H Herzog, Gregory Francis, Mauro Manassi
Journal of Vision, 25(12):16, October 2025. doi:10.1167/jov.25.12.16
Vision is often understood as a hierarchical, feedforward process in which visual processing proceeds from low-level features to high-level representations. Within tens of milliseconds, the fundamental features of the percept are established. Traditional models use this framework to explain visual crowding, where nearby elements impair target perception with minimal influence from stimulus duration. Here, we show that, at least for more complex displays, crowding involves highly dynamic processes. We determined vernier offset discrimination thresholds for different flanker configurations. In Experiment 1, at a 160-ms stimulus duration, crowding was weaker for flanking Cubes/Rectangles than for Lines, pointing toward underlying grouping processes. However, strong crowding occurred in all conditions at 20 ms, showing that grouping requires a minimum stimulus duration. In Experiment 2, the crowded vernier (20 ms) was preceded by a 20-ms Cubes display. This brief preview led to uncrowding of the subsequently presented flanked vernier, but only for flankers that ungroup at longer durations (i.e., Cubes). This uncrowding effect persisted for time spans up to 1 s (Experiment 3) but could be interrupted by elements presented between the preview and the flanked vernier (Experiment 4). Our findings are well predicted by the LAMINART model, which employs recurrent segmentation processes that unfold over time to separate objects into distinct representation layers. Taken together, our novel preview effect highlights the importance of spatiotemporal grouping in crowding. In contrast to classic feedforward models, we propose that crowding is a dynamic process in which multiple interpretations are modulated and gated by grouping mechanisms evolving over time.
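Vernier offset discrimination thresholds are typically estimated by fitting a psychometric function to the response proportions. The following sketch fits a cumulative Gaussian with SciPy and reads off a 75%-correct threshold; the data and fitting details are assumptions, as the abstract does not specify the authors' procedure.

```python
# Sketch: vernier threshold from a cumulative-Gaussian psychometric fit.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Proportion of "right" responses at each vernier offset (hypothetical data)
offsets = np.array([-4.0, -2.0, -1.0, 1.0, 2.0, 4.0])   # arcmin
p_right = np.array([0.05, 0.20, 0.35, 0.70, 0.85, 0.97])

def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, offsets, p_right, p0=(0.0, 2.0))
threshold = sigma * norm.ppf(0.75)   # offset for 75% correct, relative to mu
print(f"bias = {mu:.2f} arcmin, threshold = {threshold:.2f} arcmin")
```

Stronger crowding corresponds to a larger fitted sigma and hence a higher threshold, so comparing thresholds across flanker configurations and durations quantifies the (un)crowding effects described above.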
In the eye of the beholder? Gaze perception and the external morphology of the human eye.
Conrad Alting, Gernot Horstmann
Journal of Vision, 25(12):24, October 2025. doi:10.1167/jov.25.12.24

A well-known finding from research on gaze perception in triadic gaze tasks is the overestimation of horizontal gaze directions: in general, a looker model's gaze appears to deviate more from the straight line of sight than is objectively the case. Although there is by now substantial evidence for what Anstis et al. (1969) termed the overestimation effect, results vary regarding the absolute overestimation factor. Starting from the occlusion hypothesis of Anstis et al. (1969), the present study examines the influence of the horizontal iris movement range, operationalized as the sclera size index, on overestimation factors acquired for a sample of 40 looker models. The study yielded two main findings. First, the horizontal iris movement range (sclera size index: M = 2.02, SD = 0.11, min = 1.79, max = 2.25) proved not useful for explaining variance in the overestimation factors (M = 1.79, SD = 0.16, min = 1.49, max = 2.24) obtained separately for each looker model. Second, intraclass correlations revealed that variance in perceived gaze directions between observers was roughly 10 times larger (ICC = 0.189) than variance between looker models (ICC = 0.019). The results strongly emphasize the need for larger and more diverse observer samples and may serve as a post hoc justification for using only a few, or no, different looker models in triadic gaze judgment tasks.
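The intraclass correlations reported above can be illustrated with the textbook one-way random-effects estimator, ICC(1), computed from between- and within-group mean squares; the authors' exact ICC variant is an assumption here.

```python
# Sketch: one-way random-effects ICC(1) for a balanced judgment matrix.
import numpy as np

def icc1(x):
    """ICC(1) for a balanced (n_groups x k_raters) data matrix."""
    n, k = x.shape
    grand = x.mean()
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(2)
judgments = rng.normal(size=(40, 25))   # placeholder: 40 lookers x 25 observers
print(icc1(judgments))                  # looker-model ICC (rows as groups)
print(icc1(judgments.T))                # observer ICC (transpose swaps roles)
```

Transposing the matrix swaps which factor is treated as the grouping variable, which is how the looker-model and observer variance contributions can be compared, as in the abstract's 0.019 versus 0.189 contrast.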