The three-factor model holds that three components contribute to the measured inhibition of return (IOR) effect: the spatial orienting benefit triggered by the cue, the spatial selection benefit of cue-target matching, and the detection cost of distinguishing the cue from the target. According to the model, the spatial selection benefit dominates the IOR effect in discrimination tasks, whereas the detection cost is negligible. The present study tested the three-factor model in a discrimination task under the cue-target paradigm by manipulating the spatial location and nonspatial feature consistency of the cue and target, and by using a central reorienting cue to promote or hinder attentional disengagement from the cued location. The results indicated that all three factors contributed to the measured IOR effect in the discrimination task. Interestingly, the IOR effect remained stable when the cue and target were perfectly repeated and attention was maintained at the cued location, implying that detection cost is not a negligible factor. The current study thus supports the contribution of all three factors to the measured IOR effect; however, we argue that the role of detection cost in discrimination tasks under different paradigms should be further refined.
In contrast to prototypical facial expressions, we show less perceptual tolerance when perceiving vague expressions, as demonstrated by an interpretation bias: anger or happiness is perceived more frequently when categorizing ambiguous expressions of angry and happy faces that are morphed in different proportions and displayed under high- or low-quality conditions. However, it remains unclear whether this interpretation bias is specific to emotion categories or reflects a general negativity versus positivity bias, and whether the degree of this bias is affected by the valence or category of the two morphed expressions. These questions were examined in two eye-tracking experiments by systematically manipulating expression ambiguity and image quality in fear- and sadness-happiness faces (Experiment 1) and by directly comparing anger-, fear-, sadness-, and disgust-happiness expressions (Experiment 2). We found that increasing expression ambiguity and degrading image quality induced a general negativity versus positivity bias in expression categorization. The degree of the negativity bias, the associated reaction times, and face-viewing gaze allocation were further modulated by the different expression combinations. Thus, although we show a viewing-condition-dependent bias in interpreting vague facial expressions that display valence-contradicting expressive cues, the perception of these ambiguous expressions appears to be guided by a categorical process similar to that involved in perceiving prototypical expressions.
Holistic processing aids in the discrimination of visually similar objects, but it may also come with a cost. Indeed, holistic processing may improve the ability to detect that a face has changed while impairing the ability to locate where the change occurred. We investigated the capacity to detect the occurrence of a change versus the capacity to localize a change for faces, houses, and words. Change detection was better than change localization for faces, whereas change localization outperformed change detection for houses. For words, there was no difference between detection and localization. Previous studies have shown that words are processed holistically; for example, the word composite effect was found for phonologically consistent words but not for phonologically inconsistent words. However, although visual words are objects of visual expertise that are processed holistically, they are also linguistic entities for which letter position information is crucial. Thus, the importance of localizing letters and features may augment the capacity to localize a change in words, making the detection of a change and the localization of a change equivalent.
We examined whether the direction of a hand motion that is congruent or incongruent with a concurrent target motion can influence representational momentum for that target. Participants viewed a leftward or rightward moving target while moving their hand leftward or rightward, or while not moving their hand. Prior studies of mental rotation found that congruency between the direction of mental rotation and the direction of a concurrent physical rotation of a stimulus influenced mental rotation. Because mental rotation and representational momentum each involve extrapolation of target motion, congruency between the direction of hand motion and the direction of target motion might be predicted to influence representational momentum of the target. Robust representational momentum occurred in all conditions, but neither the congruency of hand and target motion nor the presence or absence of hand motion affected representational momentum. The results are consistent with the hypothesis that the generation of representational momentum involves sensory rather than motor processes.
The cone of gaze is the range of a looker's gaze directions that an observer accepts as direct. The present research asks how mild strabismus, a condition in which the two eyes point in slightly different directions, influences the cone of gaze. Normally, the two eyes rotate in a coordinated manner such that both are directed at the same fixation point. With strabismus, there are two fixation points and, therefore, two directions in which the eyes point. This raises the question of the direction and the shape (i.e., width) of the gaze cone. Two experiments are conducted with simulated mild strabismus, testing three conditions: the two strabismic conditions of esotropia and exotropia, and one orthotropic (nonstrabismic) condition. Results show that the direction of the gaze cone is roughly the average of the directions of the two eyes. Furthermore, the width of the gaze cone is not affected by simulated strabismus and is thus the same for the strabismic and orthotropic conditions. The results imply a model in which the direction of gaze is first perceived on the basis of both eyes, and the gaze cone is then constructed around this combined gaze direction.
Raven matrices are widely considered a pure test of cognitive abilities. Previous research has examined the extent to which cognitive strategies predict the number of correct responses to Raven items. This study examined whether response times can be explained directly from the centrality and visual complexity of the matrix cells (edge density and perceived complexity). A total of 159 participants completed a 12-item version of the Raven Advanced Progressive Matrices. Beyond the effect of item number (an index of item difficulty), the findings demonstrated a positive correlation between the visual complexity of Raven items and both the mean response time and the number of fixations on the matrix (a strong correlate of response time). Moreover, more centrally placed cells, as well as more complex cells, received more fixations. It is concluded that response times on Raven matrices are affected by low-level stimulus attributes, namely visual complexity and eccentricity.