{"title":"News From the Field","authors":"David Free","doi":"10.1089/g4h.2014.2144","DOIUrl":"https://doi.org/10.1089/g4h.2014.2144","url":null,"abstract":"","PeriodicalId":19838,"journal":{"name":"Perception & Psychophysics","volume":"43 1","pages":"3-4"},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81392640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Time course of grouping of shape by perceptual closure: effects of spatial proximity and collinearity.","authors":"Bat-Sheva Hadad, R. Kimchi","doi":"10.1167/8.6.582","DOIUrl":"https://doi.org/10.1167/8.6.582","url":null,"abstract":"We examined the time course of grouping of shape by perceptual closure in three experiments using a primed matching task. The gaps between the closure-inducing contours varied in size. In addition, depending on the distribution of the gaps along the closure-inducing contours--occurring at straight contour segments or at the point of change in contour direction--collinearity was either present or absent. In the absence of collinearity, early priming of the shape was observed for spatially close fragments, but not for spatially distant fragments. When collinearity was available, the shape of both spatially close and spatially distant fragments was primed at brief exposures. These results suggest that spatial proximity is critical for the rapid grouping of shape by perceptual closure in the absence of collinearity, but collinearity facilitates the rapid grouping of shape when the closure-inducing fragments are spatially distant. In addition, shape priming persisted over time only when the collinear fragments were spatially close, suggesting that a stable representation of shape depends both on collinearity and spatial proximity between the closure-inducing fragments.","PeriodicalId":19838,"journal":{"name":"Perception & Psychophysics","volume":"86 1","pages":"818-27"},"PeriodicalIF":0.0,"publicationDate":"2010-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83194438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prior experience affects amodal completion in pigeons.","authors":"Y. Nagasaka, O. Lazareva, E. Wasserman","doi":"10.1167/6.6.764","DOIUrl":"https://doi.org/10.1167/6.6.764","url":null,"abstract":"In a three-alternative forced-choice task, 4 pigeons were trained to discriminate a target stimulus consisting of two colored shapes, one of which partially occluded the other, from two foil stimuli that portrayed either a complete or an incomplete version of the occluded shape. The dependent measure was the percentage of total errors that the birds committed to the complete foil. At the outset of training, the pigeons committed approximately 50% of total errors to the complete foil, but as training progressed, the percentage of errors to the complete foil rose. When the pigeons were given a second exposure to the initial set of stimuli, they committed 70% of total errors to the complete foil, suggesting that they now saw the complete foil as more similar to the occluded target than the incomplete foil. These results suggest that experience with 2-D images may facilitate amodal completion in pigeons, perhaps via perceptual learning.","PeriodicalId":19838,"journal":{"name":"Perception & Psychophysics","volume":"28 1","pages":"596-605"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87992495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A stereo advantage in generalizing over changes in viewpoint on object recognition tasks.","authors":"D. J. Bennett, Q. Vuong","doi":"10.1167/6.6.313","DOIUrl":"https://doi.org/10.1167/6.6.313","url":null,"abstract":"In four experiments, we examined whether generalization to unfamiliar views was better under stereo viewing or under nonstereo viewing across different tasks and stimuli. In the first three experiments, we used a sequential matching task in which observers matched the identities of shaded tube-like objects. Across Experiments 1-3, we manipulated the presentation method of the nonstereo stimuli (having observers wear an eye patch vs. showing observers the same screen image) and the magnitude of the viewpoint change (30 degrees vs. 38 degrees). In Experiment 4, observers identified \"easy\" and \"hard\" rotating wire-frame objects at the individual level under stereo and nonstereo viewing conditions. We found a stereo advantage for generalizing to unfamiliar views in all the experiments. However, in these experiments, performance remained view dependent even under stereo viewing. These results strongly argue against strictly 2-D image-based models of object recognition, at least for the stimuli and recognition tasks used, and suggest that observers used representations that contained view-specific local depth information.","PeriodicalId":19838,"journal":{"name":"Perception & Psychophysics","volume":"52 1","pages":"1082-93"},"PeriodicalIF":0.0,"publicationDate":"2010-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82271255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The visual perception of length along intrinsically curved surfaces.","authors":"J. Norman, Hideko F. Norman, Young-lim Lee, D. Stockton, J. Lappin","doi":"10.1167/2.7.84","DOIUrl":"https://doi.org/10.1167/2.7.84","url":null,"abstract":"The ability of observers to perceive three-dimensional (3-D) distances or lengths along intrinsically curved surfaces was investigated in three experiments. Three physically curved surfaces were used: convex and/or concave hemispheres (Experiments 1 and 3) and a hyperbolic paraboloid (Experiment 2). The first two experiments employed a visual length-matching task, but in the final experiment the observers estimated the surface lengths motorically by varying the separation between their two index fingers. In general, the observers' judgments of surface length in both tasks (perceptual vs. motoric matching) were very precise but were not necessarily accurate. Large individual differences (overestimation, underestimation, etc.) in the perception of length occurred. There were also significant effects of viewing distance, type of surface, and orientation of the spatial intervals on the observers' judgments of surface length. The individual differences and failures of perceptual constancy that were obtained indicate that there is no single relationship between physical and perceived distances on 3-D surfaces that is consistent across observers.","PeriodicalId":19838,"journal":{"name":"Perception & Psychophysics","volume":"160 1","pages":"77-88"},"PeriodicalIF":0.0,"publicationDate":"2010-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80642467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Covert shifts of attention precede involuntary eye movements.","authors":"M. Peterson, A. Kramer, D. E. Irwin","doi":"10.1167/2.7.163","DOIUrl":"https://doi.org/10.1167/2.7.163","url":null,"abstract":"There is considerable evidence that covert visual attention precedes voluntary eye movements to an intended location. What happens to covert attention when an involuntary saccadic eye movement is made? In agreement with other researchers, we found that attention and voluntary eye movements are tightly coupled in such a way that attention always shifts to the intended location before the eyes begin to move. However, we found that when an involuntary eye movement is made, attention first precedes the eyes to the unintended location and then switches to the intended location, with the eyes following this pattern a short time later. These results support the notion that attention and saccade programming are tightly coupled.","PeriodicalId":19838,"journal":{"name":"Perception & Psychophysics","volume":"260 1","pages":"398-405"},"PeriodicalIF":0.0,"publicationDate":"2010-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76699571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual saliency of points along the contour of everyday objects: a large-scale study.","authors":"J. De Winter, J. Wagemans","doi":"10.1167/2.7.487","DOIUrl":"https://doi.org/10.1167/2.7.487","url":null,"abstract":"The aim of this large-scale study was to find out which points along the contour of a shape are most salient and why. Many subjects (N=161) were asked to mark salient points on contour stimuli, derived from a large set of line drawings of everyday objects (N=260). The database of more than 200,000 marked points was analyzed extensively to test the hypothesis, first formulated by Attneave (1954), that curvature extrema are most salient. This hypothesis was confirmed by the data: Highly salient points are usually very close to strong curvature extrema (positive maxima and negative minima). However, perceptual saliency of points along the contour is determined by more factors than just local absolute curvature. This was confirmed by an extensive correlational analysis of perceptual saliency in relation to ten different stimulus factors. A point is more salient when the two line segments connecting it with its two neighboring salient points make a sharp turning angle and when the 2-D part defined by the triplet of salient points is less compact and sticks out more.","PeriodicalId":19838,"journal":{"name":"Perception & Psychophysics","volume":"19 1","pages":"50-64"},"PeriodicalIF":0.0,"publicationDate":"2010-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90039522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object and spatial representations in the corner enhancement effect.","authors":"G. Cole, P. Skarratt, A. Gellatly","doi":"10.1068/V070101","DOIUrl":"https://doi.org/10.1068/V070101","url":null,"abstract":"Cole, Gellatly, and Blurton have shown that targets presented adjacent to geometric corners are detected more efficiently than targets presented adjacent to straight edges. In six experiments, we examined how this corner enhancement effect is modulated by corner-of-object representations (i.e., corners that define an object's shape) and local base-level corners that occur as a result of, for instance, overlapping the straight edges of two objects. The results show that the corner phenomenon is greater for corners of object representations than for corners that do not define an object's shape. We also examined whether the corner effect persists within the contour boundaries of an object, as well as on the outside. The results showed that a spatial gradient of attention accompanies the corner effect outside the contour boundaries of an object but that processing within an object is uniform, with no corner effect occurring. We discuss these findings in relation to space-based and object-based theories of attention.","PeriodicalId":19838,"journal":{"name":"Perception & Psychophysics","volume":"5 1","pages":"400-12"},"PeriodicalIF":0.0,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87791364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shifting attention into and out of objects: Evaluating the processes underlying the object advantage","authors":"James M. Brown, Hope I. Denney","doi":"10.1167/5.8.1032","DOIUrl":"https://doi.org/10.1167/5.8.1032","url":null,"abstract":"Visual cuing studies have been widely used to demonstrate and explore contributions from both object- and location-based attention systems. A common finding has been a response advantage for shifts of attention occurring within an object, relative to shifts of an equal distance between objects. The present study examined this advantage for within-object shifts in terms of engage and disengage operations within the object- and location-based attention systems. The rationale was that shifts of attention between objects require object-based attention to disengage from one object before shifting to another, something that is not required for shifts of attention within an object or away from a location. One- and two-object displays were used to assess object-based contributions related to disengaging and engaging attention within, between, into, and out of objects. The results suggest that the “object advantage” commonly found in visual cuing experiments in which shifts of attention are required is primarily due to disengage operations associated with object-based attention.","PeriodicalId":19838,"journal":{"name":"Perception & Psychophysics","volume":"133 1","pages":"606-618"},"PeriodicalIF":0.0,"publicationDate":"2005-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86198312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The efficiency of biological motion perception.","authors":"J. Gold, D. Tadin, Susan C. Cook, R. Blake","doi":"10.1167/5.8.1057","DOIUrl":"https://doi.org/10.1167/5.8.1057","url":null,"abstract":"Humans can readily perceive biological motion from point-light (PL) animations, which create an image of a moving human figure by tracing the trajectories of a small number of light points affixed to a moving human body. We have applied ideal observer analysis to a standard biological motion discrimination task involving either full-figure or PL displays. Contrary to current dogma, we find that PL animations can be rich in potential stimulus information but that human observers are remarkably inefficient at exploiting this information. Although our findings do not discount the utility of PL animation, they do provide a realistic measure of the computational challenge posed by biological motion perception.","PeriodicalId":19838,"journal":{"name":"Perception & Psychophysics","volume":"6 1","pages":"88-95"},"PeriodicalIF":0.0,"publicationDate":"2005-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73915778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}