Planar catadioptric stereo: geometry and calibration
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.786912 | Pages: 22-28, Vol. 1
J. Gluckman, S. Nayar
By using mirror reflections of a scene, stereo images can be captured with a single camera (catadioptric stereo). Single camera stereo provides both geometric and radiometric advantages over traditional two camera stereo. In this paper we discuss the geometry and calibration of catadioptric stereo with two planar mirrors and show how the relative orientation, the epipolar geometry and the estimation of the focal length are constrained by planar motion. In addition, we have implemented a real-time system which demonstrates the viability of stereo with mirrors as an alternative to traditional two camera stereo.
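The core geometric fact is compact enough to sketch: a planar mirror reflects the real camera into a virtual camera, so two mirrors yield a single-camera stereo pair whose relative pose is a planar motion. A minimal illustration (the plane parameters and poses below are invented, and this is not the paper's calibration procedure):

```python
import numpy as np

def mirror_reflection(n, d):
    """4x4 reflection through the plane n . x = d (n a unit normal).
    Reflecting the real camera through a mirror plane gives the pose of
    the virtual camera that the mirror creates."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    D = np.eye(4)
    D[:3, :3] -= 2.0 * np.outer(n, n)   # Householder reflection
    D[:3, 3] = 2.0 * d * n
    return D

# Two mirrors -> two virtual cameras seen by one physical camera. Their
# relative pose D2 @ inv(D1) is a planar motion (a rotation about an
# axis parallel to the mirrors' intersection line plus an in-plane
# translation), which is what constrains the relative orientation,
# epipolar geometry and focal length in the paper.
D1 = mirror_reflection([+1, 0, 0.3], 0.5)
D2 = mirror_reflection([-1, 0, 0.3], 0.5)
relative_pose = D2 @ np.linalg.inv(D1)
```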
{"title":"Planar catadioptric stereo: geometry and calibration","authors":"J. Gluckman, S. Nayar","doi":"10.1109/CVPR.1999.786912","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786912","url":null,"abstract":"By using mirror reflections of a scene, stereo images can be captured with a single camera (catadioptric stereo). Single camera stereo provides both geometric and radiometric advantages over traditional two camera stereo. In this paper we discuss the geometry and calibration of catadioptric stereo with two planar mirrors and show how the relative orientation, the epipolar geometry and the estimation of the focal length are constrained by planar motion. In addition, we have implemented a real-time system which demonstrates the viability of stereo with mirrors as an alternative to traditional two camera stereo.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"224 1","pages":"22-28 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80063035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A framework for learning query concepts in image classification
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.786973 | Pages: 423-429, Vol. 1
A. L. Ratan, O. Maron, W. Grimson, Tomas Lozano-Perez
In this paper, we adapt the Multiple Instance Learning paradigm, using the Diverse Density algorithm as a way of modeling the ambiguity in images, in order to learn "visual concepts" that can be used to classify new images. In this framework, a user labels an image as positive if the image contains the concept. Each example image is a bag of instances (sub-images), where only the bag is labeled, not the individual instances. From a small collection of positive and negative examples, the system learns the concept and uses it to retrieve images that contain the concept from a large database. The learned "concepts" are simple templates that capture the color, texture and spatial properties of the class of images. We introduced this method earlier in the domain of natural scene classification, using simple, low-resolution sub-images as instances. In this paper, we extend the bag generator (the mechanism that takes an image and generates a set of instances) to generate more complex instances using multiple cues on segmented high-resolution images. We show that this method can be used to learn certain object class concepts (e.g. cars) in addition to natural scenes.
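For concreteness, the standard noisy-or Diverse Density objective from Maron and Lozano-Pérez's earlier work can be sketched as follows; the scale parameter and the bag format are assumptions of this illustration, not details from the paper:

```python
import numpy as np

def diverse_density(t, pos_bags, neg_bags, s=1.0):
    """Noisy-or Diverse Density of a candidate concept point t.

    Each bag is an (n_instances, n_features) array of sub-image feature
    vectors; s scales the Gaussian-like similarity. DD is high when t is
    close to some instance in every positive bag and far from every
    instance in every negative bag."""
    def pr_concept_in_bag(bag):
        d2 = np.sum((bag - t) ** 2, axis=1)
        return 1.0 - np.prod(1.0 - np.exp(-d2 / s ** 2))

    dd = 1.0
    for bag in pos_bags:
        dd *= pr_concept_in_bag(bag)
    for bag in neg_bags:
        dd *= 1.0 - pr_concept_in_bag(bag)
    return dd
```

In image retrieval, each image is a bag and each instance is the feature vector of a sub-image; the learned concept is the t maximizing DD, typically found by gradient ascent started from instances of the positive bags.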
{"title":"A framework for learning query concepts in image classification","authors":"A. L. Ratan, O. Maron, W. Grimson, Tomas Lozano-Perez","doi":"10.1109/CVPR.1999.786973","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786973","url":null,"abstract":"In this paper, we adapt the Multiple Instance Learning paradigm using the Diverse Density algorithm as a way of modeling the ambiguity in images in order to learn \"visual concepts\" that can be used to classify new images. In this framework, a user labels an image as positive if the image contains the concept. Each example image is a bag of instances (sub-images) where only the bag is labeled-not the individual instances (sub-images). From a small collection of positive and negative examples, the system learns the concept and uses it to retrieve images that contain the concept from a large database. The learned \"concepts\" are simple templates that capture the color, texture and spatial properties of the class of images. We introduced this method earlier in the domain of natural scene classification using simple, low resolution sub-images as instances. In this paper, we extend the bag generator (the mechanism which takes an image and generates a set of instances) to generate more complex instances using multiple cues on segmented high resolution images. We show that this method can be used to learn certain object class concepts (e.g. cars) in addition, to natural scenes.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"27 1","pages":"423-429 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77336349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual tracking and control using Lie algebras
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784996 | Pages: 652-657, Vol. 2
T. Drummond, R. Cipolla
A novel approach to visual servoing is presented, which takes advantage of the structure of the Lie algebra of affine transformations. The aim of this project is to use feedback from a visual sensor to guide a robot arm to a target position. The sensor is placed on the end effector of the robot (the 'camera-in-hand' approach) and thus provides direct feedback of the robot motion relative to the target scene via observed transformations of the scene. These scene transformations are obtained by measuring the affine deformations of a target planar contour, captured by use of an active contour, or snake. Deformations of the snake are constrained using the Lie groups of affine and projective transformations. Properties of the Lie algebra of affine transformations are exploited to integrate the observed deformations of the target contour, which can then be compensated for by appropriate robot motion using a non-linear control structure. These techniques have been implemented using a video camera to control a 5 DoF robot arm. Experiments with this implementation are presented, together with a discussion of the results.
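A hedged sketch of the Lie-algebra machinery the abstract refers to: the six generators of the planar affine group, and an update that integrates measured deformation coefficients by exponentiation. The coefficient values and the SciPy-based implementation are illustrative assumptions, not the paper's controller:

```python
import numpy as np
from scipy.linalg import expm

# Six generators of the 2D affine Lie algebra, as 3x3 matrices acting
# on homogeneous image points: x/y translation, rotation, dilation and
# two shear ("deformation") modes.
G = [
    np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], float),   # translate x
    np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], float),   # translate y
    np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], float),  # rotate
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]], float),   # dilate
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], float),  # shear 1
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float),   # shear 2
]

def affine_update(T, alphas):
    """Integrate measured deformation coefficients (one per generator)
    into the tracked transform by exponentiating in the Lie algebra."""
    A = sum(a * g for a, g in zip(alphas, G))
    return T @ expm(A)

T = np.eye(3)                   # current contour-to-image transform
T = affine_update(T, [0.5, -0.2, 0.01, 0.0, 0.0, 0.0])
```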
{"title":"Visual tracking and control using Lie algebras","authors":"T. Drummond, R. Cipolla","doi":"10.1109/CVPR.1999.784996","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784996","url":null,"abstract":"A novel approach to visual servoing is presented, which takes advantage of the structure of the Lie algebra of affine transformations. The aim of this project is to use feedback from a visual sensor to guide a robot arm to a target position. The sensor is placed in the end effector of the robot, the 'camera-in-hand' approach, and thus provides direct feedback of the robot motion relative to the target scene via observed transformations of the scene. These scene transformations are obtained by measuring the affine deformations of a target planar contour, captured by use of an active contour, or snake. Deformations of the snake are constrained using the Lie groups of affine and projective transformations. Properties of the Lie algebra of affine transformations are exploited to integrate observed deformations to the target contour which can be compensated with appropriate robot motion using a non-linear control structure. These techniques have been implemented using a video camera to control a 5 DoF robot arm. Experiments with this implementation are presented, together with a discussion of the results.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"1 1","pages":"652-657 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87063423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Harmonic maps and their applications in surface matching
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784731 | Pages: 524-530, Vol. 2
D. Zhang, M. Hebert
The surface-matching problem is investigated in this paper using a mathematical tool called harmonic maps. The theory of harmonic maps studies the mapping between different metric manifolds from the energy-minimization point of view. With the application of harmonic maps, a surface representation called harmonic shape images is generated to represent and match 3D free-form surfaces. The basic idea of harmonic shape images is to map a 3D surface patch with disc topology to a 2D domain and encode the shape information of the surface patch into the 2D image. This reduces the surface-matching problem to a 2D image-matching problem. Because they are generated using harmonic maps, harmonic shape images have the following advantages: they have a sound mathematical background; they preserve both the shape and continuity of the underlying surfaces; and they are robust to occlusion and independent of any specific surface sampling scheme. The performance of surface matching using harmonic maps is evaluated using real data. Preliminary results are presented in the paper.
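The mapping step can be sketched as a discrete Dirichlet problem: pin the patch boundary to the unit circle and solve a Laplace equation for the interior vertices. This illustration uses uniform edge weights; a closer approximation to a true harmonic map would use cotangent weights, and the paper's construction details may differ:

```python
import numpy as np

def harmonic_disc_embedding(n_verts, edges, boundary):
    """Map a disc-topology mesh patch to the plane: boundary vertices
    are pinned to the unit circle, and each interior vertex ends up at
    the average of its neighbors (discrete Laplace equation)."""
    boundary = np.asarray(boundary)
    uv = np.zeros((n_verts, 2))
    angles = np.linspace(0, 2 * np.pi, len(boundary), endpoint=False)
    uv[boundary] = np.c_[np.cos(angles), np.sin(angles)]

    # Graph Laplacian with uniform weights.
    L = np.zeros((n_verts, n_verts))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0

    interior = np.setdiff1d(np.arange(n_verts), boundary)
    # Solve L_II uv_I = -L_IB uv_B for the interior coordinates.
    rhs = -L[np.ix_(interior, boundary)] @ uv[boundary]
    uv[interior] = np.linalg.solve(L[np.ix_(interior, interior)], rhs)
    return uv
```

Shape attributes (e.g. curvature) sampled at the mapped vertex positions would then populate the 2D harmonic shape image that is matched.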
{"title":"Harmonic maps and their applications in surface matching","authors":"D. Zhang, M. Hebert","doi":"10.1109/CVPR.1999.784731","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784731","url":null,"abstract":"The surface-matching problem is investigated in this paper using a mathematical tool called harmonic maps. The theory of harmonic maps studies the mapping between different metric manifolds from the energy-minimization point of view. With the application of harmonic maps, a surface representation called harmonic shape images is generated to represent and match 3D freeform surfaces. The basic idea of harmonic shape images is to map a 3D surface patch with disc topology to a 2D domain and encode the shape information of the surface patch into the 2D image. This simplifies the surface-matching problem to a 2D image-matching problem. Due to the application of harmonic maps in generating harmonic shape images, harmonic shape images have the following advantages: they have sound mathematical background; they preserve both the shape and continuity of the underlying surfaces; and they are robust to occlusion and independent of any specific surface sampling scheme. The performance of surface matching using harmonic maps is evaluated using real data. Preliminary results are presented in the paper.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"19 1","pages":"524-530 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87334145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-level and generic models for visual search: When does high level knowledge help?
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784990 | Pages: 631-637, Vol. 2
A. Yuille, J. Coughlan
We analyze the problem of detecting a road target in background clutter and investigate the amount of prior (i.e. target-specific) knowledge needed to perform this search task. The problem is formulated in terms of Bayesian inference, and we define a Bayesian ensemble of problem instances. This formulation implies that the performance measures of different models depend on order parameters which characterize the problem. This demonstrates that if there is little clutter, then only weak knowledge about the target is required in order to detect it. However, at a critical value of the order parameters there is a phase transition, and it becomes effectively impossible to detect the target unless high-level, target-specific knowledge is used. These phase transitions determine different regimes within which different search strategies will be effective. These results have implications for bottom-up and top-down theories of vision.
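Schematically (the notation here is assumed rather than quoted from the paper), the Bayesian formulation scores a candidate path by a log-likelihood ratio, and an order parameter weighs the expected information per step against the clutter's branching factor:

```latex
% Schematic only; notation assumed, not quoted from the paper.
% A candidate path X = (x_1, ..., x_N) with local measurements y_{x_i}
% is scored by the log-likelihood ratio reward
\[
  r(X) = \sum_i \log \frac{P_{\mathrm{on}}(y_{x_i})}{P_{\mathrm{off}}(y_{x_i})}
       + \sum_i \log \frac{P_G(x_{i+1} \mid x_i)}{U(x_{i+1} \mid x_i)},
\]
% and an order parameter of the form
\[
  K = D\!\left(P_{\mathrm{on}} \,\|\, P_{\mathrm{off}}\right)
    + D\!\left(P_G \,\|\, U\right) - \log Q
\]
% (Kullback--Leibler divergences of the on-/off-target intensity models
% and of the geometric prior against a uniform proposal, minus the log
% branching factor Q) governs detectability: for K > 0 the true path
% dominates the clutter, and detection degrades sharply as K -> 0.
```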
{"title":"High-level and generic models for visual search: When does high level knowledge help?","authors":"A. Yuille, J. Coughlan","doi":"10.1109/CVPR.1999.784990","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784990","url":null,"abstract":"We analyze the problem of detecting a road target in background clutter and investigate the amount of prior (i.e. target specific) knowledge needed to perform this search task. The problem is formulated in terms of Bayesian inference and we define a Bayesian ensemble of problem instances. This formulation implies that the performance measures of different models depend on order parameters which characterize the problem. This demonstrates that if there is little clutter then only weak knowledge about the target is required in order to detect the target. However at a critical value of the order parameters there is a phase transition and it becomes effectively impossible to detect the target unless high-level target specific knowledge is used. These phase transitions determine different regimes within which different search strategies will be effective. These results have implications for bottom-up and top-down theories of vision.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"46 1","pages":"631-637 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87809201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic occluding contours: a new external-energy term for snakes
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784635 | Pages: 232-238, Vol. 2
M. Covell, Trevor Darrell
Dynamic contours, or snakes, provide an effective method for tracking complex moving objects for segmentation and recognition tasks, but have difficulty tracking occluding boundaries on cluttered backgrounds. To compensate for this shortcoming, dynamic contours often rely on detailed object-shape or motion models to distinguish between the boundary of the tracked object and other boundaries in the background. In this paper we present a complementary approach to detailed object models: We improve the discriminative power of the local image measurements that drive the tracking process. We describe a new, robust external-energy term for dynamic contours that can track occluding boundaries without detailed object models. We show how our image model improves tracking in cluttered scenes, and describe how a fine-grained image-segmentation mask is created directly from the local image measurements used for tracking.
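For context, a minimal semi-implicit snake iteration in the style of Kass, Witkin and Terzopoulos; the external-force callback is where a term such as the paper's new external energy would enter (the parameter values here are arbitrary assumptions):

```python
import numpy as np

def snake_step(pts, ext_force, alpha=0.1, beta=0.05, gamma=1.0):
    """One semi-implicit gradient-descent step for a closed snake.

    pts: (N, 2) contour points. ext_force(pts) -> (N, 2) is the external
    force field derived from image measurements; alpha and beta weight
    the internal tension and rigidity terms."""
    n = len(pts)
    # Pentadiagonal internal-energy matrix (circulant: closed contour).
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * alpha + 6 * beta
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = -alpha - 4 * beta
        A[i, (i - 2) % n] = A[i, (i + 2) % n] = beta
    # Semi-implicit update: (A + gamma I) x_new = gamma x_old + f_ext.
    return np.linalg.solve(A + gamma * np.eye(n),
                           gamma * pts + ext_force(pts))
```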
{"title":"Dynamic occluding contours: a new external-energy term for snakes","authors":"M. Covell, Trevor Darrell","doi":"10.1109/CVPR.1999.784635","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784635","url":null,"abstract":"Dynamic contours, or snakes, provide an effective method for tracking complex moving objects for segmentation and recognition tasks, but have difficulty tracking occluding boundaries on cluttered backgrounds. To compensate for this shortcoming, dynamic contours often rely on detailed object-shape or motion models to distinguish between the boundary of the tracked object and other boundaries in the background. In this paper we present a complementary approach to detailed object models: We improve the discriminative power of the local image measurements that drive the tracking process. We describe a new, robust external-energy term for dynamic contours that can track occluding boundaries without detailed object models. We show how our image model improves tracking in cluttered scenes, and describe how a fine-grained image-segmentation mask is created directly from the local image measurements used for tracking.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"50 1","pages":"232-238 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91166286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion segmentation: a synergistic approach
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784633 | Pages: 226-231
C. Fermüller, T. Brodský, Y. Aloimonos
Since estimation of camera motion requires knowledge of independent motion, and moving object detection and localization requires knowledge about the camera motion, the two problems of motion estimation and segmentation need to be solved together in a synergistic manner. This paper provides an approach to treating both these problems simultaneously. The technique introduced here is based on a novel concept, "scene ruggedness" which parameterizes the variation in estimated scene depth with the error in the underlying three-dimensional (3D) motion. The idea is that incorrect 3D motion estimates cause distortions in the estimated depth map, and as a result smooth scene patches are computed as rugged surfaces. The correct 3D motion can be distinguished, as it does not cause any distortion and thus gives rise to the background patches with the least depth variation between depth discontinuities, with the locations corresponding to independent motion being rugged. The algorithm presented employs a binocular observer whose nature is exploited in the extraction of depth discontinuities, a step that facilitates the overall procedure, but the technique can be extended to a monocular observer in a variety of ways.
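A rough sketch of the underlying computation: under the instantaneous rigid-motion flow model, each candidate 3D motion implies an inverse-depth map, and a smoothness ("ruggedness") score compares candidates. The discretization and the score below are illustrative assumptions; the paper additionally exploits binocular depth discontinuities, which are omitted here:

```python
import numpy as np

def inverse_depth_from_flow(flow, pts, t, w, f=1.0):
    """Per-pixel inverse depth implied by a candidate rigid motion
    (translation t, rotation w), using the instantaneous flow model
    u = (1/Z) A(x) t + B(x) w. A wrong (t, w) warps smooth surfaces
    into rugged depth maps; the correct motion minimizes ruggedness."""
    inv_depth = np.empty(len(pts))
    for i, (x, y) in enumerate(pts):
        A = np.array([[-f, 0, x], [0, -f, y]], float)
        B = np.array([[x * y / f, -(f + x * x / f), y],
                      [f + y * y / f, -x * y / f, -x]], float)
        trans = A @ t                    # translational flow direction
        resid = flow[i] - B @ w          # flow minus rotational part
        inv_depth[i] = (resid @ trans) / (trans @ trans + 1e-12)
    return inv_depth

def ruggedness(inv_depth_img):
    """Total variation of the estimated depth map, used here as the
    smoothness score compared across candidate motions."""
    gx, gy = np.gradient(inv_depth_img)
    return np.sum(np.abs(gx) + np.abs(gy))
```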
{"title":"Motion segmentation: a synergistic approach","authors":"C. Fermüller, T. Brodský, Y. Aloimonos","doi":"10.1109/CVPR.1999.784633","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784633","url":null,"abstract":"Since estimation of camera motion requires knowledge of independent motion, and moving object detection and localization requires knowledge about the camera motion, the two problems of motion estimation and segmentation need to be solved together in a synergistic manner. This paper provides an approach to treating both these problems simultaneously. The technique introduced here is based on a novel concept, \"scene ruggedness\" which parameterizes the variation in estimated scene depth with the error in the underlying three-dimensional (3D) motion. The idea is that incorrect 3D motion estimates cause distortions in the estimated depth map, and as a result smooth scene patches are computed as rugged surfaces. The correct 3D motion can be distinguished, as it does not cause any distortion and thus gives rise to the background patches with the least depth variation between depth discontinuities, with the locations corresponding to independent motion being rugged. The algorithm presented employs a binocular observer whose nature is exploited in the extraction of depth discontinuities, a step that facilitates the overall procedure, but the technique can be extended to a monocular observer in a variety of ways.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"30 1","pages":"226-231"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83782712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating mixture models of images and inferring spatial transformations using the EM algorithm
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.786972 | Pages: 416-422, Vol. 1
B. Frey, N. Jojic
Mixture modeling and clustering algorithms are effective, simple ways to represent images using a set of data centers. However, in situations where the images include background clutter and transformations such as translation, rotation, shearing and warping, these methods extract data centers that include clutter and represent different transformations of essentially the same data. Taking face images as an example, it would be more useful for the different clusters to represent different poses and expressions, instead of cluttered versions of different translations, scales and rotations. By including clutter and transformation as unobserved, latent variables in a mixture model, we obtain a new "transformed mixture of Gaussians", which is invariant to a specified set of transformations. We show how a linear-time EM algorithm can be used to fit this model by jointly estimating a mixture model for the data and inferring the transformation for each image. We show that this algorithm can jointly align images of a human head and learn different poses. We also find that the algorithm performs better than k-nearest neighbors and mixtures of Gaussians on handwritten digit recognition.
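A brute-force sketch of EM for a transformed mixture of Gaussians over a discrete transformation set (fixed isotropic variance, permutation-encoded translations; the paper's efficient linear-time formulation and its clutter modeling are not reproduced here):

```python
import numpy as np

def tmg_em(images, n_clusters, shifts, n_iter=20, var=1.0):
    """Sketch of EM for a transformed mixture of Gaussians.

    images: (N, D) flattened images. shifts[k] is an index array such
    that images[:, shifts[k]] undoes transformation k, mapping an
    observed image back into the latent (untransformed) frame."""
    N, D = images.shape
    rng = np.random.default_rng(0)
    mu = images[rng.choice(N, n_clusters, replace=False)].astype(float)
    pi = np.full(n_clusters, 1.0 / n_clusters)
    for _ in range(n_iter):
        # E-step: joint responsibilities over (cluster, transform).
        logr = np.empty((N, n_clusters, len(shifts)))
        for k, s in enumerate(shifts):
            undone = images[:, s]
            for c in range(n_clusters):
                logr[:, c, k] = (np.log(pi[c]) -
                                 np.sum((undone - mu[c]) ** 2, axis=1) / (2 * var))
        logr -= logr.max(axis=(1, 2), keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=(1, 2), keepdims=True)
        # M-step: means are responsibility-weighted averages of the
        # un-transformed images; mixing weights marginalize the transform.
        for c in range(n_clusters):
            num = np.zeros(D)
            for k, s in enumerate(shifts):
                num += r[:, c, k] @ images[:, s]
            mu[c] = num / (r[:, c].sum() + 1e-12)
        pi = r.sum(axis=(0, 2)) / N
    return mu, pi
```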
{"title":"Estimating mixture models of images and inferring spatial transformations using the EM algorithm","authors":"B. Frey, N. Jojic","doi":"10.1109/CVPR.1999.786972","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786972","url":null,"abstract":"Mixture modeling and clustering algorithms are effective, simple ways to represent images using a set of data centers. However, in situations where the images include background clutter and transformations such as translation, rotation, shearing and warping, these methods extract data centers that include clutter and represent different transformations of essentially the same data. Taking face images as an example, it would be more useful for the different clusters to represent different poses and expressions, instead of cluttered versions of different translations, scales and rotations. By including clutter and transformation as unobserved, latent variables in a mixture model, we obtain a new \"transformed mixture of Gaussians\", which is invariant to a specified set of transformations. We show how a linear-time EM algorithm can be used to fit this model by jointly estimating a mixture model for the data and inferring the transformation for each image. We show that this algorithm can jointly align images of a human head and learn different poses. We also find that the algorithm performs better than k-nearest neighbors and mixtures of Gaussians on handwritten digit recognition.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"57 1","pages":"416-422 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83894606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A biprism-stereo camera system
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.786921 | Pages: -87, Vol. 1
D. Lee, In-So Kweon, R. Cipolla
In this paper we propose a novel and practical stereo camera system that uses only one camera and a biprism placed in front of the camera. The equivalent of a stereo pair of images is formed as the left and right halves of a single CCD image using the biprism. The system is therefore cheap and extremely easy to calibrate, since it requires only one CCD camera. An additional advantage of the geometrical set-up is that corresponding features automatically lie on the same scanline. The single camera and biprism yield a simple stereo system for which correspondence is very easy and which is accurate for nearby objects in a small field of view. Since we use only a single lens, calibration of the system is greatly simplified, because we need to estimate only one focal length and one center of projection. Given the parameters of the biprism-stereo camera system, we can recover the depth of an object using only the disparity between corresponding points.
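Once calibrated, depth recovery reduces to the standard rectified-stereo relation; a trivial sketch, with the effective baseline treated as a known calibration output (the numbers below are invented):

```python
def depth_from_disparity(d_pixels, focal_px, baseline_m):
    """Standard rectified-stereo depth: Z = f * b / d. For the biprism
    system the two half-images act like a rectified stereo pair whose
    effective baseline is fixed by the prism angle and refractive index
    (assumed here to come out of calibration)."""
    return focal_px * baseline_m / d_pixels

# e.g. 800 px focal length, 40 mm effective baseline, 16 px disparity:
z = depth_from_disparity(16.0, 800.0, 0.040)   # ~2.0 m
```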
{"title":"A biprism-stereo camera system","authors":"D. Lee, In-So Kweon, R. Cipolla","doi":"10.1109/CVPR.1999.786921","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786921","url":null,"abstract":"In this paper we propose a novel and practical stereo camera system that uses only one camera and a biprism placed in front of the camera. The equivalent of a stereo pair of images is formed as the left and right halves of a single CCD image using a biprism. The system is therefore cheap and extremely easy to calibrate since it requires only one CCD camera. An additional advantage of the geometrical set-up is that corresponding features lie on the same scanline automatically. The single camera and biprism have led to a simple stereo system for which correspondence is very easy and which is accurate for nearby objects in a small field of view. Since we we only, a single lens, calibration of the system is greatly simplified. This is due to the fact that we need to estimate only one focal length and one center of projection. Given the parameters in the biprism-stereo camera system, we can recover the depth of the object using only the disparity between the corresponding points.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"44 1","pages":"-87 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88269579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measurement of surface orientations of transparent objects using polarization in highlight
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.786967 | Pages: 381-386, Vol. 1
Megumi Saito, Yoichi Sato, K. Ikeuchi, H. Kashiwagi
This paper proposes a method for obtaining the surface orientations of transparent objects using polarization in highlights. Since the highlight, the specular component of light reflected from an object, is observed only near the specular direction, it appears only on limited parts of the object surface. In order to obtain orientations over the whole object surface, we employ a spherical extended light source. This paper reports the experimental apparatus, a shape recovery algorithm, and its performance evaluation.
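The physical cue can be sketched from the Fresnel equations: the degree of polarization of a specular highlight is a function of the incidence angle, so measuring it constrains the surface normal. A minimal illustration for a dielectric (the refractive index is an assumed typical value, and this is not the paper's full recovery algorithm):

```python
import numpy as np

def specular_dop(theta_i, n=1.5):
    """Degree of polarization of specularly reflected light from a
    dielectric of refractive index n at incidence angle theta_i
    (radians, 0 < theta_i < pi/2):
        rho = (Rs - Rp) / (Rs + Rp)
    with Rs, Rp the Fresnel reflectances. Inverting rho(theta_i) gives
    the incidence angle and hence constrains the surface normal, up to
    an ambiguity resolved by the known geometry of the extended source."""
    theta_t = np.arcsin(np.sin(theta_i) / n)          # Snell's law
    rs = (np.sin(theta_i - theta_t) / np.sin(theta_i + theta_t)) ** 2
    rp = (np.tan(theta_i - theta_t) / np.tan(theta_i + theta_t)) ** 2
    return (rs - rp) / (rs + rp + 1e-12)

# rho peaks at 1 at the Brewster angle, where Rp vanishes:
print(specular_dop(np.arctan(1.5)))   # ~1.0 for n = 1.5
```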
{"title":"Measurement of surface orientations of transparent objects using polarization in highlight","authors":"Megumi Saito, Yoichi Sato, K. Ikeuchi, H. Kashiwagi","doi":"10.1109/CVPR.1999.786967","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786967","url":null,"abstract":"This paper proposes a method for obtaining surface orientations of transparent objects using polarization in highlight. Since the highlight, the specular component of reflection light from objects, is observed only near the specular direction, it appears merely on limited parts on an object surface. In order to obtain orientations of a whole object surface, we employ a spherical extended light source. This paper reports its experimental apparatus, a shape recovery algorithm, and its performance evaluation.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"72 1","pages":"381-386 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82393288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}