From partial derivatives of 3-D density images to ridge lines
O. Monga, S. Benayoun, O. Faugeras
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pub Date: 1992-06-15, DOI: 10.1117/12.131072
Three-dimensional edge detection in voxel images is used to locate points corresponding to the surfaces of 3-D structures, and the local geometry of these surfaces is characterized in order to extract points or lines that may be used by registration and tracking procedures. Typically, second-order differential characteristics of the surfaces must be calculated. To avoid the problem of establishing links between 3-D edge detection and local surface approximation, it is proposed to compute the curvatures at locations designated as edge points, using the partial derivatives of the image directly. By assuming that the surface is defined locally by an iso-intensity contour, the curvatures can be calculated directly and the local curvature extrema (ridge points) characterized from the first, second, and third derivatives of the gray-level function. These partial derivatives can be computed using the edge detection operators. Experimental results obtained using real X-ray scanner data are presented.
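The curvature-from-derivatives computation the abstract describes can be made concrete with the standard level-set identity (the textbook form, not necessarily the paper's exact notation): for an iso-intensity surface I(x, y, z) = c, the mean curvature follows directly from the image gradient and Hessian.

```latex
% Mean curvature of the iso-surface I(x,y,z) = c, in terms of the
% image gradient \nabla I and Hessian \mathcal{H}_I; the overall sign
% depends on the chosen orientation of the surface normal.
2H \;=\; \nabla \cdot \frac{\nabla I}{\|\nabla I\|}
   \;=\; \frac{\|\nabla I\|^{2}\,\Delta I \;-\; \nabla I^{\top}\,\mathcal{H}_I\,\nabla I}{\|\nabla I\|^{3}}
```

Since every term is a first- or second-order partial derivative of the gray-level function, the same smoothing operators used for edge detection can supply them, which is the link the abstract exploits.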
On the derivation of geometric constraints in stereo
C. Stewart
Pub Date: 1992-06-15, DOI: 10.1109/CVPR.1992.223177
Probability density functions (PDFs) are derived for many of the geometric measurements on which stereo matching techniques are based, including orientation differences between matching line segments or curves, the gradient of disparity, the directional derivative of disparity, and disparity differences between matches. The resulting PDFs are used to critically examine many existing stereo techniques, and several new techniques based on these PDFs are proposed.
A sequential detection framework for feature tracking within computational constraints
H. Richardson, S. Blostein
Pub Date: 1992-06-15, DOI: 10.1109/CVPR.1992.223238
A unified decision-theoretic framework for automating the establishment of feature point correspondences in a temporally dense sequence of images is discussed. The approach extends a recent sequential detection algorithm to guide the detection and tracking of object feature points through an image sequence. The resulting extended feature tracks provide robust feature correspondences for the estimation of three-dimensional structure and motion over an extended number of image frames.
Task-specific utility in a general Bayes net vision system
R. Rimey, C. Brown
Pub Date: 1992-06-15, DOI: 10.1109/CVPR.1992.223214
TEA is a task-oriented computer vision system that uses Bayes nets and a maximum expected-utility decision rule to choose a sequence of task-dependent and opportunistic visual operations on the basis of their cost and (present and future) benefit. The authors discuss technical problems regarding utilities, present TEA-1's utility function (which approximates a two-step lookahead), and compare it to various simpler utility functions in experiments with real and simulated scenes.
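The maximum expected-utility rule the abstract refers to can be sketched in a few lines. This is a minimal one-step illustration, not TEA-1's actual utility function (which approximates a two-step lookahead); the operation names, costs, and outcome values are hypothetical.

```python
# Sketch of a maximum expected-utility decision rule: choose the visual
# operation whose expected outcome value, minus its execution cost, is
# largest.  All numbers below are illustrative, not from TEA-1.

def choose_operation(operations):
    """operations: list of dicts with 'name', 'cost', and 'outcomes',
    where 'outcomes' is a list of (probability, value) pairs."""
    def expected_utility(op):
        expected_value = sum(p * v for p, v in op["outcomes"])
        return expected_value - op["cost"]
    return max(operations, key=expected_utility)

ops = [
    {"name": "color_histogram", "cost": 1.0,
     "outcomes": [(0.7, 3.0), (0.3, 0.0)]},   # E[value] = 2.1, utility 1.1
    {"name": "template_match", "cost": 2.0,
     "outcomes": [(0.9, 4.0), (0.1, 0.0)]},   # E[value] = 3.6, utility 1.6
]
best = choose_operation(ops)
```

A lookahead version would replace the outcome values with the expected utilities of the decision problems that follow each outcome.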
Constructing perceptual categories
J. Feldman
Pub Date: 1992-06-15, DOI: 10.1109/CVPR.1992.223268
It is shown how a parameterization of the structure of the observed object (interpretable as the space of dimensions of the generative operations that brought the object into existence) entails a lattice that enumerates all the allowable categories and subcategories for that class of object, along with an inferential preference hierarchy among them. Each model is constrained to be generic in its parameterization, so that each node in the lattice stands in for an entire class of objects that, being parameterized the same way, can all be treated as equivalent to one another: the observed object's natural category.
Computational ground and airborne localization over rough terrain
Y. Yacoob, L. Davis
Pub Date: 1992-06-15, DOI: 10.1109/CVPR.1992.223174
An approach for autonomous localization of ground vehicles on natural terrain is proposed. The localization problem is solved using measurements including attitude, heading, and distances to specific environmental points. The algorithm uses random acquisition of distance measurements to prune the possible location(s) of the viewer. The approach is also applicable to airborne localization.
Range image segmentation and fitting by residual consensus
Xinming Yu, T. D. Bui, A. Krzyżak
Pub Date: 1992-06-15, DOI: 10.1109/CVPR.1992.223208
The authors randomly sample appropriate range image points and solve the equations determined by these points for the parameters of a selected primitive type. From K samples, they measure residual consensus to choose the one set of sample points whose equation best fits the largest homogeneous surface patch in the current processing region. Residual consensus is measured by a compressed histogram method that works at various noise levels. The estimated surface patch is then extracted from the processing region to avoid further computation, and a genetic algorithm is used to accelerate the search.
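The sample-and-score loop the abstract describes can be illustrated with a minimal 2-D analogue: repeatedly sample a minimal point set, fit the primitive, and keep the fit with the most robust residual score. The paper works on range images and scores fits with a compressed residual histogram; the line primitive and median-residual score below are stand-ins for illustration only.

```python
import random

def sample_consensus_line(points, n_samples=50, seed=0):
    """Toy residual-consensus fit: sample two points, fit y = a*x + b,
    and keep the fit whose median absolute residual over all points is
    smallest.  The median plays the role of the paper's compressed
    residual histogram as a robust goodness-of-fit score."""
    rng = random.Random(seed)
    best_fit, best_score = None, float("inf")
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # degenerate (vertical) sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = sorted(abs(y - (a * x + b)) for x, y in points)
        score = residuals[len(residuals) // 2]   # median absolute residual
        if score < best_score:
            best_fit, best_score = (a, b), score
    return best_fit, best_score

# Demo: ten points on y = 2x + 1 plus three gross outliers.
points = [(x, 2 * x + 1) for x in range(10)] + [(1, 40), (4, -30), (7, 55)]
(a, b), score = sample_consensus_line(points)
```

Because a majority of the points lie exactly on one line, any sample of two inliers yields a zero median residual, so the recovered fit ignores the outliers; the paper's genetic-algorithm acceleration replaces the blind sampling loop.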
Properties of energy edge detectors
P. Kube
Pub Date: 1992-06-15, DOI: 10.1109/CVPR.1992.223131
The author introduces a framework for investigating the properties of energy edge detectors and uses it to derive several results of interest. He shows a necessary condition on the form of the constituent linear filters in quadratic detectors, subject to some conditions, and demonstrates some limitations of such detectors. It is shown that no quadratic detector can detect an edge at 0 for both a sine wave and a cosine wave, which has implications for detecting narrowband edges with spatially local filters. It is also shown that the scale-space behavior of energy detectors is not well behaved, in that it contains bifurcations as scale increases, i.e., new edges can be created as the image is smoothed.
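For context, an energy (quadratic) edge detector of the kind analyzed here is conventionally built from a quadrature pair of linear filters; the standard form (not Kube's specific notation) is:

```latex
% Local energy from a quadrature pair: an even-symmetric filter g_e and
% its odd-symmetric (Hilbert-transform) partner g_o, applied to the
% signal f by convolution; edges are marked at local maxima of E.
E(x) \;=\; \bigl[(f * g_e)(x)\bigr]^{2} \;+\; \bigl[(f * g_o)(x)\bigr]^{2}
```

The sine/cosine result above says that no choice of such a filter pair can place the energy maximum at the origin for both phases of a narrowband input.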
A simple algorithm for shape from shading
M. Bichsel, A. Pentland
Pub Date: 1992-06-15, DOI: 10.1109/CVPR.1992.223150
A shape-from-shading algorithm that recovers depth from a brightness image, typically in fewer than ten iterations, is described. The algorithm, a simplification of that of J. Oliensis and P. Dupuis (1991), is based on a minimum-downhill principle that guarantees continuous surfaces and stable results. It is applicable to a broad variety of objects and reflectance maps.
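As background for the shape-from-shading problem (a special case, not the paper's general reflectance-map formulation): for a Lambertian surface z(x, y) lit along the viewing direction, the image irradiance equation reduces to an eikonal equation in the surface gradient p = z_x, q = z_y.

```latex
I(x,y) \;=\; \frac{1}{\sqrt{1 + p^{2} + q^{2}}}
\qquad\Longrightarrow\qquad
p^{2} + q^{2} \;=\; \frac{1}{I(x,y)^{2}} \;-\; 1
```

Propagating depth from brightness maxima under a constraint of this kind is what iterative schemes such as the minimum-downhill principle make stable.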
Shape from focus system
S. Nayar
Pub Date: 1992-06-15, DOI: 10.1109/CVPR.1992.223259
A shape-from-focus method that uses different focus levels to obtain a sequence of object images is described. A sum-modified-Laplacian operator is developed to provide local measures of the quality of image focus. The operator is applied to the image sequence to determine a set of focus measures at each image point. A model of the variation of focus measure values due to defocusing is developed and used by a depth estimation algorithm to interpolate focus measure values and obtain accurate depth estimates. A fully automated system, implemented with an optical microscope and tested on a variety of industrial samples, is described.
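The sum-modified-Laplacian focus measure can be sketched compactly. The absolute second differences follow the operator's published idea (preventing the cancellation that occurs in a plain Laplacian), but the window, step, and threshold defaults below are illustrative choices, not the system's tuned values.

```python
def modified_laplacian(img, x, y, step=1):
    """Modified Laplacian: absolute second differences in x and y are
    summed, so opposite-signed terms cannot cancel as they can in the
    ordinary Laplacian."""
    return (abs(2 * img[y][x] - img[y][x - step] - img[y][x + step]) +
            abs(2 * img[y][x] - img[y - step][x] - img[y + step][x]))

def sum_modified_laplacian(img, x, y, window=1, step=1, threshold=0):
    """Focus measure at (x, y): accumulate the modified Laplacian over a
    (2*window+1)^2 neighbourhood, keeping values at or above threshold."""
    total = 0
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            ml = modified_laplacian(img, x + dx, y + dy, step)
            if ml >= threshold:
                total += ml
    return total

# A sharp vertical step scores high; a constant patch scores zero.
step_img = [[0 if x < 2 else 10 for x in range(5)] for _ in range(5)]
flat_img = [[5] * 5 for _ in range(5)]
```

In a full shape-from-focus pipeline, this measure is evaluated at each pixel across the image sequence, and the focus level maximizing it (after model-based interpolation) gives the depth estimate.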