Optimal nonlinear pattern restoration from noisy binary figures
D. Schonfeld
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pub Date: 1992-06-15. DOI: 10.1109/CVPR.1992.223132

A mathematical framework for the solution of statistical inference problems on a class of random sets is proposed. It is based on a new definition of expected pattern. The least-mean-difference estimator (restoration filter) is proved, under certain conditions, to be equivalent to the minimization of the measure of size (area) of the set-difference between the original pattern and the expected pattern of the estimated (restored) pattern. Consequently, it is proved that, under certain conditions, if the estimator (restoration filter) is unbiased, then it is the least-mean-difference estimator (restoration filter).
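The least-mean-difference criterion can be stated compactly. The rendering below, with \(\mu\) for the size (area) measure and \(\triangle\) for the set-difference between patterns, is a plausible notation chosen for illustration and is not taken verbatim from the paper:

```latex
\hat{X}^{\ast}
  = \arg\min_{\hat{X}} \; \mathbb{E}\!\left[\, \mu\!\left( X \,\triangle\, \hat{X} \right) \right],
\qquad
X \triangle \hat{X} = (X \setminus \hat{X}) \cup (\hat{X} \setminus X).
```

Under this reading, the paper's equivalence result relates minimizers of this expected area of disagreement to unbiased estimators in the stated sense.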
An object-oriented approach to template-guided visual inspection
J. Mundy, J. Noble, Constantinos Marinos, V.-D. Nguyen, Aaron J. Heller, J. Farley, A. T. Tran
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pub Date: 1992-06-15. DOI: 10.1109/CVPR.1992.223160

The concepts and design issues that provide the basis for the I²F (image interpretation foundations) system are described. The I²F system combines object-oriented design for machine vision software with constraint-based geometric modeling into a flexible and effective system for automatic template-guided visual inspection. Object-oriented design for 2-D geometry-based image analysis is discussed, and results from processing experimental X-ray data are presented.
Parameter estimation in MRF line process models
S. Nadabar, Anil K. Jain
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pub Date: 1992-06-15. DOI: 10.1109/CVPR.1992.223140

A scheme for the estimation of the Markov random field (MRF) line process parameters that uses geometric CAD models of the objects in the scene is presented. The models are used to generate synthetic images of the objects from random viewpoints. The edge maps computed from the synthesized images are used as training samples to estimate the line process parameters using a least squares method. It is shown that this parameter estimation method is useful for detecting edges in range as well as intensity images.
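The least-squares step can be sketched as follows: counts of clique configurations extracted from training edge maps form a design matrix, and the parameters are fit by ordinary least squares. The clique types, targets, and all names here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def clique_counts(edge_map):
    """Count simple pairwise clique configurations in a binary edge map:
    horizontal continuations, vertical continuations, and edge sites."""
    h = np.sum(edge_map[:, :-1] & edge_map[:, 1:])
    v = np.sum(edge_map[:-1, :] & edge_map[1:, :])
    sites = np.sum(edge_map)
    return np.array([h, v, sites], dtype=float)

# Stand-ins for edge maps that would be computed from synthetically
# rendered views of CAD models.
samples = [rng.integers(0, 2, size=(16, 16)).astype(bool) for _ in range(20)]
A = np.stack([clique_counts(s) for s in samples])   # design matrix (20 x 3)
b = rng.normal(size=len(samples))                   # stand-in energy targets

theta, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(theta.shape)   # one fitted parameter per clique type
```

With real training data, `b` would hold the model-derived targets for each sample rather than random stand-ins.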
Off-line handwritten word recognition (HWR) using a single contextual hidden Markov model
Mou-Yen Chen, A. Kundu, Jian Zhou
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pub Date: 1992-06-15. DOI: 10.1109/CVPR.1992.223205

A complete scheme for totally unconstrained handwritten word recognition based on a single contextual hidden Markov model (HMM) is proposed. The scheme includes a morphology- and heuristics-based segmentation algorithm and a modified Viterbi algorithm that searches for the (l+1)st globally best path based on the previous l best paths. The results of detailed experiments, for which the overall recognition rate is up to 89.4%, are reported.
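The idea of searching beyond the single best path can be sketched with a generic list-Viterbi: at each time step, keep the l highest-scoring partial paths ending in each state, then take the l best complete paths. This is a standard construction offered as an illustration; the paper's modified Viterbi algorithm may differ in detail.

```python
import numpy as np
from heapq import nlargest

def l_best_viterbi(log_pi, log_A, log_B, obs, l=3):
    """Return the l globally best state paths of an HMM as
    (log-score, path) pairs, best first.
    log_pi: (S,) initial log-probs; log_A: (S, S) transition;
    log_B: (S, O) emission; obs: sequence of observation indices."""
    S = len(log_pi)
    # paths[s]: up to l (score, path) pairs for partial paths ending in s
    paths = [[(log_pi[s] + log_B[s, obs[0]], (s,))] for s in range(S)]
    for t in range(1, len(obs)):
        paths = [
            nlargest(l, [(sc + log_A[p[-1], s] + log_B[s, obs[t]], p + (s,))
                         for per_state in paths for sc, p in per_state])
            for s in range(S)
        ]
    return nlargest(l, [c for per_state in paths for c in per_state])

# Tiny two-state example with made-up probabilities.
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.9, 0.1], [0.2, 0.8]])
best = l_best_viterbi(log_pi, log_A, log_B, [0, 1, 0], l=2)
print(best[0][1])   # -> (0, 1, 0), the single best path
```

Keeping l candidates per state at every step is sufficient to recover the l globally best paths, since any global top-l path is top-l among paths ending in its own final state.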
3D shape and light source location from depth and reflectance
T. A. Mancini, L. B. Wolff
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pub Date: 1992-06-15. DOI: 10.1109/CVPR.1992.223195

A methodology for accurate determination of surface normals and light source location from depth and reflectance data is introduced. Estimation of local surface orientation using depth data alone from range finders with standard depth errors can produce significant error, while shape-from-shading using reflectance data alone produces approximate surface orientation results that are highly dependent on the correct initial surface orientation estimates and regularization parameters. Combining these two sources of information gives vastly more accurate surface orientation estimates under general conditions than either one alone, and can also provide better knowledge of local curvature. Novel iterative methods that enforce satisfaction of the image irradiance equation and surface integrability without using regularization are proposed. These iterative methods work when the light source is any finite distance from the object, producing variable incident light orientation over the object.
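The cue-combination idea can be illustrated with a heavily simplified sketch (not the paper's iterative method): a depth-derived normal is nudged to satisfy the Lambertian image irradiance equation I = n·s while staying close to the depth estimate. The penalty form, parameter names, and values are all assumptions.

```python
import numpy as np

def refine_normal(n_depth, s, I, lam=0.1, lr=0.05, steps=200):
    """Gradient descent on (n.s - I)^2 + lam*|n - n_depth|^2,
    renormalizing n to unit length after every step."""
    nd = n_depth / np.linalg.norm(n_depth)
    s = s / np.linalg.norm(s)
    n = nd.copy()
    for _ in range(steps):
        r = n @ s - I                            # image irradiance residual
        n = n - lr * (2 * r * s + 2 * lam * (n - nd))
        n = n / np.linalg.norm(n)
    return n

s = np.array([0.0, 0.0, 1.0])                    # light direction
n_true = np.array([0.3, 0.1, 1.0]); n_true /= np.linalg.norm(n_true)
I = n_true @ s                                   # noise-free Lambertian shading
n_depth = n_true + np.array([0.15, -0.1, 0.0])   # simulated range-finder error
n_ref = refine_normal(n_depth, s, I)

nd = n_depth / np.linalg.norm(n_depth)
print(abs(n_ref @ s - I) < abs(nd @ s - I))      # residual shrinks -> True
```

The reflectance term pulls the normal toward shading consistency; the proximity term keeps it anchored to the depth measurement, so neither cue is trusted alone.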
Face recognition based on depth and curvature features
G. Gordon
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pub Date: 1992-06-15. DOI: 10.1109/CVPR.1992.223253

Face recognition from a representation based on features extracted from range images is explored. Depth and curvature features have several advantages over more traditional intensity-based features. Specifically, curvature descriptors have the potential for higher accuracy in describing surface-based events, are better suited to describing properties of the face in areas such as the cheeks, forehead, and chin, and are viewpoint invariant. Faces are represented as vectors of feature descriptors, and two faces are compared on the basis of their relationship in the feature space. The author provides a detailed analysis of the accuracy and discrimination of the particular features extracted, and of the effectiveness of the recognition system for a test database of 24 faces. Recognition rates are in the range of 80% to 100%. In many cases, feature accuracy is limited more by surface resolution than by the extraction process.
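Comparison in feature space can be sketched as nearest-neighbour matching over feature vectors. The gallery identities, feature values, and the plain Euclidean metric below are illustrative assumptions, not the paper's actual descriptors.

```python
import numpy as np

# Hypothetical gallery: each face is a vector of depth/curvature-style
# feature descriptors (values are made up for illustration).
gallery = {
    "face_a": np.array([1.2, 0.8, 3.1, 0.02]),
    "face_b": np.array([0.9, 1.1, 2.7, 0.05]),
    "face_c": np.array([1.5, 0.7, 3.4, 0.01]),
}

def recognize(probe, gallery):
    """Return the gallery identity whose feature vector is closest (L2)."""
    return min(gallery, key=lambda k: np.linalg.norm(gallery[k] - probe))

probe = np.array([1.18, 0.82, 3.05, 0.02])   # a noisy measurement of face_a
print(recognize(probe, gallery))             # -> face_a
```

In practice the features would need consistent scaling (or a metric such as Mahalanobis distance) so that no single descriptor dominates the comparison.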
Recognition of motion from temporal texture
R. Polana, R. Nelson
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pub Date: 1992-06-15. DOI: 10.1109/CVPR.1992.223216

A method of visual motion recognition applicable to a range of naturally occurring motions characterized by spatial and temporal uniformity is described. The underlying motivation is the observation that, for objects that typically move, it is frequently easier to identify them when they are moving than when they are stationary. Specifically, it is shown that certain statistical spatial and temporal features that can be derived from approximations to the motion field have invariant properties, and can be used to classify regional activities such as windblown trees, ripples on water, or chaotic fluid flow that are characterized by complex, non-rigid motion. The technique is referred to as temporal texture analysis, in analogy to the techniques developed to classify gray-scale textures. The techniques are demonstrated on a number of real-world image sequences containing complex movement. The work has practical application in monitoring and surveillance, and as a component of a sophisticated visual system.
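A rough sketch of such statistical motion features (illustrative, not the paper's exact statistics): from a short image sequence, approximate normal-flow magnitudes and summarize them, together with a flow-weighted direction histogram, as a fixed-length feature vector suitable for classification.

```python
import numpy as np

def temporal_texture_features(frames, n_bins=4):
    """Compute a small feature vector of motion statistics from a
    (T, H, W) image sequence: mean and std of an approximate
    normal-flow magnitude, plus a flow-weighted direction histogram."""
    frames = np.asarray(frames, dtype=float)
    dt = np.diff(frames, axis=0)                 # temporal derivative
    gy, gx = np.gradient(frames[0])              # spatial gradient (first frame)
    mag = np.hypot(gx, gy) + 1e-8                # avoid division by zero
    nf = np.abs(dt).mean(axis=0) / mag           # normal-flow approximation |I_t|/|grad I|
    ang = np.arctan2(gy, gx)                     # gradient direction
    hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi),
                           weights=nf, density=True)
    return np.concatenate([[nf.mean(), nf.std()], hist])

seq = np.random.default_rng(1).random((5, 32, 32))   # stand-in sequence
feats = temporal_texture_features(seq)
print(feats.shape)                                   # -> (6,)
```

A classifier (e.g. nearest neighbour over such vectors) could then separate activities like rippling water from windblown foliage, in the spirit of gray-scale texture classification.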
Neural network models for illusory contour perception
J. Skrzypek, B. Ringer
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pub Date: 1992-06-15. DOI: 10.1109/CVPR.1992.223202

A physiologically motivated model of illusory contour perception is examined by simulating a neural network architecture that was tested with gray-level images. The results indicate that a model that combines a bottom-up feature aggregation strategy with recurrent processing is best suited for describing this type of perceptual completion.
Weak Lambertian assumption for determining cylindrical shape and pose from shading and contour
M. Asada, Takayuki Nakamura, Y. Shirai
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pub Date: 1992-06-15. DOI: 10.1109/CVPR.1992.223189

A weak Lambertian assumption is proposed and used to determine the shape and pose of cylindrical objects from a monocular intensity image. The method does not require knowledge of the lighting conditions (light intensity and direction), surface properties, or albedos. Experimental results for both synthesized and real images, demonstrating the validity of the method, are presented.
Predicting expected gray level statistics of opened signals
W. Costa, R. Haralick
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pub Date: 1992-06-15. DOI: 10.1109/CVPR.1992.223136

The opening of a model signal with a convex, zero-height structuring element is studied empirically. Experiments are performed in which the input signal model parameters and the opening length are varied over an acceptable range, and the corresponding gray-level distributions in the opened signal are fit to Pearson distributions. Regressions are then used to relate the Pearson distribution parameters to the input parameters, resulting in equations that may be used to predict the effect of an opening. Characterization experiments show that the maximum absolute errors between actual and predicted cumulative distributions using these regression equations have a mean of 0.036 and a standard deviation of 0.011 (for a range of zero to one); the worst-case maximum absolute error encountered between the cumulative distributions is 0.066.
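The operation being characterized, a gray-level opening with a flat (zero-height) structuring element, can be sketched in one dimension as an erosion (sliding minimum) followed by a dilation (sliding maximum). This minimal version uses only the "valid" window positions, so the output is shorter than the input by 2*(k-1) samples.

```python
import numpy as np

def erode(f, k):
    """Sliding minimum over windows of length k (valid positions only)."""
    return np.array([f[i:i + k].min() for i in range(len(f) - k + 1)])

def dilate(f, k):
    """Sliding maximum over windows of length k (valid positions only)."""
    return np.array([f[i:i + k].max() for i in range(len(f) - k + 1)])

def opening(f, k):
    """Flat opening: removes peaks narrower than the SE length k."""
    return dilate(erode(f, k), k)

# A 1-wide spike of height 5 and a 3-wide plateau of height 3:
f = np.array([0, 0, 5, 0, 0, 3, 3, 3, 0, 0], dtype=float)
print(opening(f, 3))   # -> [0. 0. 0. 3. 3. 3.]  (spike removed, plateau kept)
```

This peak-clipping behaviour is exactly what shifts the gray-level distribution of the opened signal, which is the effect the regression equations above are built to predict.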