Title: Image classification in remote sensing using functional link neural networks
Authors: L.M. Liu, M. Manry, F. Amar, M. Dawson, A. Fung
Venue: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 1994
DOI: 10.1109/IAI.1994.336685
Abstract: A new objective function for functional link net classifier design is presented, which has more free parameters than the classical objective function. An iterative minimization technique for the objective function is described which requires the solution of multiple sets of numerically ill-conditioned linear equations. A numerically stable solution to the functional link neural network design equations, which utilizes the conjugate gradient algorithm, is presented. The design method is applied to networks used to classify SAR imagery from remote sensing. The functional link discriminants are seen to outperform Bayes-Gaussian discriminants in the examples.
Title: Frequency domain measurement of meteorological range from aircraft images
Authors: J. Barrios, D. Williams, J. Cogan, J. Smith
Venue: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 1994
DOI: 10.1109/IAI.1994.336684
Abstract: Theoretical work has shown that meteorological range, a standard measure of visibility, can be related to changes in contrast in the spatial domain of a scene and to changes in the non-zero frequencies of its frequency response. The present work employs aircraft images of mountain scenes to show that changes in the energy of non-zero frequencies trace a decaying exponential curve whose logarithmic slope describes the meteorological range for that particular scene.
Title: Artificially intelligent 3D industrial inspection system for metal inspection
Authors: S. Panayiotou, A. Soper
Venue: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 1994
DOI: 10.1109/IAI.1994.336670
Abstract: Industrial inspection systems have been in use for some time. To date, however, these systems have been built specifically for the application in which they will function, which has led to such systems becoming obsolete when the manufacturing process changes. Such systems have also relied on the programmer's competence in selecting appropriate algorithms for image processing and segmentation. This paper presents a generic system that is adaptable to many inspection tasks. It selects algorithms automatically depending on the task at hand and the domain knowledge given.
Title: An algorithm for 3D scene description in an unknown environment
Authors: Y. Dong, T. Chen, L. Sheppard
Venue: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 1994
DOI: 10.1109/IAI.1994.336669
Abstract: Scene description plays a major role in the interpretation of images. In this paper, a novel data- and rule-driven system for 3D segmentation and scene description in an unknown environment is presented. The system generates hierarchies of features that correspond to structural elements such as boundaries and shape classes of individual objects, as well as relationships between objects. It is implemented as an added high-level component to an existing low-level binocular vision system (Don and He, 1988). Based on a pair of matched stereo images produced by that system, 3D segmentation is first performed to group object boundary data into several edge-sets, each of which is believed to belong to one particular object. Gross features of each object are then extracted and stored in an object record. The final structural description of the scene is produced from the information in the object records, a set of rules and a rule implementor. The system is designed to handle partially occluded objects of different shapes and sizes in the 2D images. Experimental results have shown its success in computing both object-level and structural-level descriptions of common man-made objects.
Title: Parameter estimation and applications of a class of Gaussian image models
Authors: G. Dattatreya, Xiaori Fang
Venue: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 1994
DOI: 10.1109/IAI.1994.336689
Abstract: This paper discusses variations of a model of images and develops algorithms for estimation of all the parameters from the raw image data. The model is suitable for some cases of (1) lossy image compression and realistic reconstruction, (2) texture synthesis and identification, (3) classification of remotely sensed data, and (4) analysis of medical images. Each pixel in the image is modeled as an element of a set of very few known intensity levels (henceforth called pixel-classes) plus an independent zero-mean Gaussian random variable. Different statistical structures in the two-dimensional lattice of pixel-classes lead to variations in the model. The image representation problem corresponds to estimation of the parameters of the discrete random field formed by the pixel-classes, and the parameters of the additive Gaussian field. The authors discuss variations of the model and corresponding applications, and develop convergent estimators for all parameters.
Title: Radar imaging using 2D adaptive non-parametric extrapolation and autoregressive modeling
Authors: C. Chen, G. Thomas, B. Flores, S. Cabrera
Venue: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 1994
DOI: 10.1109/IAI.1994.336682
Abstract: Two approaches are described to obtain range-Doppler images using either adaptive weighted norm extrapolation or autoregressive modeling. These approaches are used to extend two-dimensional data in the frequency-space aperture plane. The data collection process is viewed as sampling limited to a two-dimensional window area, corresponding to a set of frequency bounds and observation angles. Image formation is achieved by Fourier processing of the data. The effect of extending this observation window is to increase the resolution in both range and Doppler. The improvement in resolution makes it possible to observe closely spaced point scatterers. Examples using these data extrapolation methods are presented.
Title: An image segmentation technique based on edge-preserving smoothing filter and anisotropic diffusion
Authors: T. Dang, O. Jamet, H. Maître
Venue: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 1994
DOI: 10.1109/IAI.1994.336683
Abstract: An efficient and simple segmentation algorithm is presented which is based on a good edge-preserving smoothing filter and a fast anisotropic diffusion technique. A number of examples are shown to demonstrate the capabilities of this algorithm.
Title: Recognition of 3-D objects on complex backgrounds using model based vision and range images
Authors: E. Natonek, C. Baur
Venue: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 1994
DOI: 10.1109/IAI.1994.336667
Abstract: One of the active research fields in computer vision is the recognition of complex 3D objects. The task of object recognition is tightly bound to background understanding or suppression. Current literature describes top-down approaches as promising but incomplete and bottom-up approaches as not robust. The paper describes a model-based vision system in which a commercial 3D computer graphics system is used for object modeling and visual clue generation. Given the computer-generated model image, a conventional CCD camera image and the corresponding scanned dense 3D range map of the real scene, the object can be located in the scene. The paper deals with how this is done using newly developed segmentation algorithms that extract "focus features" from range images (depth maps) of the scene. The system uses a resolution pyramid and a prediction-verification process. First, a hypothesis is generated at a low-resolution description, giving rough clues to the object's boundaries, position and orientation. These regions of interest are then used as the field of comparison with higher-resolution models. This iterative process is repeated until a given similarity threshold is reached. Next, an intensity image of the model in the scene is created using the available a priori knowledge. Direct correlation is then performed between the model and the "focus feature" of the scene. Illustrative examples of object recognition in simple and complex scenes are presented.
Title: Sensitivity analysis of similarity metrices for image matching
Authors: R. Malla, V. Devarajan
Venue: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 1994
DOI: 10.1109/IAI.1994.336681
Abstract: This paper presents simulation results on the performance of similarity metrics for image matching. Specifically, matching and determining the transformation between slightly rotated images are addressed. The use of a correlation function and an error function as similarity metrics is reexamined when there is a coupling between translation and rotation. The sensitivity of these two metrics in this context is compared. A hybrid iterative strategy is proposed, based on the sensitivity analysis, to enhance the matching accuracy.
Title: Application of spatial grey level dependence methods to digitized mammograms
Authors: B. Aldrich, M. Desai
Venue: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 1994
DOI: 10.1109/IAI.1994.336675
Abstract: The use of spatial grey level dependence (SGLD) methods is proposed for evaluating the textural content of digitized mammograms. In film-screen mammography, the physician uses his awareness of features present on the mammogram to diagnose a disease state or its absence. The image perceived by the physician represents the projection of a 3D object onto film, and certain limitations are imposed by the characteristics of the imaging modality as well as by the means of creating a discrete representation of the image. Spatial grey level dependence methods promise to reveal salient information about the underlying structural elements that indicate disease, and they have the potential to provide additional information relevant to the medical objective. In this paper, statistics computed from the SGLD matrix are used to highlight features of potential medical interest in mammograms. In particular, the local energy and inertia are calculated for malignant and benign lesions. Preliminary results indicate that these measurements can discriminate regions of low textural energy and randomness from regions of high textural energy and randomness; such regions are typically associated with benign and malignant image profiles, respectively. Examples are given in which these techniques are applied to lesions in mammograms digitized at 100 micron spatial resolution and 12-bit grey scale resolution.