Vegetation-Limited Ground-to-Air Surveillance. Fogg D.A. CVGIP: Graphical Models and Image Processing 55(6), 419–427, November 1993. doi:10.1006/cgip.1993.1032

It is possible to estimate the limitations on ground-to-air visibility around a point using digital terrain models in which terrain height is given on a rectangular grid and the vegetation class within defined areas is specified. The screening effect of the vegetation can be estimated better by using the inherent autocorrelation in elevation angle data given as a function of azimuth angle about a fixed point. This type of model is useful when the proposed number of observation sites is too large to warrant surveying them, or when their precise locations cannot be ascertained in advance of the requirement, as is often the case in simulation studies involving mobile observers. The distribution of elevation angle may be non-normal. A procedure is described here to facilitate the use of autoregressive equations, which require normally distributed variables.

Theory and Design of Local Interpolators. Schaum A. CVGIP: Graphical Models and Image Processing 55(6), 464–481, November 1993. doi:10.1006/cgip.1993.1035

This paper shows that an error spectrum can be used to describe the performance of any convolutional interpolator used to shift an oversampled image. This spectrum is linear in the image power spectrum and in an error factor that depends only on the interpolator and the shift. The same form is shown to describe the interpolation of undersampled data, in an average sense. Simple formulas are derived for the error factor in either Fourier or real space, and standard interpolators are evaluated with them. Optimal interpolators are derived for various theoretical spectra: constant in-band, Lorentzian, power law, and Gaussian. Practical methods of interpolator design are devised for use with image spectra that are known only partially or are not easily characterized analytically.

Efficient Stochastic Algorithms on Locally Bounded Image Space. Yang C.D. CVGIP: Graphical Models and Image Processing 55(6), 494–506, November 1993. doi:10.1006/cgip.1993.1037

Stochastic relaxation algorithms in image processing are usually computationally intensive, partly because the images of interest comprise only a small fraction of the total (digital) configuration space. A new locally bounded image subspace is introduced, which is shown to be rich enough to contain most images that are reasonably smooth except for (possibly) sharp discontinuities. New versions of the Gibbs Sampler and Metropolis algorithms are defined on the locally bounded image space, and their asymptotic convergence is proven. Experiments in image restoration and reconstruction demonstrate that these algorithms perform more cost-effectively than the standard versions.

Approximation of Generalized Voronoi Diagrams by Ordinary Voronoi Diagrams. Sugihara K. CVGIP: Graphical Models and Image Processing 55(6), 522–531, November 1993. doi:10.1006/cgip.1993.1039

A numerically robust algorithm for the ordinary Voronoi diagrams is applied to the approximation of various types of generalized Voronoi diagrams. The generalized Voronoi diagrams treated here include Voronoi diagrams for figures, additively weighted Voronoi diagrams, Voronoi diagrams in a river, Voronoi diagrams in a Riemannian plane, and Voronoi diagrams with respect to collision-avoiding shortest paths. The construction of these generalized Voronoi diagrams is reduced to the construction of the ordinary Voronoi diagrams. The methods proposed here can save much time which is otherwise necessary for writing a computer program for each type of generalized Voronoi diagram.
{"title":"Author Index for Volume 55","authors":"","doi":"10.1006/cgip.1993.1042","DOIUrl":"https://doi.org/10.1006/cgip.1993.1042","url":null,"abstract":"","PeriodicalId":100349,"journal":{"name":"CVGIP: Graphical Models and Image Processing","volume":"55 6","pages":"Page 544"},"PeriodicalIF":0.0,"publicationDate":"1993-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/cgip.1993.1042","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136559125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Skew Correction of Document Images Using Interline Cross-Correlation. Yan H. CVGIP: Graphical Models and Image Processing 55(6), 538–543, November 1993. doi:10.1006/cgip.1993.1041

An efficient algorithm is presented in this paper for correcting the skew of text lines in scanned document images. In this method, the cross-correlation between two lines in the image separated by a fixed distance is calculated. The correlation functions for all pairs of lines in the image are accumulated. The shift at which the accumulated cross-correlation function takes its maximum is then used to determine the skew angle. The image is rotated in the opposite direction for skew correction. The correlation function can be calculated without multiplications for binary images, so the algorithm can be implemented very efficiently. The method can be used directly for gray-scale and color images as well as binary images. It has been tested on scanned document images with good results.

A Truncation Method for Computing Walsh Transforms with Applications to Image Processing. Anguh M.M., Martin R.R. CVGIP: Graphical Models and Image Processing 55(6), 482–493, November 1993. doi:10.1006/cgip.1993.1036

We present a method called the Truncation method for computing Walsh-Hadamard transforms of one- and two-dimensional data. In one dimension, the method uses binary trees as a basis for representing the data and computing the transform. In two dimensions, the method uses quadtrees (pyramids), adaptive quadtrees, or binary trees as a basis. We analyze the storage and time complexity of this method in worst and general cases. The results show that the Truncation method degenerates to the Fast Walsh Transform (FWT) in the worst case, while the Truncation method is faster than the Fast Walsh Transform when there is coherence in the input data, as will typically be the case for image data. In one dimension, the performance of the Truncation method for N data samples is between O(N) and O(N log₂ N), and it is between O(N²) and O(N² log₂ N) in two dimensions. Practical results on several images are presented to show that both the expected and actual overall times taken to compute Walsh transforms using the Truncation method are less than those required by a similar implementation of the FWT method.

Deterministic Interactive Particle Models for Image Processing and Computer Graphics. Marroquin J.L. CVGIP: Graphical Models and Image Processing 55(5), 408–417, September 1993. doi:10.1006/cgip.1993.1031

In this paper we present a new class of algorithms for the reconstruction (filtering) of piecewise smooth images. These algorithms are obtained by modeling the image as a deterministic, dynamical system of interacting particles, and they compare favorably with others that are commonly used for the same purpose, with respect both to computational complexity and to the quality of the reconstruction. It is shown that, given a particular choice of the particle interaction potentials, it is possible to select optimal values for the parameters that remain valid for a whole class of problems. Examples of applications to image processing and computer graphics are also given.

Radial Decomposition of Disks and Spheres. Adams R. CVGIP: Graphical Models and Image Processing 55(5), 325–332, September 1993. doi:10.1006/cgip.1993.1024

For people engaged in image analysis without the advantage of parallel processors or specialized hardware, the computational cost of greyscale morphological operations is a major issue. A method known as radial decomposition is presented here which enables dilations or erosions by discs or spheres to be approximated by a series of dilations or erosions by elements defined on line segments. This achieves a reduction in the number of computations involved which is of the order of the radius of the element, thus speeding up such operations as the top hat and rolling ball transformations. The method has been tested successfully and a lookup table is included in this paper enabling a user to incorporate it into an image processing package.

On 3-D Real-Time Perspective Generation from a Multiresolution Photo-Mosaic Data Base. Hooks J.T., Martinsen G.J., Devarajan V. CVGIP: Graphical Models and Image Processing 55(5), 333–345, September 1993. doi:10.1006/cgip.1993.1025

Photo-realistic texturing of computer-generated scenes by mapping a digitized terrain photograph onto a three-dimensional terrain elevation model has many real-time applications. Combat mission rehearsal, for instance, is enhanced by portraying geo-specific terrain. This paper addresses the fundamental pixel processing requirements that result when display-resolution-limited, high-quality, alias-free perspective output imagery is sought and when a set of prefiltered multi-resolution texture maps is used. To this end, a perspective scene generation model is developed first. Then the ramifications of the real-time pixel processing requirements are determined when the perspective eyepoint orientation and altitude are changed. Finally, a trade-off analysis is presented that characterizes the relationship between pixels per frame and the scaling factor used to form the multi-resolution imagery data base.