Combinatorics and Image Processing
A. Bretto, J. Azema, H. Cherifi, and B. Laget
Graphical Models and Image Processing 59(5), pp. 265–277, September 1997. doi:10.1006/gmip.1997.0437

In this paper, we introduce a combinatorial image model based on hypergraph theory. Hypergraph theory provides an efficient formal framework for developing image processing applications such as segmentation. Under the assumption that a hypergraph satisfies the Helly property, we develop a segmentation algorithm that partitions the image by inspecting packets of pixels. This process is controlled by a homogeneity criterion. We also present a preprocessing algorithm that ensures that the hypergraph associated with any image satisfies the Helly property. We show that the algorithm is convergent. A performance analysis of the model and of the segmentation algorithm is included.
A Hierarchical Model for Multiresolution Surface Reconstruction
Andreas Voigtmann, Ludger Becker, and Klaus Hinrichs
Graphical Models and Image Processing 59(5), pp. 333–348, September 1997. doi:10.1006/gmip.1997.0436

The approximation of topographical surfaces is required in a variety of disciplines, for example, computer graphics and geographic information systems (GIS). The constrained Delaunay pyramid is a hierarchical model for approximating 2½-dimensional surfaces at a variety of predefined resolutions. The topographical data are given by a set of three-dimensional points, together with an additional set of nonintersecting line segments describing linear surface features such as valleys, ridges, and coastlines, which constrain the representation. The approximation is obtained by computing a constrained Delaunay triangulation for each resolution. The model generalizes the constraints at coarse resolutions. Due to its structure, the constrained Delaunay pyramid efficiently supports browsing and zooming in large data sets stored in the database systems underlying a GIS. For very large data sets, a divide-and-conquer approach allows the constrained Delaunay pyramid to be computed on secondary storage.
Directional Distance Transforms and Height Field Preprocessing for Efficient Ray Tracing
David W. Paglieroni
Graphical Models and Image Processing 59(4), pp. 253–264, July 1997. doi:10.1006/gmip.1997.0434

It is known that height field ray tracing efficiency can be improved if the empty space above the height field surface is first parameterized in terms of apex heights and opening angles of inverted cones of empty space whose vertical axes are regularly spaced. Once such a parameterization has been performed, rays can be traversed in steps across inverted cones of empty space rather than across successive height field grid cells. As the cone opening angles increase, ray tracing efficiency tends to improve because steps along rays across the inverted cones get longer. Circular horizontal cross-sections of an inverted cone can be divided into contiguous nonoverlapping sectors. Given that the inverted cones can contain nothing but empty space, the maximum possible opening angle within any such sector may significantly exceed the opening angle of the inverted cone. It is shown that ray tracing efficiency can be significantly improved by replacing the inverted cones of empty space with cones that have narrow sectors. It is also known that the parameters of the inverted cones can be derived from distance transforms (DTs) of successive horizontal cross-sections of the height field. Each cross-section can be represented as a 2D binary array, whose DT gives the distance from each element to the nearest element of value 1. DTs can be directionalized by requiring the element of value 1 closest to a given element to lie within a sector emanating from that given element. The parameters of inverted cones within specific sectors can be derived from such directional DTs. An efficient new algorithm for generating directional DTs is introduced.
Maximum-Likelihood Estimation for the Two-Dimensional Discrete Boolean Random Set and Function Models Using Multidimensional Linear Samples
John C. Handley and Edward R. Dougherty
Graphical Models and Image Processing 59(4), pp. 221–231, July 1997. doi:10.1006/gmip.1997.0432

The Boolean model is a random set process in which random shapes are positioned according to the outcomes of an independent point process. In the discrete case, the point process is Bernoulli. Estimation is done on the two-dimensional discrete Boolean model by sampling the germ–grain model at widely spaced points. An observation using this procedure consists of jointly distributed horizontal and vertical runlengths. An approximate likelihood of each cross observation is computed. Since the observations are taken at widely spaced points, they are considered independent and are multiplied to form a likelihood function for the entire sampled process. Estimation for the two-dimensional process is done by maximizing the grand likelihood over the parameter space. Simulations on random-rectangle Boolean models show a significant decrease in variance over the method using horizontal and vertical linear samples, each taken at independently selected points. Maximum-likelihood estimation can also be used to fit models to real textures. This method is generalized to estimate parameters of a class of Boolean random functions.
Constructive Fitting and Extraction of Geometric Primitives
Peter Veelaert
Graphical Models and Image Processing 59(4), pp. 233–251, July 1997. doi:10.1006/gmip.1997.0433

We propose a constructive method for fitting and extracting geometric primitives. This method formalizes the merging process of geometric primitives, which is often used in computer vision. Constructive fitting starts from small uniform fits of the data, which are called elemental fits, and uses them to construct larger uniform fits. We present formal results that involve the calculation of the fitting cost, the way in which the elemental fits must be selected, and the way in which they must be combined to construct a large fit. The rules used to combine the elemental fits are very similar to the engineering principles used when building rigid mechanical constructions from rods and joints. In fact, we characterize the quality of a large fit by a rigidity parameter. Because of its bottom-up approach, constructive fitting is particularly well suited for the extraction of geometric primitives when there is a need for a flexible system. To illustrate the main aspects of constructive fitting we discuss the following applications: exact Least Median of Squares fitting, linear regression with a minimal number of elemental fits, the design of a flatness estimator to compute the local flatness of an image, the decomposition of a digital arc into digital straight line segments, and the merging of circle segments.
Parameter Estimation in Hidden Fuzzy Markov Random Fields and Image Segmentation
Fabien Salzenstein and Wojciech Pieczynski
Graphical Models and Image Processing 59(4), pp. 205–220, July 1997. doi:10.1006/gmip.1997.0431

This paper proposes a new unsupervised fuzzy Bayesian image segmentation method based on a recent hidden fuzzy Markov field model. The originality of this model lies in its simultaneous use of Dirac and Lebesgue measures at the class field level, which allows hard and fuzzy pixels to coexist in the same picture. We address the main problem, parameter estimation, with a recent general method for estimation from hidden data, called iterative conditional estimation (ICE), which has been applied successfully to classical segmentation based on hidden Markov fields. The first part of our work involves estimating the parameters that define the Markovian distribution of the noise-free fuzzy picture. We then combine this algorithm with the ICE method in order to estimate all the parameters of the fuzzy picture corrupted with noise. Finally, we combine the parameter estimation step with two segmentation methods, resulting in two unsupervised statistical fuzzy segmentation methods. The efficiency of the proposed methods is tested numerically on synthetic images, and a fuzzy segmentation of a real image of clouds is studied.
Image Coding through D Lattice Quantization of Wavelet Coefficients
Mikhail Shnaider and Andrew P. Papliński
Graphical Models and Image Processing 59(4), pp. 193–204, July 1997. doi:10.1006/gmip.1997.0429

The combination of the wavelet transform and vector quantization has proven to be a powerful technique for image compression. In this paper we discuss an image compression system based on the biorthogonal wavelet transform and lattice vector quantizers. In particular, we consider D-type lattices which, as we show, are well suited for encoding the wavelet coefficients. In the experimental part of this work the presented image coding system is tested on general images as well as fingerprints. A comparison of the fingerprint coding results generated by the presented method with the FBI image compression standard shows that our method attains a superior coding speed while maintaining similar signal-to-noise ratio versus compression ratio figures.
A Performance Analysis of Fast Gabor Transform Methods
Troy T. Chinen and Todd R. Reed
Graphical Models and Image Processing 59(3), pp. 117–127, May 1997. doi:10.1006/gmip.1997.0421

Computation of the finite discrete Gabor transform can be accomplished in a variety of ways. Three representative methods (matrix inversion, Zak transform, and relaxation network) were evaluated in terms of execution speed, accuracy, and stability. The relaxation network was the slowest method tested. Its strength lies in the fact that it makes no explicit assumptions about the basis functions; in practice it was found that convergence did depend on basis choice. The matrix method requires a separable Gabor basis (i.e., one that can be generated by taking a Cartesian product of one-dimensional functions), but is faster than the relaxation network by several orders of magnitude. It proved to be a stable and highly accurate algorithm. The Zak–Gabor algorithm requires that all of the Gabor basis functions have exactly the same envelope and gives no freedom in choosing the modulating function. Its execution, however, is very stable, accurate, and by far the most rapid of the three methods tested.
A New Two Successive Process Image Compression Technique Using Subband Coding and JPEG Discrete Cosine Transform Coding
C.P. Liu
Graphical Models and Image Processing 59(3), pp. 179–191, May 1997. doi:10.1006/gmip.1997.0430

This paper proposes a new image compression technique based on the successive application of a 2-D single-sideband analysis/synthesis system and the Joint Photographic Experts Group (JPEG) discrete cosine transform (DCT) lossy transform coder. A 2-D separable single-sideband (SSB) analysis/synthesis system, developed in terms of a 2-D separable weighted overlap-add method of analysis/synthesis that allows overlap between adjacent spatial-domain windows, is first used to reduce the image size in the spatial domain. The JPEG discrete cosine transform is then used to reduce the image size in the frequency domain. These two successive compression stages combine to form a powerful image compressor. Overall, the technique can reduce an image by up to about 97 percent of its original size with little loss of image quality.
Texture Analysis for Enhanced Color Image Quantization
Jefferey A. Shufelt
Graphical Models and Image Processing 59(3), pp. 149–163, May 1997. doi:10.1006/gmip.1997.0428

A traditional problem with color image quantization techniques is their inability to handle smooth variations in intensity and chromaticity, leading to contours in the quantized image. To address this problem, this paper describes new techniques for augmenting the performance of a seminal color image quantization algorithm, the median-cut quantizer. Applying a simple texture analysis method from computer vision in conjunction with the median-cut algorithm using a new variant of a k-d tree, we show that contouring effects can be alleviated without resorting to dithering methods and the accompanying decrease in signal-to-noise ratio. The merits of this approach are evaluated using remotely sensed aerial imagery and synthetically generated scenes.