
Latest publications in Graphical Models and Image Processing

Combinatorics and Image Processing
Pub Date : 1997-09-01 DOI: 10.1006/gmip.1997.0437
A. Bretto, J. Azema, H. Cherifi, B. Laget

In this paper, we introduce an image combinatorial model based on hypergraph theory. Hypergraph theory is an efficient formal frame for developing image processing applications such as segmentation. Under the assumption that a hypergraph satisfies the Helly property, we develop a segmentation algorithm that partitions the image by inspecting packets of pixels. This process is controlled by a homogeneity criterion. We also present a preprocessing algorithm that ensures that the hypergraph associated with any image satisfies the Helly property. We show that the algorithm is convergent. A performance analysis of the model and of the segmentation algorithm is included.

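For intuition, the Helly property the segmentation algorithm assumes can be verified by brute force on small hypergraphs: every subfamily of pairwise-intersecting hyperedges must share a common vertex. The sketch below is an illustrative check of the property itself, not the paper's preprocessing algorithm.

```python
from itertools import combinations

def has_helly_property(hyperedges):
    """Brute-force Helly check: every family of pairwise-intersecting
    hyperedges must have a common vertex.  Exponential in the number of
    hyperedges, so only suitable for small examples."""
    sets = [frozenset(e) for e in hyperedges]
    for r in range(2, len(sets) + 1):
        for family in combinations(sets, r):
            pairwise = all(a & b for a, b in combinations(family, 2))
            if pairwise and not frozenset.intersection(*family):
                return False
    return True

# Three edges that intersect pairwise but share no common vertex
# violate the Helly property.
print(has_helly_property([{1, 2}, {2, 3}, {1, 3}]))  # → False
print(has_helly_property([{1, 2}, {1, 3}, {1, 4}]))  # → True
```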
Graphical Models and Image Processing, Vol. 59, No. 5, pp. 265–277.
Citations: 38
A Hierarchical Model for Multiresolution Surface Reconstruction
Pub Date : 1997-09-01 DOI: 10.1006/gmip.1997.0436
Andreas Voigtmann , Ludger Becker, Klaus Hinrichs

The approximation of topographical surfaces is required in a variety of disciplines, for example, computer graphics and geographic information systems (GIS). The constrained Delaunay pyramid is a hierarchical model for approximating 2½-dimensional surfaces at a variety of predefined resolutions. Basically, the topographical data are given by a set of three-dimensional points, but an additional set of nonintersecting line segments describing linear surface features like valleys, ridges, and coast lines is required to constrain the representation. The approximation is obtained by computing a constrained Delaunay triangulation for each resolution. The model generalizes the constraints at coarse resolutions. Due to its structure, the constrained Delaunay pyramid efficiently supports browsing and zooming in large data sets stored in database systems underlying the GIS. For very large data sets, a divide-and-conquer approach allows the computation of the constrained Delaunay pyramid on secondary storage.

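One level of such a pyramid is essentially a Delaunay triangulation of the terrain samples with heights attached to the vertices. A minimal sketch using SciPy, whose `Delaunay` is unconstrained (so the breakline constraints central to the paper are not enforced here):

```python
import numpy as np
from scipy.spatial import Delaunay

# Random terrain samples (x, y) with heights z; triangulating the
# planar points and attaching z to each vertex yields one resolution
# level of a triangulated irregular network (TIN).
rng = np.random.default_rng(0)
xy = rng.random((50, 2))
z = np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1])

tri = Delaunay(xy)  # unconstrained Delaunay triangulation
print(tri.simplices.shape[1])  # each simplex is a triangle: 3 vertices
```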
Graphical Models and Image Processing, Vol. 59, No. 5, pp. 333–348.
Citations: 17
Directional Distance Transforms and Height Field Preprocessing for Efficient Ray Tracing
Pub Date : 1997-07-01 DOI: 10.1006/gmip.1997.0434
David W. Paglieroni

It is known that height field ray tracing efficiency can be improved if the empty space above the height field surface is first parameterized in terms of apex heights and opening angles of inverted cones of empty space whose vertical axes are regularly spaced. Once such a parameterization has been performed, rays can be traversed in steps across inverted cones of empty space rather than across successive height field grid cells. As the cone opening angles increase, ray tracing efficiency tends to improve because steps along rays across the inverted cones get longer. Circular horizontal cross-sections of an inverted cone can be divided into contiguous nonoverlapping sectors. Given that the inverted cones can contain nothing but empty space, the maximum possible opening angle within any such sector may significantly exceed the opening angle of the inverted cone. It is shown that ray tracing efficiency can be significantly improved by replacing the inverted cones of empty space with cones that have narrow sectors. It is also known that the parameters of the inverted cones can be derived from distance transforms (DTs) of successive horizontal cross-sections of the height field. Each cross-section can be represented as a 2D binary array, whose DT gives the distance from each element to the nearest element of value 1. DTs can be directionalized by requiring the element of value 1 closest to a given element to lie within a sector emanating from that given element. The parameters of inverted cones within specific sectors can be derived from such directional DTs. An efficient new algorithm for generating directional DTs is introduced.

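The distance transform underlying the cone parameterization is, per cross-section, the distance from each cell to the nearest 1-valued cell, optionally restricted to an angular sector. A brute-force sketch (quadratic time, unlike the efficient algorithm the paper introduces):

```python
import math

def directional_dt(grid, sector=None):
    """Brute-force distance transform of a binary grid: distance from
    each cell to the nearest cell of value 1.  If `sector` is given as
    (lo, hi) in radians, only 1-cells whose direction from the query
    cell lies inside the sector are considered."""
    rows, cols = len(grid), len(grid[0])
    ones = [(r, c) for r in range(rows) for c in range(cols) if grid[r][c]]
    out = [[math.inf] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for (r1, c1) in ones:
                if sector is not None and (r1, c1) != (r, c):
                    ang = math.atan2(r1 - r, c1 - c) % (2 * math.pi)
                    if not (sector[0] <= ang <= sector[1]):
                        continue
                out[r][c] = min(out[r][c], math.hypot(r1 - r, c1 - c))
    return out

g = [[0, 0, 1],
     [0, 0, 0],
     [1, 0, 0]]
print(round(directional_dt(g)[1][1], 3))  # → 1.414 (nearest 1 at sqrt(2))
```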
Graphical Models and Image Processing, Vol. 59, No. 4, pp. 253–264.
Citations: 14
Maximum-Likelihood Estimation for the Two-Dimensional Discrete Boolean Random Set and Function Models Using Multidimensional Linear Samples
Pub Date : 1997-07-01 DOI: 10.1006/gmip.1997.0432
John C. Handley , Edward R. Dougherty

The Boolean model is a random set process in which random shapes are positioned according to the outcomes of an independent point process. In the discrete case, the point process is Bernoulli. Estimation is done on the two-dimensional discrete Boolean model by sampling the germ–grain model at widely spaced points. An observation using this procedure consists of jointly distributed horizontal and vertical runlengths. An approximate likelihood of each cross observation is computed. Since the observations are taken at widely spaced points, they are considered independent and are multiplied to form a likelihood function for the entire sampled process. Estimation for the two-dimensional process is done by maximizing the grand likelihood over the parameter space. Simulations on random-rectangle Boolean models show significant decrease in variance over the method using horizontal and vertical linear samples, each taken at independently selected points. Maximum-likelihood estimation can also be used to fit models to real textures. This method is generalized to estimate parameters of a class of Boolean random functions.

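The discrete Boolean model described here (Bernoulli germ process, fixed grains) is straightforward to simulate. This sketch only generates realizations with rectangular grains; it does not implement the paper's likelihood estimation.

```python
import random

def boolean_model(rows, cols, p, grain_w, grain_h, seed=0):
    """Simulate a discrete Boolean model: germs arrive by an i.i.d.
    Bernoulli(p) process on the grid, and each germ is dilated by a
    fixed rectangular grain anchored at its top-left corner."""
    random.seed(seed)
    img = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if random.random() < p:  # a germ occurs at (r, c)
                for dr in range(grain_h):
                    for dc in range(grain_w):
                        if r + dr < rows and c + dc < cols:
                            img[r + dr][c + dc] = 1
    return img

img = boolean_model(32, 32, p=0.05, grain_w=3, grain_h=2)
coverage = sum(map(sum, img)) / (32 * 32)
print(round(coverage, 2))
```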
Graphical Models and Image Processing, Vol. 59, No. 4, pp. 221–231.
Citations: 6
Constructive Fitting and Extraction of Geometric Primitives
Pub Date : 1997-07-01 DOI: 10.1006/gmip.1997.0433
Peter Veelaert

We propose a constructive method for fitting and extracting geometric primitives. This method formalizes the merging process of geometric primitives, which is often used in computer vision. Constructive fitting starts from small uniform fits of the data, which are called elemental fits, and uses them to construct larger uniform fits. We present formal results that involve the calculation of the fitting cost, the way in which the elemental fits must be selected, and the way in which they must be combined to construct a large fit. The rules used to combine the elemental fits are very similar to the engineering principles used when building rigid mechanical constructions with rods and joins. In fact, we will characterize the quality of a large fit by a rigidity parameter. Because of its bottom-up approach constructive fitting is particularly well suited for the extraction of geometric primitives when there is a need for a flexible system. To illustrate the main aspects of constructive fitting we discuss the following applications: exact Least Median of Squares fitting, linear regression with a minimal number of elemental fits, the design of a flatness estimator to compute the local flatness of an image, the decomposition of a digital arc into digital straight line segments, and the merging of circle segments.

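One of the listed applications, exact Least Median of Squares fitting, maps directly onto the elemental-fit idea: each point pair is an elemental fit, and the candidate line minimizing the median squared residual wins. A brute-force sketch (vertical residuals, non-vertical lines only), not the paper's constructive machinery:

```python
from itertools import combinations
from statistics import median

def lmeds_line(points):
    """Exhaustive Least Median of Squares line fit.  Every pair of
    points is an 'elemental fit'; the line through the pair with the
    smallest median of squared vertical residuals is returned."""
    best = None
    for (x1, y1), (x2, y2) in combinations(points, 2):
        if x1 == x2:
            continue  # skip vertical elemental fits
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        med = median((y - (a * x + b)) ** 2 for x, y in points)
        if best is None or med < best[0]:
            best = (med, a, b)
    return best[1], best[2]

# y = 2x with one gross outlier; LMedS ignores the outlier.
pts = [(0, 0), (1, 2), (2, 4), (3, 6), (4, 100)]
a, b = lmeds_line(pts)
print(round(a, 6), round(b, 6))  # → 2.0 0.0
```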
Graphical Models and Image Processing, Vol. 59, No. 4, pp. 233–251.
Citations: 20
Parameter Estimation in Hidden Fuzzy Markov Random Fields and Image Segmentation
Pub Date : 1997-07-01 DOI: 10.1006/gmip.1997.0431
Fabien Salzenstein, Wojciech Pieczynski

This paper proposes a new unsupervised fuzzy Bayesian image segmentation method based on a recent model of hidden fuzzy Markov fields. The originality of this model is to use Dirac and Lebesgue measures simultaneously at the class field level, which allows hard and fuzzy pixels to coexist in the same picture. We propose to solve the main problem of parameter estimation with a recent general estimation method for hidden data, called iterative conditional estimation (ICE), which has been successfully applied in classical segmentation based on hidden Markov fields. The first part of our work involves estimating the parameters defining the Markovian distribution of the noise-free fuzzy picture. We then combine this algorithm with the ICE method in order to estimate all the parameters of the fuzzy picture corrupted with noise. Last, we combine the parameter estimation step with two segmentation methods, resulting in two unsupervised statistical fuzzy segmentation methods. The efficiency of the proposed methods is tested numerically on synthetic images, and a fuzzy segmentation of a real image of clouds is studied.

Graphical Models and Image Processing, Vol. 59, No. 4, pp. 205–220.
Citations: 133
Image Coding through D-Lattice Quantization of Wavelet Coefficients
Pub Date : 1997-07-01 DOI: 10.1006/gmip.1997.0429
Mikhail Shnaider , Andrew P. Papliński

The combination of the wavelet transform and vector quantization has proven to be a powerful technique for image compression. In this paper we discuss an image compression system based on the biorthogonal wavelet transform and lattice vector quantizers. In particular, we consider D-type lattices which, as is shown, are well suited for encoding the wavelet coefficients. In the experimental part of this work the presented image coding system is tested using general-type images as well as fingerprints. The comparison of the fingerprint coding results generated by the presented method with the FBI image compression standard has shown that our method attains a superior speed of coding while maintaining similar figures for signal-to-noise ratio vs compression ratio.

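For intuition about the quantization stage: the nearest point of the D_n lattice (integer vectors with even coordinate sum) can be found with the classic rounding rule of Conway and Sloane. This is a generic sketch of D-lattice quantization, not the paper's full coder:

```python
def quantize_Dn(x):
    """Nearest point of the D_n lattice (integer vectors whose
    coordinates sum to an even number): round every coordinate, and if
    the rounded sum is odd, re-round the coordinate with the largest
    rounding error in the opposite direction."""
    f = [round(v) for v in x]
    if sum(f) % 2 == 0:
        return f
    # index of the worst-rounded coordinate
    i = max(range(len(x)), key=lambda k: abs(x[k] - f[k]))
    f[i] += 1 if x[i] > f[i] else -1
    return f

print(quantize_Dn([0.6, 0.6, 0.1, 0.1]))  # → [1, 1, 0, 0]
print(quantize_Dn([0.6, 0.1, 0.1, 0.1]))  # → [0, 0, 0, 0]
```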
Graphical Models and Image Processing, Vol. 59, No. 4, pp. 193–204.
Citations: 6
A Performance Analysis of Fast Gabor Transform Methods
Pub Date : 1997-05-01 DOI: 10.1006/gmip.1997.0421
Troy T. Chinen , Todd R. Reed

Computation of the finite discrete Gabor transform can be accomplished in a variety of ways. Three representative methods (matrix inversion, Zak transform, and relaxation network) were evaluated in terms of execution speed, accuracy, and stability. The relaxation network was the slowest method tested. Its strength lies in the fact that it makes no explicit assumptions about the basis functions; in practice it was found that convergence did depend on basis choice. The matrix method requires a separable Gabor basis (i.e., one that can be generated by taking a Cartesian product of one-dimensional functions), but is faster than the relaxation network by several orders of magnitude. It proved to be a stable and highly accurate algorithm. The Zak–Gabor algorithm requires that all of the Gabor basis functions have exactly the same envelope and gives no freedom in choosing the modulating function. Its execution, however, is very stable, accurate, and by far the most rapid of the three methods tested.

Graphical Models and Image Processing, Vol. 59, No. 3, pp. 117–127.
Citations: 16
A New Two Successive Process Image Compression Technique Using Subband Coding and JPEG Discrete Cosine Transform Coding
Pub Date : 1997-05-01 DOI: 10.1006/gmip.1997.0430
C.P. Liu

This paper proposes a new image compression technique based on successive application of a 2-D single-sideband analysis/synthesis system and the Joint Photographic Experts Group (JPEG) discrete cosine transform (DCT) lossy transform coder. A 2-D separable single-sideband (SSB) analysis/synthesis system, which is developed in terms of a 2-D separable weighted overlapped-add method of analysis/synthesis and which allows overlap between adjacent spatial domain windows, is used first to reduce the image size in the spatial domain. The JPEG discrete cosine transform is then used to reduce the image size in the frequency domain. These two successive compression processes combine to form a powerful image compressor. The overall compression of images in this technique can reach up to about 97 percent of their original size without much of the image quality being lost.

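The second pass of the coder is JPEG's block DCT. A naive orthonormal 2-D DCT-II, written directly from its definition, illustrates the energy compaction that stage exploits (a didactic sketch, not the paper's implementation):

```python
import math

def dct2(block):
    """Naive orthonormal 2-D DCT-II of an N x N block, the transform
    JPEG applies to 8 x 8 pixel blocks."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A constant block compacts all energy into the DC coefficient:
# alpha(0)^2 * (64 * 10) = 640 / 8 = 80.
flat = [[10.0] * 8 for _ in range(8)]
coeffs = dct2(flat)
print(round(coeffs[0][0], 6))  # → 80.0
```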
Graphical Models and Image Processing, Vol. 59, No. 3, pp. 179–191.
Citations: 1
Texture Analysis for Enhanced Color Image Quantization
Pub Date : 1997-05-01 DOI: 10.1006/gmip.1997.0428
Jefferey A. Shufelt

A traditional problem with color image quantization techniques is their inability to handle smooth variations in intensity and chromaticity, leading to contours in the quantized image. To address this problem, this paper describes new techniques for augmenting the performance of a seminal color image quantization algorithm, the median-cut quantizer. Applying a simple texture analysis method from computer vision in conjunction with the median-cut algorithm using a new variant of a k-d tree, we show that contouring effects can be alleviated without resorting to dithering methods and the accompanying decrease in signal-to-noise ratio. The merits of this approach are evaluated using remotely sensed aerial imagery and synthetically generated scenes.

{"title":"Texture Analysis for Enhanced Color Image Quantization","authors":"Jefferey A. Shufelt","doi":"10.1006/gmip.1997.0428","DOIUrl":"10.1006/gmip.1997.0428","url":null,"abstract":"<div><p>A traditional problem with color image quantization techniques is their inability to handle smooth variations in intensity and chromaticity, leading to contours in the quantized image. To address this problem, this paper describes new techniques for augmenting the performance of a seminal color image quantization algorithm, the median-cut quantizer. Applying a simple texture analysis method from computer vision in conjunction with the median-cut algorithm using a new variant of a k-d tree, we show that contouring effects can be alleviated without resorting to dithering methods and the accompanying decrease in signal-to-noise ratio. The merits of this approach are evaluated using remotely sensed aerial imagery and synthetically generated scenes.</p></div>","PeriodicalId":100591,"journal":{"name":"Graphical Models and Image Processing","volume":"59 3","pages":"Pages 149-163"},"PeriodicalIF":0.0,"publicationDate":"1997-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/gmip.1997.0428","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115263127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
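The median-cut quantizer that the paper above augments can be sketched minimally — an illustrative baseline only, not the paper's texture-aware k-d tree variant; the function name and structure here are assumptions of this sketch:

```python
def median_cut(pixels, n_colors):
    """Quantize a list of (r, g, b) tuples to n_colors representative
    colors by repeatedly splitting the widest color axis at its median."""
    boxes = [list(pixels)]
    while len(boxes) < n_colors:
        box = max(boxes, key=len)           # split the most populated box
        boxes.remove(box)
        spans = [max(p[c] for p in box) - min(p[c] for p in box)
                 for c in range(3)]
        ch = spans.index(max(spans))        # channel with the widest range
        box.sort(key=lambda p: p[ch])
        mid = len(box) // 2                 # median split point
        boxes.append(box[:mid])
        boxes.append(box[mid:])
    # One representative (per-channel mean) color per box.
    return [tuple(sum(p[c] for p in box) // len(box) for c in range(3))
            for box in boxes]

palette = median_cut([(10, 10, 10)] * 3 + [(240, 240, 240)] * 3, 2)
print(sorted(palette))  # [(10, 10, 10), (240, 240, 240)]
```

Because every box is split at the median of a single channel, smooth gradients that fall inside one box collapse to one color — the contouring effect the paper's texture analysis is designed to mitigate.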