Determination of the appropriate degree of smoothing in linear image restoration: a comparison
N. Fortier, Y. Goussard, G. Demoment. Sixth Multidimensional Signal Processing Workshop, 6 September 1989. doi:10.1109/MDSP.1989.97106

The extension to 2-D of three statistical methods used successfully on the 1-D problem has been studied, namely: (1) Lagrange multiplier techniques using properties of the residuals; (2) ordinary and generalized cross-validation techniques using prediction errors; and (3) maximum-likelihood estimation. Particular attention has been paid to implementation problems, and the methods have been compared on both synthetic and real images.
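Of the three criteria compared above, generalized cross-validation is the easiest to sketch. Below is a minimal 1-D circular-deconvolution toy in NumPy (the signal, blur width, noise level, and grid of regularization values are illustrative choices, not taken from the paper): the Tikhonov filter diagonalizes in the DFT basis, so the GCV score is cheap to evaluate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

# Circular Gaussian blur kernel (centered, then rolled so its peak is at 0)
kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
kernel /= kernel.sum()
H = np.fft.fft(np.roll(kernel, -n // 2))
y = np.real(np.fft.ifft(H * np.fft.fft(signal))) + 0.05 * rng.standard_normal(n)

Y = np.fft.fft(y)
H2 = np.abs(H) ** 2

def gcv(lam):
    # Eigenvalues of the influence matrix A(lam) in the DFT basis
    a = H2 / (H2 + lam)
    resid2 = np.sum(np.abs((1 - a) * Y) ** 2) / n   # ||y - A y||^2 via Parseval
    return resid2 / (1 - a.mean()) ** 2             # GCV = RSS / (tr(I-A)/n)^2

lams = np.logspace(-6, 1, 50)
scores = [gcv(l) for l in lams]
best = lams[int(np.argmin(scores))]
```

The minimizing value balances residual size against the effective degrees of freedom trace(A)/n; moving to 2-D replaces the 1-D DFT with a 2-D one and raises the implementation issues the paper focuses on.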
Bootstrap confidence bands for spectra and cross-spectra
D. Politis, Joseph P. Romano, T. Lai. doi:10.1109/MDSP.1989.97045

Summary form only given. Nonparametric bootstrap confidence intervals and bands have been constructed from kernel and lag-window spectral estimators. The results are useful in finite-sample situations, especially when the time series cannot be assumed Gaussian. Monte Carlo simulations have been carried out to compare the bootstrap confidence bands with the asymptotic ones.
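A hedged sketch of one common way to bootstrap a spectral estimate: resample studentized periodogram ordinates around a kernel-smoothed pilot estimate and re-smooth each resample (the exact resampling scheme of the paper may differ; the series, smoothing span, and band level below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
# An MA(2)-type series as test data
x = np.convolve(rng.standard_normal(n + 2), [1.0, 0.6, 0.3], mode="valid")

# Periodogram at Fourier frequencies, dropping DC and Nyquist
per = np.abs(np.fft.rfft(x)) ** 2 / n
per = per[1:-1]
m = per.size

def smooth(p, h=5):
    # Daniell-type moving-average kernel spectral estimate
    k = np.ones(2 * h + 1) / (2 * h + 1)
    return np.convolve(np.pad(p, h, mode="edge"), k, mode="valid")

f_hat = smooth(per)
resid = per / f_hat     # approximately i.i.d. standard-exponential ratios

B = 200
boot = np.empty((B, m))
for b in range(B):
    per_b = f_hat * rng.choice(resid, size=m, replace=True)
    boot[b] = smooth(per_b)

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)   # pointwise 95% band
```

The band endpoints are pointwise percentiles of the resampled smoothed spectra; simultaneous bands widen these using the bootstrap distribution of the maximal deviation.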
Arithmetic coding for lossless and loss-inducing image compression
C. D. Hardin, S. Zabele. doi:10.1109/MDSP.1989.97129

Summary form only given. Arithmetic coding has been applied to provide lossless and loss-inducing compression of optical, infrared, and synthetic aperture radar imagery of natural scenes. Several different contexts have been considered, including both predictive and nonpredictive variations, with both image-dependent and image-independent variations. In lossless coding experiments, arithmetic coding algorithms have been shown to outperform comparable variants of both Huffman and Lempel-Ziv-Welch coding algorithms by approximately 0.5 b/pixel. For image-dependent contexts constructed from high-order autoregressive predictors, arithmetic coding algorithms provide compression ratios as high as four. Contexts constructed from lower-order autoregressive predictors provide compression ratios nearly as great as those of the higher-order predictors, with favorable computational trade-offs. Compression performance variations have been shown to reflect the inherent sensor-dependent differences in the stochastic structure of the imagery. Arithmetic coding has also been demonstrated to be a valuable addition to loss-inducing compression techniques.
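For reference, a toy order-0 arithmetic coder over exact rationals, showing the interval-narrowing idea only; production coders, including those benchmarked above, use finite-precision interval arithmetic and adaptive, context-conditioned models, all of which this sketch omits.

```python
from fractions import Fraction
from collections import Counter

def encode(msg, probs):
    # Narrow [low, low + width) by each symbol's cumulative sub-interval
    cum, c = {}, Fraction(0)
    for s, p in probs.items():
        cum[s] = c
        c += p
    low, width = Fraction(0), Fraction(1)
    for s in msg:
        low += width * cum[s]
        width *= probs[s]
    return low + width / 2   # any rational inside the final interval decodes

def decode(code, probs, n):
    out = []
    for _ in range(n):
        c = Fraction(0)
        for s, p in probs.items():
            if c <= code < c + p:
                out.append(s)
                code = (code - c) / p   # rescale to the chosen sub-interval
                break
            c += p
    return "".join(out)

msg = "aababcabaa"
counts = Counter(msg)
probs = {s: Fraction(k, len(msg)) for s, k in counts.items()}
code = encode(msg, probs)
decoded = decode(code, probs, len(msg))
```

With exact fractions the decoder recovers the message bit-for-bit; the code length of the final interval approaches the empirical entropy of the source, which is the advantage over Huffman's whole-bit-per-symbol granularity.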
Spectrum estimation of two-dimensional signals via the Radon transform
R. Easton. doi:10.1109/MDSP.1989.97047

Summary form only given. The Radon transform has been applied to spectrum estimation of noisy 2-D signals. Estimation of the spectrum of noisy temporal signals is a classic signal processing problem, and a number of estimation algorithms have been developed. These include periodograms, the Blackman-Tukey method, and autoregressive moving average (ARMA) models. Extension of the first two algorithms to multidimensional signals is straightforward. However, the additional available degrees of freedom affect the applicability of ARMA models to multidimensional problems. It has been demonstrated that standard 1-D ARMA models can be applied to the individual projections and combined to estimate the 2-D spectrum. Limitations of the algorithm have been explored.
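The key fact that makes the projection approach work is the projection-slice theorem: the 1-D transform of a projection is a central slice of the 2-D transform. A minimal NumPy check for the zero-angle projection (the image here is arbitrary random data):

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.standard_normal((64, 64))

# Projection-slice theorem: the 1-D DFT of a projection (sum along rows)
# equals the ky = 0 central slice of the 2-D DFT.
proj = img.sum(axis=0)            # projection onto the x-axis
slice_1d = np.fft.fft(proj)
central = np.fft.fft2(img)[0, :]  # ky = 0 row of the 2-D spectrum
```

This identity is why 1-D ARMA spectra fitted to the individual projections can be assembled, slice by slice, into an estimate of the 2-D spectrum.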
On the localization performance measure and optimal edge detection
H. Tagare, R. Figueiredo. doi:10.1117/12.19530

Summary form only given, as follows. Two measures have been suggested in the literature to characterize the localization performance of an edge detector: the first proposed by Abdou and Pratt (1979) and the second by Canny (1986). The limitations of both localization measures are shown: the former is heuristic, while Canny's is not correctly formulated. The localization problem has been reformulated with the help of the theory of zero-crossings of stochastic processes. The proposed measure of localization of an edge detector is the extent to which it reduces the density of response maxima with distance from the true edge. With this localization measure and a constraint on the width of the filter, the optimal linear filter for edge detection can be shown to be the derivative of a Gaussian.
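A small sketch of the concluding claim: filtering an ideal step edge with a derivative-of-Gaussian kernel produces a response whose extremum sits at (or within a sample of) the true edge location. The kernel width and edge position below are arbitrary choices, not values from the paper.

```python
import numpy as np

def deriv_gaussian(sigma, radius):
    # Sampled first derivative of a Gaussian of scale sigma
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return -x / sigma ** 2 * g

edge = np.concatenate([np.zeros(50), np.ones(50)])   # ideal step at index 50
h = deriv_gaussian(sigma=2.0, radius=8)
resp = np.convolve(edge, h, mode="same")
loc = int(np.argmax(np.abs(resp)))                    # response extremum
```

In noise the extremum wanders; the paper's localization measure quantifies how quickly the density of such spurious maxima decays away from the true edge.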
Anisotropic diffusion processes in early vision
Pietro Perona. doi:10.1109/MDSP.1989.97028

Summary form only given. Images often contain information at a number of different scales of resolution, so the definition and generation of a good scale space is a key step in early vision. A scale space has been defined in which object boundaries are respected and smoothing takes place only within those boundaries, avoiding the inaccuracies introduced by the usual method of low-pass filtering the image with Gaussian kernels. The new scale space is generated by solving a nonlinear diffusion differential equation forward in time (the scale parameter). The original image is used as the initial condition, and the conduction coefficient c(x, y, t) varies in space and scale as a function of the gradient of the variable of interest (e.g., image brightness). The algorithms are based on comparing the local values of different diffusion processes running in parallel on the same image.
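A minimal explicit-scheme sketch of such a nonlinear diffusion, with conduction coefficient c = exp(-(|∇I|/κ)²) so smoothing shuts off across strong brightness edges. The value of κ, the step size, the iteration count, and the periodic boundary handling are all simplifications for illustration.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.2, lam=0.2):
    # Explicit scheme: each step adds conduction-weighted neighbor
    # differences; c falls toward 0 where |grad I| >> kappa, so
    # smoothing stays inside region boundaries.
    u = img.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u   # neighbor differences
        ds = np.roll(u, 1, axis=0) - u    # (periodic boundaries via roll,
        de = np.roll(u, -1, axis=1) - u   #  adequate for a sketch)
        dw = np.roll(u, 1, axis=1) - u
        flux = sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
        u += lam * flux                   # lam <= 0.25 keeps the scheme stable
    return u

# Noisy two-region test image: a bright square on a dark background
rng = np.random.default_rng(2)
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
noisy = img + 0.1 * rng.standard_normal(img.shape)
out = anisotropic_diffusion(noisy)
```

After a few iterations the noise inside each region is smoothed away while the square's boundary contrast survives, which is exactly the property that Gaussian scale space lacks.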
A simulation of ocean acoustic tomography using matched field processing
F.M. Strohm, J.H. Miller, R. Bourke. doi:10.1109/MDSP.1989.97043

Summary form only given, as follows. The authors have applied matched field processing to the tomography problem. Matched field processing conventionally assumes a known sound speed field and attempts to determine the range and depth of an acoustic source. However, the authors assume that they know the source location but not the sound speed field. Because of the complexity of this problem, they have assumed an eddy in an ocean with known sound speed profiles, and they estimate the eddy's boundaries. They assume one acoustic CW source and a 20-element vertical array (both in the deep sound channel) situated 100 km apart. Three estimators were used: Bucker, Bartlett, and maximum likelihood. The sensitivity of the estimators' performance to signal-to-noise ratio and noise spatial correlation structure is addressed. The scheme is shown to work when the acoustic replica fields are generated using the simple ocean model (three known sound speed profiles) and the actual acoustic field is simulated using a more realistic eddy.
A computer program for the Fourier transform of data with crystal symmetry
J. Cooley. doi:10.1109/MDSP.1989.97058

Summary form only given. Auslander has used algebraic methods to give a mathematical structure to a study of the symmetries of crystals. An approach to implementing Auslander's methods that has several important features is described. Only nonredundant data need be stored; thus, for the case of threefold symmetry, only slightly more than 1/3 of the full set of data need be stored. The problem is broken down into small modules that employ efficient Winograd-type fast Fourier transform algorithms. Most of the calculation is done by calling subroutines that compute smaller conventional 3-D Fourier transforms, permitting the use of efficient available Fourier transform subroutines for the time-consuming parts of the calculations. Indexing and permutations are done on small arrays, thereby reducing data transfer time and storage of index vectors. The method can be implemented on a vector processor. A prototype program was written and tested for a case of 120-degree rotational symmetry in a 60 by 60 by 60 cube; it was 5.2 times as fast as a conventional 3-D program for the same data.
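The paper exploits crystallographic rotational symmetry; the storage and compute saving is easiest to see with the analogous translation symmetry, used below as a stand-in. For a length-60 signal that repeats every 20 samples, the DFT is nonzero only at every third bin, and those bins equal 3 times the length-20 FFT of a single period, so only 1/3 of the data need be stored and transformed.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 60
period = rng.standard_normal(N // 3)
x = np.tile(period, 3)   # threefold translation symmetry: x[n] = x[n + N/3]

# Full-length DFT vs. the small transform of one stored period
X_full = np.fft.fft(x)
X_small = 3 * np.fft.fft(period)   # equals X_full at bins 0, 3, 6, ...
```

Rotational crystal symmetries decompose less trivially (hence Auslander's algebraic machinery and the Winograd modules), but the payoff is the same: the big transform is rebuilt from small conventional FFTs of nonredundant data.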
On the choice of filter orders in subband coding
A. Fernandez, R. Ansari. doi:10.1109/MDSP.1989.97139

Summary form only given. The use of subband image coding for intrafield coding (compression) of high-definition television (HDTV) signals has been studied. Filter banks with low-order filters that have simple coefficients are attractive for these applications because of their ease of implementation. The results of a limited study evaluating the impact of the choice of filter orders in HDTV coding are discussed. The study shows that the interplay between the filters and the quantization, whether in the sample or in the transform domain, is critical in determining the quality of the reconstructed picture. Subband signals generated by a four-band decomposition using exact-reconstruction infinite-impulse-response (IIR) filter banks were considered, and the quality of reconstruction with approximately linear-phase IIR filters of different orders was compared. The work showed that for signals with large high-frequency content, low-order filters in the filter bank introduce less noticeable degradation than higher-order filter banks when the same quantization parameters are used.
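The filter-bank/quantizer interplay can be illustrated with the simplest exact-reconstruction bank, a two-band Haar split (a toy stand-in for the approximately linear-phase IIR banks used in the study): reconstruction is perfect without quantization, and coarsely quantizing the highband injects a bounded, controlled error.

```python
import numpy as np

def analysis(x):
    # Haar two-band split: decimated lowpass average and highpass difference
    pairs = x.reshape(-1, 2)
    lo = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    hi = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return lo, hi

def synthesis(lo, hi):
    # Exact inverse of the split above
    out = np.empty(2 * lo.size)
    out[0::2] = (lo + hi) / np.sqrt(2)
    out[1::2] = (lo - hi) / np.sqrt(2)
    return out

x = np.sin(np.linspace(0, 6 * np.pi, 64))
lo, hi = analysis(x)
q_hi = np.round(hi * 4) / 4       # coarse quantization of the highband only
x_rec = synthesis(lo, q_hi)
```

With higher-order filters the quantization error in one band additionally smears across band edges on reconstruction, which is the interaction the study measures.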
Matched field processing in noisy and imperfectly known ocean environments
A. Baggeroer, H. Schmidt, P. Velardo, W. Kuperman. doi:10.1109/MDSP.1989.97038

Summary form only given. The generalization of conventional array signal processing to multidimensional matched field processing for source localization in the ocean environment is complicated by several factors associated with the nonideal waveguide nature of the acoustic propagation and the presence of natural ambient noise. A number of nonlinear beamformers have been developed that combine the sidelobe suppression of the maximum-likelihood method (MLM) with the lower resolution of the linear beamformer, in essence widening the mainlobe at the source position without totally sacrificing sidelobe suppression; the result is a more tolerant and robust processing algorithm. A full-wave-field propagation model has been used to simulate realistic ambient noise and signal fields, and the source localization performance of these beamforming algorithms has been analyzed, with particular focus on the effects of noise correlation and environmental mismatch.
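The Bartlett (linear) and MLM beamformers that these nonlinear hybrids interpolate between can be sketched on a plane-wave line-array stand-in; matched field processing replaces the steering vectors below with full-wave replica fields. Array geometry, SNR, and the angle grid are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sens, n_snap = 20, 200
d = 0.5                          # element spacing in wavelengths
true_angle = np.deg2rad(20.0)

def steering(theta):
    # Unit-norm plane-wave replica vector for a uniform line array
    return np.exp(2j * np.pi * d * np.arange(n_sens) * np.sin(theta)) / np.sqrt(n_sens)

# Snapshots: one plane-wave source plus spatially white noise
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.5 * (rng.standard_normal((n_sens, n_snap))
               + 1j * rng.standard_normal((n_sens, n_snap)))
X = np.outer(steering(true_angle) * np.sqrt(n_sens), s) + noise
R = X @ X.conj().T / n_snap      # sample cross-spectral matrix

angles = np.deg2rad(np.linspace(-60, 60, 241))
Rinv = np.linalg.inv(R)
bartlett = np.array([np.real(steering(a).conj() @ R @ steering(a)) for a in angles])
mlm = np.array([1.0 / np.real(steering(a).conj() @ Rinv @ steering(a)) for a in angles])

est_b = float(np.rad2deg(angles[np.argmax(bartlett)]))
est_m = float(np.rad2deg(angles[np.argmax(mlm)]))
```

Bartlett's wide mainlobe tolerates replica mismatch at the cost of high sidelobes; MLM sharpens the peak and suppresses sidelobes but degrades quickly under mismatch, which is the trade-off the hybrid processors are built to manage.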