Mismatched encoding in rate distortion theory
A. Lapidoth
Pub Date: 1994-10-27, DOI: 10.1109/WITS.1994.513896
Summary form only given. A length-n block code C of size 2^{nR} over a finite alphabet X̂_0 is used to encode a memoryless source over a finite alphabet X. A length-n source sequence x is described by the index i of the codeword x̂_0(i) that is nearest to x according to the single-letter distortion function d_0(x, x̂_0). Based on the description i and knowledge of the codebook C, we wish to reconstruct the source sequence so as to minimize the average distortion defined by the distortion function d_1(x, x̂_1), where d_1(x, x̂_1) is in general different from d_0(x, x̂_0). In fact, the reconstruction alphabets X̂_0 and X̂_1 could be different. We study the minimum, over all codebooks C, of the average distortion between the reconstructed sequence x̂_1(i) and the source sequence x as the blocklength n tends to infinity. This limit is a function of the code rate R, the source's probability law, and the two distortion measures d_0(x, x̂_0) and d_1(x, x̂_1). This problem is the rate-distortion dual of the problem of determining the capacity of a memoryless channel under a possibly suboptimal decoding rule. The performance of a random i.i.d. codebook is found, and it is shown that the performance of the "average" codebook is in general suboptimal.
Large deviations and the rate distortion theorem for Gibbs distributions
Y. Amit
Pub Date: 1994-10-27, DOI: 10.1109/WITS.1994.513877
Large deviation theory is used to obtain the rate distortion theorem for Gibbs distributions together with exponentially small error probabilities. Large deviation theorems provide asymptotically exponential upper and lower bounds on the probability that the empirical distribution under a Gibbs distribution deviates in variational norm from the marginal. In particular, these hold if the Gibbs distribution is a product measure. Using these theorems, many of the standard asymptotic results of errorless coding theory can be neatly formulated and extended to Gibbs random fields. We present the application of these theorems to coding with distortion.
Local polynomial estimation of regression functions for mixing processes
E. Masry, Jianqing Fan
Pub Date: 1994-10-27, DOI: 10.1111/1467-9469.00056
Local polynomial fitting for the estimation of a general regression function and its derivatives for ρ-mixing and strongly mixing processes is considered. Joint asymptotic normality of the estimates of the regression function and its derivatives is established.
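The basic local polynomial machinery can be sketched in a few lines. The snippet below fits a local-linear (degree-1) model by kernel-weighted least squares at a point x0; the intercept estimates the regression function and the slope its first derivative. Kernel, bandwidth, and the sine test function are illustrative choices, not from the paper:

```python
# Local-linear regression at a single point x0 via weighted least squares.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 400)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(400)   # m(x) = sin(pi x)

def local_linear(x0, x, y, h=0.2):
    """Fit y ~ b0 + b1*(x - x0) with Gaussian kernel weights of bandwidth h.
    Returns (b0, b1): estimates of m(x0) and m'(x0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0], beta[1]

m0, m1 = local_linear(0.0, x, y)    # true values: m(0) = 0, m'(0) = pi
```

The same design matrix extended with higher powers of (x - x0) gives estimates of higher derivatives, which is the setting of the joint asymptotic normality result.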
MMSE parameter estimation of exponentially damped sinusoids
H.-T. Li, P. Djurić
Pub Date: 1994-10-27, DOI: 10.1109/WITS.1994.513890
An efficient iterative MMSE algorithm that estimates the parameters of exponentially damped sinusoids embedded in white Gaussian noise is proposed.
Variable-rate, lossy, tree-structured codes and digital radiography
R. Olshen
Pub Date: 1994-10-27, DOI: 10.1109/WITS.1994.513859
The author surveys binary tree-structured methods for clustering as they apply to predictive, pruned, tree-structured vector quantization (predictive PTSVQ). Much of the material concerns applications of PTSVQ to the lossy coding of digital medical images, especially CT and MR chest scans. There is a brief introduction to the asymptotic properties of the algorithms and to an attempt to understand variability and covariability of amino acids in the V3 loop region of HIV.
Maximum entropy and robust prediction on a simplex
H. Poor
Pub Date: 1994-10-27, DOI: 10.1109/WITS.1994.513848
The related problems of (finite-length) robust prediction and maximizing spectral entropy over a simplex of covariance matrices are considered. General properties of iterative solutions of these problems are developed, and monotone convergence proofs are presented for two algorithms that provide such solutions. The analogous problems for simplexes of spectral densities are also considered.
Wavelet networks for functional learning
J. Zhang, G. Walter
Pub Date: 1994-10-27, DOI: 10.1109/WITS.1994.513931
A wavelet-based neural network is described. The network is similar to the radial basis function (RBF) network, except that the RBFs are replaced by orthonormal scaling functions. It has been shown that the wavelet network has universal and L^2 approximation properties and is a consistent function estimator. Convergence rates, which avoid the "curse of dimensionality," are obtained for certain function classes. The network also compared favorably to the MLP and RBF networks in experiments.
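The "RBF with scaling-function activations" idea can be illustrated with the simplest orthonormal scaling family, the Haar functions phi_{j,k}(x) = 2^{j/2} phi(2^j x - k). This sketch fits the output-layer weights by least squares; the Haar choice, resolution level, and target function are assumptions for illustration, not the paper's configuration:

```python
# Minimal "scaling-function network": Haar basis at one resolution level,
# output weights fit by linear least squares.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 1000)
y = np.cos(2 * np.pi * x) + 0.1 * rng.standard_normal(1000)

j = 4                                    # resolution level: 2^j hidden units

def design(x, j):
    # phi_{j,k}(x) = 2^{j/2} * 1[k <= 2^j x < k+1]: orthonormal Haar family
    k = np.arange(2 ** j)
    t = x[:, None] * 2 ** j
    return (2 ** (j / 2)) * ((t >= k) & (t < k + 1))

Phi = design(x, j)                                   # "hidden layer" outputs
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # output-layer weights
yhat = design(np.array([0.0, 0.5]), j) @ c           # predict at x = 0, 0.5
```

Smoother orthonormal scaling functions (e.g., from the Daubechies or Meyer families) would replace the indicator in `design` without changing the structure of the fit.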
Markov chains for modeling and analyzing digital data signals
R. Eier
Pub Date: 1994-10-27, DOI: 10.1109/WITS.1994.513913
Digital data transmission signals may be regarded as a specific stochastic process controlled by a Markov chain. In presenting and evaluating the power density spectra (PDS) of such processes, one of our major concerns is the computational effort. By a special grouping of the employed signal elements and a corresponding partitioning of the controlling transition matrix, the formula for the desired PDS can be simplified to a Euclidean vector norm expression. Several PDS graphs illustrate the relevance of such an analysis to the evaluation and design of real transmission systems.
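A Markov-chain-controlled data signal and its PDS can be illustrated by simulation. The sketch below (an averaged periodogram, not the paper's closed-form vector-norm expression; transition probabilities are illustrative) shows how a persistent two-state chain concentrates signal power at low frequencies:

```python
# Simulate a binary signal driven by a two-state Markov chain and estimate
# its power density spectrum with an averaged periodogram.
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.9, 0.1],            # persistent chain: long runs of symbols
              [0.1, 0.9]])
levels = np.array([-1.0, 1.0])       # transmitted amplitude per state

def simulate(n):
    s = np.empty(n, dtype=int)
    s[0] = 0
    for t in range(1, n):
        s[t] = rng.choice(2, p=P[s[t - 1]])   # next state from current row
    return levels[s]

segs = [simulate(256) for _ in range(50)]
psd = np.mean([np.abs(np.fft.rfft(seg)) ** 2 / len(seg) for seg in segs],
              axis=0)                # averaged periodogram estimate of the PDS

low, high = psd[1:10].mean(), psd[-10:].mean()   # long runs => low > high
```

Making the chain anti-persistent (swap the 0.9/0.1 entries) shifts the spectral peak to high frequencies, which is the kind of behavior the PDS graphs in the paper are used to read off.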
SNR estimation and blind equalization (deconvolution) using the kurtosis
R. Matzner, K. Letsch
Pub Date: 1994-10-27, DOI: 10.1109/WITS.1994.513897
The mathematical problem underlying both SNR estimation and blind equalization is that of a sum of random processes. It can be shown that it suffices to describe the random processes, as well as their sum, by a shape factor of the p.d.f., the kurtosis, which involves the second- and fourth-order moments.
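The kurtosis-based SNR idea can be made concrete for one case: a BPSK signal (kurtosis 1) in Gaussian noise (kurtosis 3). For independent zero-mean components, the kurtosis of the sum is a function of the signal power fraction alone, so measuring it pins down the SNR. The inversion formula below follows from those two kurtosis values and is an assumption for this example, not taken from the paper:

```python
# Kurtosis-based SNR estimation for BPSK in white Gaussian noise.
# With signal fraction rho = Ps/(Ps+Pn), the received kurtosis is
# kurt = 3 - 2*rho**2, which is inverted to recover rho and hence the SNR.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
snr_true = 4.0                                  # linear SNR (illustrative)
s = rng.choice([-1.0, 1.0], size=n)             # unit-power BPSK signal
noise = rng.standard_normal(n) / np.sqrt(snr_true)
r = s + noise                                   # observed sum of processes

kurt = np.mean(r ** 4) / np.mean(r ** 2) ** 2   # shape factor of the p.d.f.
rho = np.sqrt((3.0 - kurt) / 2.0)               # signal power fraction
snr_est = rho / (1.0 - rho)                     # blind estimate, no pilot
```

Other signal constellations change only the signal-kurtosis constant in the inversion; the blind character of the estimate (no training sequence needed) is what it shares with kurtosis-based blind equalization.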
Tree-structured clustered probability models for texture
Rosalind W. Picard, Ashok Popat
Pub Date: 1994-10-27, DOI: 10.1109/WITS.1994.513861
Summary form only given, as follows. A cluster-based probability model has been found to perform extremely well at capturing the complex structures in natural textures (e.g., better than Markov random field models). Its success is mainly due to its ability to handle high dimensionality, via large conditioning neighborhoods over multiple scales, and to generalize salient characteristics from limited training data. Imposing a tree structure on this model provides not only the benefit of reduced computational complexity but also a new one: the trees are mutable, allowing us to mix and match models for different sources. This flexibility is of increasing importance in emerging applications such as database retrieval for sound, image, and video.