Pub Date: 2010-01-01 | DOI: 10.1007/978-94-007-0510-4_14
G. Paterson, C. Earl
Line and Plane to Solid: Analyzing Their Use in Design Practice through Shape Rules. pp. 251-267.
Pub Date: 2010-01-01 | DOI: 10.1109/DCC.2010.12
Francesc Aulí Llinàs
Local Average-Based Model of Probabilities for JPEG2000 Bitplane Coder. Proceedings. Data Compression Conference, pp. 59-68.
Pub Date: 2010-01-01 | DOI: 10.1007/978-94-007-0510-4_7
S. Hanna
Design Agents and the Need for High-Dimensional Perception. pp. 115-134.
Pub Date: 2010-01-01 | DOI: 10.1007/978-94-007-0510-4_22
T. Alink, C. Eckert, A. Ruckpaul, A. Albers
Different Function Breakdowns for One Existing Product: Experimental Results. pp. 405-424.
Pub Date: 2010-01-01 | DOI: 10.1007/978-94-007-0510-4_24
M. Dabbeeru, A. Mukerjee
Learning Concepts and Language for a Baby Designer. pp. 445-463.
Pub Date: 2005-08-02 | DOI: 10.1109/ISIT.2005.1523654
Jan Østergaard, R. Heusdens, J. Jensen
n-channel symmetric multiple-description lattice vector quantization. Proceedings. Data Compression Conference, pp. 378-387.
Abstract: We derive analytical expressions for the central and side quantizers in an n-channel symmetric multiple-description lattice vector quantizer which, under high-resolution assumptions, minimize the expected distortion subject to entropy constraints on the side descriptions for given packet-loss probabilities. The performance of the central quantizer is lattice dependent, whereas the performance of the side quantizers is lattice independent. In fact, the normalized second moments of the side quantizers are given by that of an L-dimensional sphere. Furthermore, our analytical results reveal a simple way to determine the optimum number of descriptions. We verify the theoretical results with numerical experiments and show that, with a packet-loss probability of 5%, a gain of 9.1 dB in MSE over state-of-the-art two-description systems can be achieved when quantizing a two-dimensional unit-variance Gaussian source using a total bit budget of 15 bits/dimension and three descriptions. With 20% packet loss, a similar experiment reveals an MSE reduction of 10.6 dB when using four descriptions.
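The paper's lattice construction and index assignment are beyond a short sketch, but the expected-distortion bookkeeping over packet-loss patterns can be illustrated with a toy two-description scalar scheme (our own stand-in, not the paper's method): two uniform quantizers offset by half a cell, an averaging central decoder when both descriptions arrive, a side decoder when one arrives, and the source mean when both are lost.

```python
import numpy as np

def md_scalar_expected_mse(step, p_loss, n=200_000, seed=0):
    """Monte-Carlo expected MSE of a toy two-description scalar scheme.
    Each description is a uniform quantizer; the two lattices are offset
    by half a cell, and each packet is lost independently with p_loss."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)                            # unit-variance Gaussian source
    r0 = step * np.round(x / step)                        # description 0
    r1 = step * np.round(x / step - 0.5) + 0.5 * step     # description 1 (offset lattice)
    lost0 = rng.random(n) < p_loss
    lost1 = rng.random(n) < p_loss
    xhat = np.where(~lost0 & ~lost1, 0.5 * (r0 + r1),     # central decoder: average
           np.where(~lost0, r0,                           # side decoder 0
           np.where(~lost1, r1, 0.0)))                    # side decoder 1 / source mean
    return np.mean((x - xhat) ** 2)
```

For step Δ, the averaging central decoder achieves MSE Δ²/48 while each side decoder achieves Δ²/12, so the expected MSE interpolates between these and the source variance as the loss probability grows.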
Pub Date: 2005-03-29 | DOI: 10.1109/DCC.2005.92
F. Hekland, G. Øien, T. Ramstad
Using 2:1 Shannon mapping for joint source-channel coding. Proceedings. Data Compression Conference, pp. 223-232.
Abstract: The Archimedes' spiral can be used as a 2:1 bandwidth-reducing mapping in a joint source-channel coding (JSCC) system. The combined point of two iid Gaussian sources (the source space) is mapped, or approximated, onto a double Archimedes' spiral (the codebook), and the squared angle from the origin to the mapped point is transmitted as an analogue channel symbol (the channel space), e.g. PAM. It is shown that the total distortion of this JSCC system is minimised when the distortion contributions from the approximation noise and the channel noise are equal. The given system produces a channel input distribution close to a Laplace probability density function (pdf) instead of the optimal Gaussian pdf. The loss when using this mismatched pdf is shown to be approximately equal to the relative entropy of the two pdfs.
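A minimal sketch of the encoder/decoder pair the abstract describes, under our own assumptions: the double spiral has radial arm spacing delta, the nearest spiral point is found by brute-force grid search over the angle, and the transmitted symbol is the signed squared angle (function names and parameters are illustrative, not from the paper).

```python
import numpy as np

def spiral_point(t, delta):
    """Point on a double Archimedean spiral with radial arm spacing delta:
    one arm for t >= 0, the point-mirrored arm for t < 0."""
    r = (delta / np.pi) * abs(t)
    p = r * np.array([np.cos(abs(t)), np.sin(abs(t))])
    return p if t >= 0 else -p

def encode(x, delta, theta_max=30.0, n_grid=20_000):
    """Approximate x = (x1, x2) by the nearest spiral point (grid search
    over the angle), then 'stretch': the channel symbol is sign(t)*t**2."""
    ts = np.linspace(-theta_max, theta_max, n_grid)
    pts = np.array([spiral_point(t, delta) for t in ts])
    t = ts[np.argmin(np.sum((pts - np.asarray(x)) ** 2, axis=1))]
    return np.sign(t) * t ** 2

def decode(s, delta):
    """Invert the squared-angle stretch and map back onto the spiral."""
    t = np.sign(s) * np.sqrt(abs(s))
    return spiral_point(t, delta)
```

On a noiseless channel the reconstruction error is pure approximation noise, bounded by roughly half the arm spacing; channel noise perturbs the symbol s and moves the decoded point along the spiral.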
Pub Date: 2005-03-29 | DOI: 10.1109/DCC.2005.33
A. Kashyap, L. A. Lastras-Montaño, Cathy H. Xia, Zhen Liu
Distributed source coding in dense sensor networks. Proceedings. Data Compression Conference, pp. 13-22.
Abstract: We study the problem of the reconstruction of a Gaussian field defined on [0,1] using N sensors deployed at regular intervals. The goal is to quantify the total data rate required for the reconstruction of the field with a given mean-square distortion. We consider a class of two-stage mechanisms which (a) send information to allow the reconstruction of the sensors' samples to sufficient accuracy, and then (b) use these reconstructions to estimate the entire field. To implement the first stage, the heavy correlation between the sensor samples suggests the use of distributed coding schemes to reduce the total rate. Our main contribution is to demonstrate the existence of a distributed block coding scheme that achieves, for a given fidelity criterion for the sensors' measurements, a total information rate that is within a constant, independent of N, of the minimum information rate required by an encoder that has access to all the sensor measurements simultaneously. The constant in general depends on the autocorrelation function of the field and the desired distortion criterion for the sensor samples.
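The two-stage pipeline in the abstract can be sketched end to end on a toy example; this is our own illustration of stages (a) and (b), not the distributed coding scheme itself, and the squared-exponential autocorrelation is an assumed choice.

```python
import numpy as np

def gp_sample(ts, corr_len, rng):
    """One realization of a zero-mean Gaussian field on the grid ts, with a
    squared-exponential autocorrelation (illustrative choice)."""
    K = np.exp(-(ts[:, None] - ts[None, :]) ** 2 / (2 * corr_len ** 2))
    K += 1e-6 * np.eye(len(ts))  # jitter for numerical stability
    return np.linalg.cholesky(K) @ rng.standard_normal(len(ts))

def two_stage_mse(n_sensors, step, corr_len=0.2, n_grid=512, seed=0):
    """Stage (a): uniform quantization of the sensor samples (cell width = step).
    Stage (b): estimate the field by linear interpolation between the
    reconstructed samples. Returns mean-square error over a fine grid."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, n_grid)
    field = gp_sample(grid, corr_len, rng)
    sensors = np.linspace(0.0, 1.0, n_sensors)          # regular deployment
    samples = np.interp(sensors, grid, field)
    quantized = step * np.round(samples / step)          # stage (a)
    estimate = np.interp(grid, sensors, quantized)       # stage (b)
    return np.mean((field - estimate) ** 2)
```

Densifying the sensors shrinks the interpolation error while the per-sample quantization error is set by the step, which mirrors the abstract's split between the fidelity criterion for the samples and the distortion of the field estimate.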
Pub Date: 2005-03-29 | DOI: 10.1109/DCC.2005.7
R. Gray
A Lagrangian formulation of fixed-rate quantization. Proceedings. Data Compression Conference, pp. 261-269.
Abstract: A Lagrangian formulation of fixed-rate vector quantization is presented. The formulation provides an alternative version of the classic high-rate quantization approximations for fixed-rate codes of Zador (1966) and Bucklew and Wise (1982), which parallels the Lagrangian results for variable-rate codes, and it leads to a variation of the classic Lloyd (1982) algorithm for quantizer design. The approach also leads to a natural Lagrangian formulation combining both common rate constraints, alphabet size and entropy, effectively providing a Lagrangian formulation of memory- and entropy-constrained vector quantization.
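To make the Lagrangian viewpoint concrete, here is a sketch of the classical entropy-constrained Lloyd iteration for a scalar codebook, minimizing J = D + λH; this is the standard textbook algorithm the abstract's variation relates to, not the paper's own construction, and all names are ours.

```python
import numpy as np

def ecvq_lloyd(x, n_codewords, lam, iters=50, seed=0):
    """Entropy-constrained Lloyd iteration on a scalar training set x,
    minimizing the Lagrangian J = D + lam * H.  With lam = 0 this reduces
    to the ordinary fixed-rate (alphabet-size constrained) Lloyd algorithm."""
    rng = np.random.default_rng(seed)
    c = np.sort(rng.choice(x, n_codewords, replace=False))  # initial codebook
    p = np.full(n_codewords, 1.0 / n_codewords)             # codeword probabilities
    for _ in range(iters):
        # Lagrangian nearest-neighbour rule: squared error plus rate penalty
        cost = (x[:, None] - c[None, :]) ** 2 - lam * np.log2(p[None, :])
        assign = np.argmin(cost, axis=1)
        for i in range(n_codewords):
            sel = x[assign == i]
            if len(sel):
                c[i] = sel.mean()                # centroid update
                p[i] = len(sel) / len(x)         # probability update
        p = np.maximum(p, 1e-12)                 # avoid log(0) for dead cells
    # final assignment, distortion D and entropy H
    cost = (x[:, None] - c[None, :]) ** 2 - lam * np.log2(p[None, :])
    assign = np.argmin(cost, axis=1)
    d = np.mean((x - c[assign]) ** 2)
    h = -np.sum(p * np.log2(p))
    return c, d, h
```

Raising λ trades rate for distortion: cells with low probability become expensive and the effective entropy of the code drops.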