Information Transmission over the Postal Channel with and without Feedback
Pub Date: 2006-07-09 | DOI: 10.1109/ISIT.2006.261562
Wenyi Zhang, S. Kotagiri, J. N. Laneman
The postal channel models a postal system in which letters, each consisting of a number of characters, are sometimes lost. We study the postal channel with variable-length letters and variable-length coding over letters, both with and without letter-by-letter feedback. Without allowing letter lengths to encode information, we examine one feedback strategy consisting of automatic repeat-request (ARQ) with exponentially increasing letter lengths. For this strategy we investigate an alternative notion of information rate per character, based upon the total, random number of characters required to convey the messages rather than its expectation. This information rate exhibits a phase transition in its convergence as the number of messages becomes large: if the letter lengths increase by a factor less than the inverse of the probability that a letter is lost, it converges to the channel capacity; otherwise, it converges to a number strictly larger than channel capacity. More generally, when we allow both the characters and the length of a letter to convey information, we compute the corresponding channel capacity with and without feedback, and find that it is twice the channel capacity of the original postal channel without allowing letter lengths to encode information.
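As a rough, illustrative companion to the ARQ strategy (a toy model with assumed parameters ell0, a, p, not the paper's exact setup): with letter lengths growing as ℓ0·a^k and each letter lost independently with probability p, the total number of characters spent until one letter gets through has finite expectation exactly when a·p < 1, i.e., when the growth factor stays below 1/p, the same threshold that governs the phase transition described above. A minimal Monte Carlo sketch in Python:

```python
import random

def chars_until_success(p, ell0=100, a=1.5, rng=random):
    """Characters spent by ARQ with letter lengths ell0 * a**k, each letter
    lost independently with probability p, until one letter is delivered."""
    total, k = 0, 0
    while True:
        total += int(ell0 * a ** k)
        if rng.random() > p:        # letter delivered
            return total
        k += 1                      # letter lost: retransmit with a longer letter

def mean_chars(p, a, trials=20000):
    return sum(chars_until_success(p, a=a) for _ in range(trials)) / trials

p = 0.4                             # loss probability; the threshold growth factor is 1/p = 2.5
for a in (1.5, 2.0, 3.0):
    note = "  (a*p >= 1: the true mean is infinite, so the estimate never stabilizes)" if a * p >= 1 else ""
    print(f"growth factor a = {a}: empirical mean characters ~ {mean_chars(p, a):.0f}{note}")
```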
{"title":"Information Transmission over the Postal Channel with and without Feedback","authors":"Wenyi Zhang, S. Kotagiri, J. N. Laneman","doi":"10.1109/ISIT.2006.261562","DOIUrl":"https://doi.org/10.1109/ISIT.2006.261562","url":null,"abstract":"The postal channel models a postal system in which letters, each consisting of a number of characters, are sometimes lost. We study the postal channel with variable-length letters and variable-length coding over letters, both with and without letter-by-letter feedback. Without allowing letter lengths to encode information, we examine one feedback strategy consisting of automatic repeat-request (ARQ) with exponentially increasing letter lengths. For this strategy we investigate an alternative notion of information rate per character, based upon the total, random number of characters required to convey the messages instead of its expectation. This information rate exhibits a phase transition in its convergence as the number of messages becomes large: if the letter lengths increase by a factor less than the inverse of the probability that a letter is lost, it converges to the channel capacity; otherwise, it converges to a number strictly larger than channel capacity. More generally, when we allow both the characters and the length of a letter to convey information, we compute the corresponding channel capacity with and without feedback, and find that it is twice the channel capacity of the original postal channel without allowing letter lengths to encode information","PeriodicalId":115298,"journal":{"name":"2006 IEEE International Symposium on Information Theory","volume":"55 10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122865110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AG Goppa Codes from Maximal Curves over determined Finite Fields of characteristic 2
Pub Date: 2006-07-09 | DOI: 10.1109/ISIT.2006.261891
R. McEliece, M. C. Rodríguez-Palánquex
In AG coding theory it is very important to work with curves having many rational points in order to obtain good codes. In this paper, starting from curves defined over F_2 with genus g ≥ 1, we give sufficient conditions for obtaining maximal curves over F_{2^{2g}}.
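For background only (standard facts, not results of the paper): a genus-g curve C over a finite field is called maximal when its number of rational points attains the Hasse-Weil upper bound; over a field of square order, as in the F_{2^{2g}} case above with q = 2^g, the condition reads:

```latex
% Hasse-Weil bound and the (standard) maximality condition over a field of square order
\#C(\mathbb{F}_{q^{2}}) \;\le\; q^{2} + 1 + 2gq,
\qquad
C \text{ maximal over } \mathbb{F}_{q^{2}}
\;\Longleftrightarrow\;
\#C(\mathbb{F}_{q^{2}}) = q^{2} + 1 + 2gq .
```

For q = 2^g this bound specializes to 2^{2g} + 1 + g·2^{g+1} rational points.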
{"title":"AG Goppa Codes from Maximal Curves over determined Finite Fields of characteristic 2","authors":"R. McEliece, M. C. Rodríguez-Palánquex","doi":"10.1109/ISIT.2006.261891","DOIUrl":"https://doi.org/10.1109/ISIT.2006.261891","url":null,"abstract":"In AG coding theory is very important to work with curves with many rational points, to get good codes. In this paper, from curves defined over F2 with genus g ges 1 we give sufficient conditions for getting maximal curves over F2E2g","PeriodicalId":115298,"journal":{"name":"2006 IEEE International Symposium on Information Theory","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114137796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generalized Multilevel Constructions for Reed-Muller Codes and Barnes-Wall lattices
Pub Date: 2006-07-09 | DOI: 10.1109/ISIT.2006.261788
A. Salomon, O. Amrani
Generalized multilevel constructions for binary Reed-Muller R(r,m) codes using projections onto GF(2^q) are presented. These constructions exploit component codes over GF(2), GF(4), ..., GF(2^q) that are based on shorter Reed-Muller codes, and set partitioning using partition chains of length-2^l codes. This is then used for deriving multilevel constructions for the Barnes-Wall Λ(r,m) family of lattices. Similarly, the latter construction involves component codes over GF(2), GF(4), ..., GF(2^q) and set partitioning based on partition chains of length-2^l lattices. These constructions of Reed-Muller codes and Barnes-Wall lattices are readily applicable for their efficient decoding.
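For readers who want a concrete handle on the binary codes being generalized, here is a minimal sketch of the classical |u|u+v| (Plotkin) recursion for Reed-Muller generator matrices; this is the standard binary construction only, not the GF(2^q) multilevel construction of the paper, and rm_generator is an illustrative helper name:

```python
import numpy as np
from math import comb

def rm_generator(r, m):
    """Generator matrix of the binary Reed-Muller code R(r, m) via the
    |u|u+v| recursion: R(r,m) = {(u, u+v) : u in R(r,m-1), v in R(r-1,m-1)}."""
    if r == 0:                                        # repetition code
        return np.ones((1, 2 ** m), dtype=int)
    if r == m:                                        # full space: even-weight code plus one odd-weight row
        odd = np.zeros((1, 2 ** m), dtype=int)
        odd[0, 0] = 1
        return np.vstack([rm_generator(m - 1, m), odd])
    G1 = rm_generator(r, m - 1)
    G2 = rm_generator(r - 1, m - 1)
    return np.vstack([np.hstack([G1, G1]),
                      np.hstack([np.zeros_like(G2), G2])])

G = rm_generator(2, 4)
print(G.shape)                                        # (11, 16): length 2^4, dimension sum_{i<=2} C(4,i) = 11
assert G.shape[0] == sum(comb(4, i) for i in range(3))
```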
{"title":"Generalized Multilevel Constructions for Reed-Muller Codes and Barnes-Wall lattices","authors":"A. Salomon, O. Amrani","doi":"10.1109/ISIT.2006.261788","DOIUrl":"https://doi.org/10.1109/ISIT.2006.261788","url":null,"abstract":"Generalized multilevel constructions for binary Reed-Muller R(r,m) codes using projections onto GF(2q) are presented. These constructions exploit component codes over GF(2),GF(4),...,GF(2q ) that are based on shorter Reed-Muller codes, and set partitioning using partition chains of length-2l codes. This is then used for deriving multilevel constructions for the Barnes-Wall A(r, m) family of lattices. Similarly, the latter construction involves component codes over GF(2),GF(4),...,GF(2q) and set partitioning based on partition chains of length-2l lattices. These constructions of Reed-Muller codes and Barnes-Wall lattices are readily applicable for their efficient decoding","PeriodicalId":115298,"journal":{"name":"2006 IEEE International Symposium on Information Theory","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114387661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sub-optimal Power Allocation for MIMO Channels
Pub Date: 2006-07-09 | DOI: 10.1109/ISIT.2006.261658
A. Grant, L. Hanlen
We consider t-input r-output Rayleigh fading channels with transmit-side correlation, where the receiver knows the channel realizations and the transmitter knows only the channel statistics. Using Lagrange duality, we develop an easily computable, tight upper bound on the loss in information rate due to the use of any given input covariance for this channel. This bound is applied to two simple transmission strategies. The first strategy is a reduced-rank uniform allocation, in which independent, equal-power Gaussian symbols are transmitted on the αt strongest eigenvectors of the transmit covariance matrix, where 0 ≤ α ≤ 1 is chosen to optimize the resulting information rate. The second strategy is water-filling on the eigenvalues of the transmit covariance matrix. The upper bound on the loss shows these strategies are nearly optimal for a wide range of signal-to-noise ratios and correlation scenarios.
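As a rough numerical companion (a toy with an assumed correlation model and illustrative parameters, not the paper's Lagrange-duality bound), the two strategies can be compared by Monte Carlo on a transmit-correlated Rayleigh channel H = H_w Σ_t^{1/2}, using the ergodic rate E[log2 det(I + H Q Hᴴ)] as the figure of merit:

```python
import numpy as np

rng = np.random.default_rng(0)
t, r, P, trials = 4, 4, 10.0, 2000                    # antennas, total power (SNR), Monte Carlo runs

rho = 0.7                                             # assumed exponential transmit-correlation model
Sigma_t = rho ** np.abs(np.subtract.outer(np.arange(t), np.arange(t)))
lam, U = np.linalg.eigh(Sigma_t)                      # eigenvalues ascending, eigenvectors in columns of U
L = np.linalg.cholesky(Sigma_t)

def ergodic_rate(Q):
    total = 0.0
    for _ in range(trials):
        Hw = (rng.standard_normal((r, t)) + 1j * rng.standard_normal((r, t))) / np.sqrt(2)
        H = Hw @ L.conj().T                           # transmit-side correlation only
        total += np.log2(np.linalg.det(np.eye(r) + H @ Q @ H.conj().T).real)
    return total / trials

def uniform_on_strongest(n):                          # equal power on the n strongest eigenvectors
    V = U[:, -n:]
    return (P / n) * V @ V.conj().T

def waterfill():                                      # water-filling on the correlation eigenvalues
    eigs = lam[::-1]                                  # strongest first
    for k in range(t, 0, -1):
        mu = (P + np.sum(1.0 / eigs[:k])) / k
        power = mu - 1.0 / eigs[:k]
        if np.all(power > 0):
            break
    V = U[:, ::-1][:, :k]
    return V @ np.diag(power) @ V.conj().T

for n in range(1, t + 1):
    print(f"uniform over {n} strongest modes: {ergodic_rate(uniform_on_strongest(n)):.2f} bit/channel use")
print(f"water-filling on eigenvalues:      {ergodic_rate(waterfill()):.2f} bit/channel use")
```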
{"title":"Sub-optimal Power Allocation for MIMO Channels","authors":"A. Grant, L. Hanlen","doi":"10.1109/ISIT.2006.261658","DOIUrl":"https://doi.org/10.1109/ISIT.2006.261658","url":null,"abstract":"We consider t-input r-output Rayleigh fading channels with transmit-sided correlation, where the receiver knows the channel realizations, and the transmitter only knows the channel statistics. Using Lagrange duality, we develop an easily computable, tight upper bound on the loss in information rate due to the use of any given input covariance for this channel. This bound is applied to two simple transmission strategies. The first strategy is a reduced-rank uniform allocation, in which independent, equal power Gaussian symbols are transmitted on the at strongest eigenvectors of the transmit covariance matrix, where 0 les alpha les 1 is chosen to optimize the resulting information rate. The second strategy is water-filling on the eigenvalues of the transmit covariance matrix. The upper bound on loss shows these strategies are nearly optimal for a wide range of signal to noise ratios and correlation scenarios","PeriodicalId":115298,"journal":{"name":"2006 IEEE International Symposium on Information Theory","volume":"260 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122088429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Method for Combining Algebraic Decoding and Iterative Processing
Pub Date: 2006-07-09 | DOI: 10.1109/ISIT.2006.261714
Xiangyu Tang, R. Koetter
We propose novel error correction coding schemes called generalized integrated interleaving and sparsely integrated interleaving codes. In the context of block interleaved codewords, generalized integrated interleaving allows nonuniform redundancy to be shared among all the interleaves. This allows the redundancy to be adjusted on-the-fly to better suit the error statistics of the channel or storage device. Sparsely integrated interleaving groups data nodes in a distributed storage system into subgroups. A data node can belong to several subgroups. Small errors are corrected locally within each subgroup. A localized algebraic iterative decoding algorithm is used to decode across subgroups to correct large errors that cannot be corrected within subgroups. Very little correction capability is sacrificed to achieve fast error correction and lower communication overhead. This scheme improves data access for all the data nodes and allows easy scaling of the distributed storage network.
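To make the local-versus-global decoding idea tangible, here is a deliberately simple erasure toy (an analogy over a small prime field with hypothetical parameters, not the algebraic integrated-interleaving construction of the paper): six data nodes in two subgroups, one local parity per subgroup plus one weighted global parity; a subgroup that loses one node repairs it locally, while a subgroup that loses two nodes additionally draws on the global parity.

```python
import random

P = 257                                              # toy prime field; real systems use GF(2^8) algebraic codes
random.seed(1)
data = [random.randrange(P) for _ in range(6)]       # 6 data nodes, one symbol each
groups = [[0, 1, 2], [3, 4, 5]]
local = [sum(data[i] for i in g) % P for g in groups]            # per-subgroup parity
glob = sum((i + 1) * data[i] for i in range(6)) % P              # single weighted global parity

def repair(received):
    """received: list with None at erased positions; returns the repaired list."""
    rec = list(received)
    for gi, g in enumerate(groups):                  # pass 1: one loss in a subgroup -> repair locally
        lost = [i for i in g if rec[i] is None]
        if len(lost) == 1:
            rec[lost[0]] = (local[gi] - sum(rec[i] for i in g if rec[i] is not None)) % P
    for gi, g in enumerate(groups):                  # pass 2: two losses in one subgroup -> use the global parity
        lost = [i for i in g if rec[i] is None]
        if len(lost) == 2 and all(rec[i] is not None for i in range(6) if i not in lost):
            i, j = lost
            s = (local[gi] - sum(rec[k] for k in g if rec[k] is not None)) % P            # x_i + x_j
            w = (glob - sum((k + 1) * rec[k] for k in range(6) if rec[k] is not None)) % P  # (i+1)x_i + (j+1)x_j
            xj = (w - (i + 1) * s) * pow(j - i, -1, P) % P
            rec[j], rec[i] = xj, (s - xj) % P
    return rec

received = [None if i in (3, 4) else x for i, x in enumerate(data)]   # two losses inside one subgroup
assert repair(received) == data
```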
{"title":"A Novel Method for Combining Algebraic Decoding and Iterative Processing","authors":"Xiangyu Tang, R. Koetter","doi":"10.1109/ISIT.2006.261714","DOIUrl":"https://doi.org/10.1109/ISIT.2006.261714","url":null,"abstract":"We propose novel error correction coding schemes called generalized integrated interleaving and sparsely integrated interleaving codes. In the context of block interleaved codewords, generalized integrated interleaving allows nonuniform redundancy to be shared among all the interleaves. This allows the redundancy to be adjusted on-the-fly to better suit the error statistics of the channel or storage device. Sparsely integrated interleaving groups data nodes in a distributed storage system into subgroups. A data node can belong to several subgroups. Small errors are corrected locally within each subgroup. A localized algebraic iterative decoding algorithm is used to decode across subgroups to correct large errors that cannot be corrected within subgroups. Very little correction capability is sacrificed to achieve fast error correction and lower communication overhead. This scheme improves data access for all the data nodes and allows easy scaling of the distributed storage network","PeriodicalId":115298,"journal":{"name":"2006 IEEE International Symposium on Information Theory","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122226841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed LT Codes
Pub Date: 2006-07-09 | DOI: 10.1109/ISIT.2006.261875
Srinath Puducheri-Sundaravaradhan, J. Kliewer, T. Fuja
This paper proposes a novel distributed encoding procedure to realize codes that resemble LT codes (rateless codes for erasure correction) in both structure and performance. For the case of two sources communicating with a single sink via a common relay, this technique separately encodes k/2 symbols of information onto slightly more than k code symbols at each source. These two codewords are then selectively XOR-ed at the relay, such that the result can be decoded by the sink to recover all k information symbols. It is shown that, for the case of four sources communicating to a single sink, the use of a similar distributed LT code leads to a 50% reduction in overhead at the sink, compared to the use of four individual LT codes.
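For readers new to LT codes, a minimal single-source encoder and peeling decoder is sketched below (standard LT coding with the ideal soliton degree distribution, purely for context; the paper's contribution is the distributed construction in which two sources encode halves of the data and the relay selectively XORs the streams):

```python
import random

def ideal_soliton(k):
    """rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d = 2..k (sums to 1)."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode(source, n_coded, rng):
    k = len(source)
    coded = []
    for d in rng.choices(range(1, k + 1), weights=ideal_soliton(k), k=n_coded):
        neighbors = set(rng.sample(range(k), d))
        value = 0
        for i in neighbors:
            value ^= source[i]                       # coded symbol = XOR of d random source symbols
        coded.append((neighbors, value))
    return coded

def lt_decode(coded, k):
    """Peeling decoder: repeatedly resolve degree-1 equations and substitute back."""
    eqs = [(set(nb), val) for nb, val in coded]
    recovered = [None] * k
    progress = True
    while progress:
        progress = False
        for nb, val in eqs:
            if len(nb) == 1:
                i = next(iter(nb))
                if recovered[i] is None:
                    recovered[i] = val
                    progress = True
        new_eqs = []
        for nb, val in eqs:                          # remove solved symbols from every equation
            for i in [i for i in nb if recovered[i] is not None]:
                nb = nb - {i}
                val ^= recovered[i]
            new_eqs.append((nb, val))
        eqs = new_eqs
    return recovered

rng = random.Random(0)
k = 100
source = [rng.randrange(256) for _ in range(k)]
coded = lt_encode(source, int(1.3 * k), rng)         # ~30% reception overhead for this toy
decoded = lt_decode(coded, k)
print(sum(d == s for d, s in zip(decoded, source)), "of", k, "source symbols recovered")
```

With the ideal soliton distribution the peeling decoder often stalls short of full recovery; practical LT codes use the robust soliton distribution, whose behavior the distributed construction aims to emulate after the relay's combining step, per the abstract.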
{"title":"Distributed LT Codes","authors":"Srinath Puducheri-Sundaravaradhan, J. Kliewer, T. Fuja","doi":"10.1109/ISIT.2006.261875","DOIUrl":"https://doi.org/10.1109/ISIT.2006.261875","url":null,"abstract":"This paper proposes a novel distributed encoding procedure to realize codes that resemble LT codes (rateless codes for erasure correction) in both structure and performance. For the case of two sources communicating with a single sink via a common relay, this technique separately encodes k/2 symbols of information onto slightly more than k code symbols at each source. These two codewords are then selectively XOR-ed at the relay, such that the result can be decoded by the sink to recover all k information symbols. It is shown that, for the case of four sources communicating to a single sink, the use of a similar distributed LT code leads to a 50% reduction in overhead at the sink, compared to the use of four individual LT codes","PeriodicalId":115298,"journal":{"name":"2006 IEEE International Symposium on Information Theory","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116837735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Throughput analysis for MIMO systems in the high SNR regime
Pub Date: 2006-07-09 | DOI: 10.1109/ISIT.2006.261822
N. Prasad, M. Varanasi
Outage capacity and throughput are the two key metrics through which the fundamental limits of delay-sensitive wireless MIMO links can be studied. In this paper, we show that these metrics are intimately related, and consequently, as in the case of outage capacity, the growth rate of throughput with SNR ρ is t log ρ for a general class of fading channels (with channel state information at the receiver (CSIR) and with or without CSI at the transmitter (CSIT)) whose channel matrix is of rank t with probability one. However, while asymptotically tight affine lower bounds of the form t log ρ + O(1) were recently derived for the outage capacity of such channels, in the sense that the limit as ρ → ∞ of the difference between the outage capacity and the lower bound is zero, such affine lower bounds are not possible in general for the throughput. Using the t log ρ + O(1) bounds on outage capacity, however, lower bounds on throughput are specified where the high-SNR limit of the ratio of the throughput and its lower bound is unity. These bounds reveal that the throughput-optimal outage probability approaches zero as ρ → ∞. An important exception is the scenario where both the transmitter and receiver have CSI under the long-term power constraint (LTPC), for which we obtain a lower bound of the form t log ρ + O(1) that is asymptotically tight (in the stronger sense); interestingly, this lower bound is identical to the asymptotic delay-limited capacity. The throughputs of MISO and SIMO fading channels are extensively analyzed, and it is shown that, asymptotically, an isotropic Gaussian input is throughput optimal, correlation is detrimental whereas an increase in the Rice factor is beneficial, and the throughput is Schur-concave in the correlation eigenvalues.
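As a concrete and deliberately simplified reading of the throughput metric (taking it, as one common convention, to be max_R R·Pr[mutual information ≥ R] for a fixed-rate scheme with an isotropic Gaussian input; this is an assumption for illustration, not the paper's definition), a Monte Carlo toy for an i.i.d. Rayleigh channel illustrates the t log ρ growth discussed above:

```python
import numpy as np

rng = np.random.default_rng(2)

def throughput(snr, nt=2, nr=2, trials=20000):
    """max over fixed rates R of R * Pr[mutual information >= R] for an
    i.i.d. Rayleigh nr x nt channel with isotropic Gaussian input (toy definition)."""
    caps = np.empty(trials)
    for n in range(trials):
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        caps[n] = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
    rates = np.linspace(0.1, caps.max(), 400)
    return max(R * np.mean(caps >= R) for R in rates)

# at high SNR, expect roughly nt * log2(10) ~ 6.6 extra bits per 10 dB for nt = 2
for snr_db in (5, 15, 25, 35):
    print(f"{snr_db:2d} dB: throughput ~ {throughput(10 ** (snr_db / 10)):.2f} bit/use")
```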
{"title":"Throughput analysis for MIMO systems in the high SNR regime","authors":"N. Prasad, M. Varanasi","doi":"10.1109/ISIT.2006.261822","DOIUrl":"https://doi.org/10.1109/ISIT.2006.261822","url":null,"abstract":"Outage capacity and throughput are the two key metrics through which the fundamental limits of delay-sensitive wireless MIMO links can be studied. In this paper, we show that these metrics are intimately related, and consequently, as in the case of outage capacity, the growth rate of throughput with SNR rho is t log rho for a general class of fading channels (with channel state information at the receiver (CSIR) and with or without CSI at the transmitter (CSIT)) whose channel matrix is of rank t with probability one. However, while asymptotically tight affine lower bounds of the form t log rho + 0(1) were recently derived for outage capacity for such channels, in the sense that the limit as rho rarr infin of the difference between the outage capacity and the lower bound is zero, such affine lower bounds are not possible in general for the throughput. Using the t log rho + O(1) bounds on outage capacity however, lower bounds on throughput are specified where the high SNR limit of the ratio of the throughput and its lower bound is unity. These bounds reveal that the throughput optimal outage probability approaches zero as rho rarr infin. An important exception is the scenario where both the transmitter and receiver have CSI under the long-term power constraint (LTPC), for which we obtain a lower bound of the form t log rho + O(1) which is asymptotically tight (in the stronger sense) and interestingly, this lower bound is identical to the asymptotic delay-limited capacity. The throughputs of MISO and SIMO fading channels are extensively analyzed and it is shown that asymptotically, isotropic Gaussian input is throughput optimal, correlation is detrimental whereas increase in the Rice factor is beneficial and that throughput is schur-concave in the correlation eigenvalues","PeriodicalId":115298,"journal":{"name":"2006 IEEE International Symposium on Information Theory","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129750815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Capacity-Based Approach for Designing Bit-Interleaved Coded GFSK with Noncoherent Detection
Pub Date: 2006-07-09 | DOI: 10.1109/ISIT.2006.262088
R. I. Seshadri, M. Valenti
This paper investigates a capacity-based approach to parameter optimization for energy- and bandwidth-efficient communication systems. In particular, non-coherently detected, bit-interleaved coded, M-ary Gaussian frequency shift keying (GFSK) is considered. Non-coherent detection is accomplished using a sequential, soft-output (SO), soft-decision differential phase detector (SDDPD). The capacity of the proposed system under modulation, channel and detector design constraints is calculated. For a wide range of spectral efficiencies, the most energy-efficient combination of GFSK parameters and code rates is identified using information theoretic bounds on reliable signaling. The information outage probabilities under modulation and detector design constraints are calculated for the block fading channel. Selected results reveal that the capacity-based approach also helps in identifying the combination of modulation parameters and code rates with the lowest outage probabilities in block fading.
{"title":"A Capacity-Based Approach for Designing Bit-Interleaved Coded GFSK with Noncoherent Detection","authors":"R. I. Seshadri, M. Valenti","doi":"10.1109/ISIT.2006.262088","DOIUrl":"https://doi.org/10.1109/ISIT.2006.262088","url":null,"abstract":"This paper investigates a capacity-based approach to parameter optimization for energy and bandwidth efficient communication systems. In particular, non-coherently detected, bit-interleaved coded, M-ary Gaussian frequency shift keying (GFSK) is considered. Non-coherent detection is accomplished using a sequential, soft-output (SO), soft-decision differential phase detector (SDDPD). The capacity of the proposed system under modulation, channel and detector design constraints is calculated. For a wide range of spectral efficiencies, the most energy efficient combination of GFSK parameters and code rates is identified using information theoretic bounds on reliable signaling. The information outage probabilities under modulation and detector design constraints are calculated for the block fading channel. Select results reveal that the capacity-based approach also helps in identifying the combination of modulation parameters and code rates with the lowest outage probabilities in block fading","PeriodicalId":115298,"journal":{"name":"2006 IEEE International Symposium on Information Theory","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124597981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Worst-case Analysis of the Low-complexity Symbol Grouping Coding Technique
Pub Date: 2006-07-09 | DOI: 10.1109/ISIT.2006.262028
A. Said
The symbol grouping technique is widely used in practice because it allows great reductions in the complexity of entropy coding symbols from large alphabets, at the expense of small losses in compression. While it has been used mostly in an ad hoc manner, it is not known how general this technique is, i.e., for exactly what types of data sources it can be effective. We try to answer this question by searching for worst-case data sources, measuring the performance, and trying to identify trends. We show that finding the worst-case source is a very challenging optimization problem, and propose some solution methods that can be used for alphabets of moderate size. The numerical results provide evidence confirming the hypothesis that all data sources with a large number of symbols can be coded more efficiently, with very small loss, using symbol grouping.
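A minimal illustration of the technique itself (with an assumed Zipf-like source, not the paper's worst-case sources): partition a probability-sorted large alphabet into groups, entropy-code only the group index, and spend ceil(log2(group size)) raw bits on the position within the group; the compression loss is the gap between this two-part cost and the source entropy.

```python
import math

def zipf_pmf(n, s=1.0):
    w = [1.0 / (i + 1) ** s for i in range(n)]
    z = sum(w)
    return [x / z for x in w]                        # probabilities in decreasing order

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def grouped_cost(p, group_size):
    """Average bits/symbol when the group index is ideally entropy coded and the
    position inside each group is sent with a fixed-length (raw binary) code."""
    groups = [p[i:i + group_size] for i in range(0, len(p), group_size)]
    q = [sum(g) for g in groups]                     # group probabilities
    within = sum(qg * math.ceil(math.log2(len(g))) for qg, g in zip(q, groups) if len(g) > 1)
    return entropy(q) + within

p = zipf_pmf(4096)                                   # a 4096-symbol source (illustrative)
H = entropy(p)
for gs in (1, 4, 16, 64):
    print(f"group size {gs:3d}: {grouped_cost(p, gs):.3f} bits/symbol   (source entropy {H:.3f})")
```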
{"title":"Worst-case Analysis of the Low-complexity Symbol Grouping Coding Technique","authors":"A. Said","doi":"10.1109/ISIT.2006.262028","DOIUrl":"https://doi.org/10.1109/ISIT.2006.262028","url":null,"abstract":"The symbol grouping technique is widely used in practice because it allows great reductions on the complexity of entropy coding symbols from large alphabets, at the expense of small losses in compression. While it has been used mostly in an ad hoc manner, it is not known how general this technique is, i.e., in exactly what type of data sources it can be effective. We try to answer this question by searching for worst-case data sources, measuring the performance, and trying to identify trends. We show that finding the worst-case source is a very challenging optimization problem, and propose some solution methods that can be used in alphabets of moderate size. The numerical results provide evidence confirming the hypotheses that all data sources with large number of symbols can be more efficiently coded, with very small loss, using symbol grouping","PeriodicalId":115298,"journal":{"name":"2006 IEEE International Symposium on Information Theory","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124691350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Difference Sampling Theorems For a Class of Non-Bandlimited Signals
Pub Date: 2006-07-09 | DOI: 10.1109/ISIT.2006.261687
Chen Meng, J. Tuqan
We consider the reconstruction of a class of continuous-time, non-bandlimited signals from their samples and differences. The set of samples and their differences is obtained from an oversampled sequence of the underlying continuous-time signal. This sequence is in turn modeled as the output of a discrete-time multirate interpolation filter. Using this model, we propose a general structure to retrieve the continuous-time signal and derive the corresponding mathematical conditions for its reconstruction using FIR digital filters. FIR filtering is desirable because it guarantees the stability of the reconstruction process.
{"title":"Difference Sampling Theorems For a Class of Non-Bandlimited Signals","authors":"Chen Meng, J. Tuqan","doi":"10.1109/ISIT.2006.261687","DOIUrl":"https://doi.org/10.1109/ISIT.2006.261687","url":null,"abstract":"We consider the reconstruction of a class of continuous time non bandlimited signals from its samples and their differences. The set of samples and their differences is obtained from an oversampled sequence of the underlying continuous time signal. This sequence is in turn modeled as the output of a discrete time multirate interpolation filter. Using this model, we propose a general structure to retrieve the continuous time signal and derive corresponding mathematical conditions for its reconstruction using FIR digital filters. The use of FIR filtering is desired to guarantee the stability of the reconstruction process","PeriodicalId":115298,"journal":{"name":"2006 IEEE International Symposium on Information Theory","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129670029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}