The term analytic information theory has been coined to describe problems of information theory studied by analytic tools. The approach of applying tools from the analysis of algorithms to problems of source coding, and to information theory in general, lies at the crossroads of computer science and information theory. Combining the tools of both areas often yields powerful results, as when computer scientist Abraham Lempel and information theorist Jacob Ziv worked together in the late 1970s to develop the compression algorithms now widely known as Lempel-Ziv algorithms, which underlie the ZIP compression still used extensively in computing today. This monograph surveys the use of these techniques for the rigorous analysis of code redundancy for known sources in lossless data compression. A separate chapter is devoted to a precise analysis of each of three types of lossless data compression schemes: fixed-to-variable length codes, variable-to-fixed length codes, and variable-to-variable length codes. Each of these schemes is described in detail, building upon work done in the latter part of the 20th century to present new and powerful techniques. For the first time, this survey presents the redundancy of universal variable-to-fixed and variable-to-variable length codes in a comprehensive and coherent manner. The monograph will be of interest to computer scientists and information theorists working on modern coding techniques. Written by two leading experts, it provides the reader with a unique, succinct starting point for their own research in the area.
M. Drmota and W. Szpankowski, "Redundancy of Lossless Data Compression for Known Sources by Analytic Methods," Foundations and Trends in Communications and Information Theory, pp. 277-417, 2017. doi: 10.1561/0100000090
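To make the Lempel-Ziv connection concrete, here is a minimal, illustrative sketch of LZ78-style incremental parsing; the parsing rule is standard, the example and names are ours, and the monograph's contribution is the redundancy analysis of such codes, not this routine:

```python
# Toy LZ78-style incremental parsing: the input is split into phrases, each a
# previously seen phrase extended by one new symbol. The number of phrases
# drives the code length, and hence the redundancy analyzed in the monograph.
def lz78_parse(text):
    phrases = {"": 0}            # phrase -> dictionary index (empty phrase = 0)
    output = []                  # (index of longest known prefix, new symbol)
    current = ""
    for ch in text:
        if current + ch in phrases:
            current += ch        # keep extending the current phrase
        else:
            output.append((phrases[current], ch))
            phrases[current + ch] = len(phrases)
            current = ""
    if current:                  # flush a trailing phrase already in the dictionary
        output.append((phrases[current[:-1]], current[-1]))
    return output

print(lz78_parse("abababababa"))
# [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a'), (2, 'a'), (2, 'a')]
```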
The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model for studying clustering and community detection, and it generally provides fertile ground for studying the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a. detection). The main results discussed are the phase transition for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters, and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest to achieve these limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, and classical and nonbacktracking spectral methods. A few open problems are also discussed.
"Community Detection and Stochastic Block Models," Foundations and Trends in Communications and Information Theory, pp. 1-162, 2017. doi: 10.1561/0100000067
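For orientation, the two phase transitions named above take a particularly clean form in the symmetric two-community case; the statement below follows the surveyed literature, with a and b the assumed scaling parameters of the within- and across-community edge probabilities:

```latex
% Symmetric two-community SBM.
% Exact recovery (logarithmic degree regime, p = a\log n/n, q = b\log n/n)
% is solvable -- and efficiently so -- if and only if
\[
  \bigl|\sqrt{a}-\sqrt{b}\bigr| \;\ge\; \sqrt{2}.
\]
% Weak recovery / detection (constant degree regime, p = a/n, q = b/n)
% is solvable if and only if the Kesten--Stigum condition holds:
\[
  (a-b)^2 \;>\; 2\,(a+b).
\]
```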
This monograph describes principles of information theoretic secrecy generation by legitimate parties with public discussion in the presence of an eavesdropper. The parties are guaranteed secrecy in the form of independence from the eavesdropper's observation of the communication. Part I develops basic technical tools for secrecy generation, many of which are potentially of independent interest beyond secrecy settings. Various information theoretic and cryptographic notions of secrecy are compared. Emphasis is placed on central themes of interactive communication and common randomness as well as on core methods of balanced coloring and leftover hash for extracting secret uniform randomness. Achievability and converse results are shown to emerge from "single shot" incarnations that serve to explain essential structure. Part II applies the methods of Part I to secrecy generation in two settings: a multiterminal source model and a multiterminal channel model, in both of which the legitimate parties are afforded privileged access to correlated observations of which the eavesdropper has only partial knowledge. Characterizations of secret key capacity bring out inherent connections to the data compression concept of omniscience and, for a specialized source model, to a combinatorial problem of maximal spanning tree packing in a multigraph. Interactive common information is seen to govern the minimum rate of communication needed to achieve secret key capacity in the two-terminal source model. Furthermore, necessary and sufficient conditions are analyzed for the secure computation of a given function in the multiterminal source model. Based largely on known recent results, this self-contained monograph also includes new formulations with associated new proofs. Supplementing each chapter in Part II are descriptions of several open problems.
P. Narayan and H. Tyagi, "Multiterminal Secrecy by Public Discussion," Foundations and Trends in Communications and Information Theory, pp. 129-275, 2016. doi: 10.1561/0100000072
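As a toy illustration of the leftover-hash method mentioned above (not the monograph's construction; the string length and key length below are arbitrary choices), a randomly seeded member of a 2-universal family, here random linear hashing over GF(2), extracts key bits from a shared string; by the leftover hash lemma the output is close to uniform provided the key length stays sufficiently below the min-entropy of the string given the eavesdropper's observation:

```python
import secrets

def random_linear_hash(n_bits, k_bits):
    # h_A(x) = A x over GF(2): A is a random k x n binary matrix whose rows are
    # packed as integers. The seed A may be communicated publicly.
    rows = [secrets.randbits(n_bits) for _ in range(k_bits)]
    def h(x):
        # bit i of the output is the GF(2) inner product of row i with x
        return sum(((bin(row & x).count("1") & 1) << i) for i, row in enumerate(rows))
    return h

shared_string = secrets.randbits(256)    # correlated randomness held by the parties
extract = random_linear_hash(256, 128)
key = extract(shared_string)             # 128 extracted key bits
print(f"{key:032x}")
```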
Non-volatile memories (NVMs) have emerged as the primary replacement of hard-disk drives for a variety of storage applications, including personal electronics, mobile computing, intelligent vehicles, enterprise storage, data warehousing, and data-intensive computing systems. Channel coding schemes are a necessary tool for ensuring target reliability and performance of NVMs. However, due to operational asymmetries in NVMs, conventional coding approaches - commonly based on designing for the Hamming metric - no longer apply. Given the immediate need for practical solutions and the shortfalls of existing methods, the fast-growing discipline of coding for NVMs has resulted in several key innovations that not only answer the needs of modern storage systems but also directly contribute to the analytical toolbox of coding theory at large. This monograph discusses recent advances in coding for NVMs, covering topics such as error correction coding based on novel algebraic and graph-based methods, rank modulation, rewriting codes, and constrained coding. Our goal for this work is threefold: to illuminate the advantages - as well as challenges - associated with modern NVMs, to present a succinct overview of several exciting recent developments in coding for memories, and, by presenting numerous potential research directions, to inspire other researchers to contribute to this timely and thriving discipline.
L. Dolecek and F. Sala, "Channel Coding Methods for Non-Volatile Memories," Foundations and Trends in Communications and Information Theory, pp. 1-128, 2016. doi: 10.1561/0100000084
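To illustrate rank modulation, one of the techniques listed above, the toy sketch below stores a message in the relative order of cell charge levels via the factorial number system (Lehmer code); the cell count and the mapping are illustrative choices, not a scheme taken from the monograph:

```python
from math import factorial

def message_to_ranking(msg, m):
    # Encode msg in {0, ..., m!-1} as a permutation of m cells: information is
    # carried by which cell is charged highest, second highest, and so on.
    cells = list(range(m))
    perm = []
    for i in range(m - 1, -1, -1):
        idx, msg = divmod(msg, factorial(i))
        perm.append(cells.pop(idx))
    return perm                       # perm[0] = cell with the highest charge level

def ranking_to_message(perm):
    # Invert the Lehmer-code mapping.
    cells = sorted(perm)
    msg = 0
    for i, p in enumerate(perm):
        idx = cells.index(p)
        cells.pop(idx)
        msg += idx * factorial(len(perm) - 1 - i)
    return msg

for msg in range(6):                  # 3 cells store one of 3! = 6 messages
    assert ranking_to_message(message_to_ranking(msg, 3)) == msg
```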
Multi-way communication is a means to significantly improve the spectral efficiency of wireless networks. For instance, in a bi-directional or two-way communication channel, two users can simultaneously use the transmission medium to exchange information, thus achieving up to twice the rate that would be achieved had each user transmitted separately. This monograph provides an overview of the developments in this research area since it was initiated by Shannon. The basic two-way communication channel is considered first, followed by the two-way relay channel obtained by the deployment of an additional cooperative relay node to improve the overall communication performance. This basic setup is then extended to multi-user systems. For all these setups, fundamental limits on the achievable rates are reviewed, thereby making use of a linear high-SNR deterministic channel model to provide valuable insights which are helpful when discussing the coding schemes for Gaussian channel models in detail. Several tools and communication strategies are used in the process, including but not limited to computation, signal-space alignment, and nested-lattice codes. Finally, extensions of multi-way communication channels to multiple antenna settings are discussed.
A. Chaaban and A. Sezgin, "Multi-way Communications: An Information Theoretic Perspective," Foundations and Trends in Communications and Information Theory, pp. 185-371, 2015. doi: 10.1561/0100000081
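The two-way relaying gain mentioned above can be seen in a toy, noise-free form: with a network-coded broadcast, the relay needs two channel uses instead of four to complete the exchange. The sketch below is purely illustrative and ignores the multiple-access and broadcast channel details treated in the monograph:

```python
import random

def two_way_relay_exchange(msg_a, msg_b):
    # Phase 1 (multiple access): the relay learns both messages (idealized here).
    relay_broadcast = msg_a ^ msg_b          # Phase 2: a single XOR-ed broadcast
    decoded_at_a = relay_broadcast ^ msg_a   # A removes its own message ...
    decoded_at_b = relay_broadcast ^ msg_b   # ... and B does the same
    return decoded_at_a, decoded_at_b

a, b = random.getrandbits(8), random.getrandbits(8)
got_b, got_a = two_way_relay_exchange(a, b)
assert (got_a, got_b) == (a, b)              # both users recover the other's message
```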
This monograph illustrates a novel approach to several key open problems in network information theory: instead of exact solutions, it seeks approximate solutions accompanied by universal guarantees on the gap to optimality. We seek universal guarantees that are independent of problem parameters, but perhaps dependent on the problem structure. At the heart of this approach is the development of simple, deterministic models that capture the main features of information sources and communication channels, and are utilized to approximate more complex models. The program advocated in this monograph is to first seek solutions for the simplified deterministic model and then use the insights and the solution of the simplified model to connect it to the original problem. The goal of this deterministic-approximation approach is to obtain universal approximate characterizations of the original channel capacity region and source coding rate regions. The translation of the insights from the deterministic framework to the original problem might need non-trivial steps either in the coding scheme or in the outer bounds. The applications of this deterministic approximation approach are demonstrated in four central problems, namely unicast/multicast relay networks, interference channels, multiple descriptions source coding, and joint source-channel coding over networks. For each of these problems, it is illustrated how the proposed approach can be utilized to approximate the solution and draw engineering insights. Throughout the monograph, many extensions and future directions are addressed, and several open problems are presented in each chapter. The monograph is concluded by illustrating other deterministic models that can be utilized to obtain tighter approximation results, and discussing some recent developments on the utilization of deterministic models in multi-flow multi-hop wireless networks.
A. Avestimehr, S. Diggavi, C. Tian, and D. Tse, "An Approximation Approach to Network Information Theory," Foundations and Trends in Communications and Information Theory, pp. 1-183, 2015. doi: 10.1561/0100000042
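A toy rendering of the linear deterministic channel model at the heart of this approach is sketched below; the number of bit levels and the link gains are made-up parameters. Each link of gain n delivers only the top n bit levels of the transmitted vector, and signals colliding at a receiver add modulo 2:

```python
import numpy as np

q = 5                                       # number of bit levels (maximum link gain)

def shift_matrix(q, n):
    # Down-shift by q - n positions: only the n most significant bits survive.
    return np.eye(q, k=-(q - n), dtype=int)

def received(signals, gains):
    # Superposition of shifted inputs over GF(2).
    y = np.zeros(q, dtype=int)
    for x, n in zip(signals, gains):
        y = (y + shift_matrix(q, n) @ x) % 2
    return y

x1 = np.array([1, 0, 1, 1, 0])              # bit levels, most significant first
x2 = np.array([0, 1, 1, 0, 1])
print(received([x1, x2], gains=[5, 3]))     # x2 is weaker: only its top 3 bits arrive
```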
This monograph presents a unified framework for energy efficiency maximization in wireless networks via fractional programming theory. The definition of energy efficiency is introduced, with reference to single-user and multi-user wireless networks, and it is observed how the problem of resource allocation for energy efficiency optimization is naturally cast as a fractional program. An extensive review of the state of the art in energy efficiency optimization by fractional programming is provided, with reference to centralized and distributed resource allocation schemes. A solid background on fractional programming theory is provided. The key notion of generalized concavity is presented and its strong connection with fractional functions described. A taxonomy of fractional problems is introduced, and for each class of fractional problem, general solution algorithms are described, discussing their complexity and convergence properties. The described theoretical and algorithmic framework is then applied to energy efficiency maximization problems in practical wireless networks. A general system and signal model is developed which encompasses many relevant special cases, such as one-hop and two-hop heterogeneous networks, multi-cell networks, small-cell networks, device-to-device systems, cognitive radio systems, and hardware-impaired networks, wherein multiple antennas and multiple subcarriers may be employed. Energy-efficient resource allocation algorithms are developed, considering both centralized, cooperative schemes and distributed approaches for self-organizing networks. Finally, some remarks on future lines of research are given, stating some open problems that remain to be studied. It is shown how the described framework is general enough to be extended in these directions, proving useful in tackling future challenges that may arise in the design of energy-efficient future wireless networks.
A. Zappone and E. A. Jorswieck, "Energy Efficiency in Wireless Networks via Fractional Programming Theory," Foundations and Trends in Communications and Information Theory, pp. 185-396, 2015. doi: 10.1561/0100000088
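One classical routine from this fractional-programming toolbox is Dinkelbach's algorithm, which turns a ratio maximization into a sequence of parametrized subtractive problems. The sketch below applies it to a single-link energy-efficiency example with made-up circuit power and power budget, solving the inner concave problem by a simple grid search for brevity:

```python
import numpy as np

P_c, P_max = 1.0, 10.0                        # assumed circuit power and power budget
rate = lambda p: np.log2(1.0 + p)             # numerator: achievable rate
power = lambda p: p + P_c                     # denominator: consumed power

def dinkelbach(tol=1e-6, max_iter=50):
    # Maximize rate(p) / power(p) by repeatedly maximizing rate(p) - lam * power(p).
    grid = np.linspace(0.0, P_max, 100001)
    lam = 0.0
    for _ in range(max_iter):
        p_star = grid[np.argmax(rate(grid) - lam * power(grid))]
        F = rate(p_star) - lam * power(p_star)
        lam = rate(p_star) / power(p_star)    # update the candidate ratio
        if abs(F) < tol:                      # F(lam) = 0 at the optimal ratio
            break
    return p_star, lam

p_opt, ee_opt = dinkelbach()
print(f"optimal transmit power ~ {p_opt:.3f}, energy efficiency ~ {ee_opt:.4f}")
```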
This monograph presents a unified treatment of single- and multi-user problems in Shannon's information theory where we depart from the requirement that the error probability decays asymptotically in the blocklength. Instead, the error probabilities for various problems are bounded above by a non-vanishing constant and the spotlight is shone on achievable coding rates as functions of the growing blocklengths. This represents the study of asymptotic estimates with non-vanishing error probabilities. In Part I, after reviewing the fundamentals of information theory, we discuss Strassen's seminal result for binary hypothesis testing where the type-I error probability is non-vanishing and the rate of decay of the type-II error probability with growing number of independent observations is characterized. In Part II, we use this basic hypothesis testing result to develop second- and, sometimes, even third-order asymptotic expansions for point-to-point communication. Finally, in Part III, we consider network information theory problems for which the second-order asymptotics are known. These problems include some classes of channels with random state, the multiple-encoder distributed lossless source coding (Slepian-Wolf) problem, and special cases of the Gaussian interference and multiple-access channels. Finally, we discuss avenues for further research.
V. Tan, "Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities," Foundations and Trends in Communications and Information Theory, pp. 1-184, 2014. doi: 10.1561/0100000086
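The prototypical expansion of this kind, Strassen's second-order result for a discrete memoryless channel, is reproduced below for orientation (stated informally and under the usual regularity conditions; C is the capacity, V the channel dispersion, and Q^{-1} the inverse Gaussian tail function):

```latex
\[
  \log M^*(n,\varepsilon) \;=\; nC \;-\; \sqrt{nV}\,Q^{-1}(\varepsilon) \;+\; O(\log n),
\]
% where M^*(n, epsilon) is the maximum number of messages transmissible over n
% channel uses with error probability at most epsilon.
```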
Numerous voice, still image, audio, and video compression standards have been developed over the last 25 years, and significant advances in the state of the art have been achieved. However, in the more than 50 years since Shannon's seminal 1959 paper, no rate distortion bounds for voice and video have been forthcoming. In this volume, we present the first rate distortion bounds for voice and video that actually lower bound the operational rate distortion performance of the best-performing voice and video codecs. The bounds indicate that improvements in rate distortion performance of approximately 50% over the best-performing voice and video codecs are possible. Research directions to improve the new bounds are discussed.
J. Gibson and J. Hu, "Rate Distortion Bounds for Voice and Video," Foundations and Trends in Communications and Information Theory, pp. 379-514, 2014. doi: 10.1561/0100000061
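For context, the classical closed-form example of a rate distortion function, that of a memoryless Gaussian source under mean-squared error, is recalled below; it is not one of the new bounds developed in the monograph:

```latex
% Memoryless Gaussian source of variance \sigma^2, mean-squared-error distortion D:
\[
  R(D) \;=\; \frac{1}{2}\log_2\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2 .
\]
```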
The most commonly deployed multi-storage-device systems are RAID arrays housed in a single computing unit. The idea of distributing data across multiple disks has been naturally extended to multiple storage nodes which are interconnected over a network and are called Networked Distributed Storage Systems (NDSS). The simplest coding techniques, based on replication, are often used to ensure redundancy in these systems, but given the sheer volume of data that needs to be stored and the overheads of replication, other coding techniques are being developed. Coding Techniques for Repairability in Networked Distributed Storage Systems surveys coding techniques for NDSS which aim at achieving (1) fault tolerance efficiently and (2) good repairability characteristics, to replenish the lost redundancy and ensure data durability over time. This is a vibrant area of research, and this book is the first overview that presents the background required to understand the problems as well as covering the most important techniques currently being developed. Coding Techniques for Repairability in Networked Distributed Storage Systems is essential reading for all researchers and engineers involved in designing and researching computer storage systems.
F. Oggier and A. Datta, "Coding Techniques for Repairability in Networked Distributed Storage Systems," Foundations and Trends in Communications and Information Theory, pp. 383-466, 2013. doi: 10.1561/0100000068
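A representative result from the repairability literature surveyed in this book is the cut-set bound on regenerating codes due to Dimakis et al., reproduced below in the standard notation (file size B, n nodes each storing alpha symbols, a failed node repaired by downloading beta symbols from each of d surviving helpers, and the data recoverable from any k nodes):

```latex
% Cut-set bound for regenerating codes: such a code can exist only if
\[
  B \;\le\; \sum_{i=0}^{k-1} \min\{\alpha,\;(d-i)\beta\}.
\]
% The two extremes of this storage--repair-bandwidth tradeoff are the
% minimum-storage (MSR) and minimum-bandwidth (MBR) regenerating points.
```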