{"title":"Pooling, Splitting, and Restituting Information to Overcome Total Failure of Some Channels of Communication","authors":"C. Asmuth, G. Blakley","doi":"10.1109/SP.1982.10019","DOIUrl":null,"url":null,"abstract":"This paper solves an analog of the problem which gave rise to the theory of error control codes by methods, of miniscule computational complexity, taken from the theory of TIPS (also called key safeguarding schemes, threshold schemes, secret sharing, key sharing, and IPS). The problem solved herein is the following. Information is flowing through several parallel channels from a sending node S to a receiving node R. The possibility exists that one or more channels will be rendered inoperative, but it is deemed essential that all the information get through. Suppose that the organization responsible for the information flow wants to protect Itself against ths breakdown of some of the total number d of available channels. It thus wants to be able to use \"coding\" and \"decoding\" processes, which are quick to implement on cheap microprocessors, for blending all the information H due to leave S into a slurry which can be poured into the d channels in such a way that whatever comes out of any b channels at R is enough to reconstruct H completely. It wants more than a high speed implementation of this process on cheap hardware. It wants to send as few bits as possible. Suppose, for example, that it has 100 bits to send and that it requires assurance that they will all get through even if 3 channels fail. It cannot predict which 3 channels might fail and it knows, of course, that it cannot reconstruct the 100 bits to be sent from S unless 100 bits get through the channels which continue to function (total bit cost: 100 plus the number of bits sent on channels which fail). Each of the following solutions to its problem is therefore optimal from an information theoretic viewpoint: 1. A way to reconstruct H from l-bit transmissionson any 100 of 103 channels (involves 3 wasted bits); 2. A way to reconstruct H from 10-bit transmissions on any 10 of 13 channels (involves 30 wasted bits); 3. A way to reconstruct H from 25-bit transmissions on any 4 of 7 channels (involves 75 wasted bits); 4. A way to reconstruct H from 100-bit transmissions on any 1 of 4 (involves 300 wasted bits). Common sense is inclined to reject at least the first (too many channels used) and last (too many bits sent) of the \"optimal\" solutions above. This paper shows how to produce cheap high speed processes which come within a hair of being optimal (in the sense just described) solutions to the problem in question. It describes parameter settings in which the problem cannot be solved satisfactorilyby at leastsome approaches. It discusses ways to decide on which \"optimal\" solution to the problem is preferable. The idea behind the theory presented here was originally to provide insurance against lose of information due to long-term outage of several channels of communication. The insurance turned out to be cheap (involving only general-purpose processor and memory chips) and compatible with communications in the megabit per second range. But the process involved conferred an unlooked-for additional benefit. 
It provided a novel way to multiplex digital communications and, in so doing, led to the invention of a variety of mathematically natural \"stepup information transformers\" (devices for taking several streams of data being produced at various low bit per second rates and merging them to yield transmitted data streams at higher bit rates on a number of channels which can, in some circumstances, be smaller than the number of source streams of data) and \"stepdown information transformers\" (devices which take the output of several high bit per second rate data sources and transmit them, on a number of channels exceeding the number of sources, at lower bit per second transmission rates in such a way that the high rate streams reemerge separately at the receiver). Thus devices which provide reliability can sometimes also confer economies on communications systems. It became evident that there is a natural way to cascade the processes described below. This cascading operation makes possible the use of two or three microprocessors to overcome an inherent limitation of a single microprocessor. A single 32-bit micro cannot cope with two bit streams when one has more than 30 times the bit rate of the other. But a two chip cascade can deal with bit streams whose bit rates differ by a factor of hundreds. Three chip cascades can process still more disparate bit streams. Finally, it appeared that the same theory can be ueed to provide low cost reliability in packet-switching networks where packeta can be destroyed in collisions, and can be employed in chip design to provide fault tolerance.","PeriodicalId":195978,"journal":{"name":"1982 IEEE Symposium on Security and Privacy","volume":"71 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1982-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"24","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"1982 IEEE Symposium on Security and Privacy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SP.1982.10019","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 24
Abstract
This paper solves an analog of the problem which gave rise to the theory of error control codes, by methods of minuscule computational complexity taken from the theory of TIPS (also called key safeguarding schemes, threshold schemes, secret sharing, key sharing, and IPS). The problem solved herein is the following. Information is flowing through several parallel channels from a sending node S to a receiving node R. The possibility exists that one or more channels will be rendered inoperative, but it is deemed essential that all the information get through. Suppose that the organization responsible for the information flow wants to protect itself against the breakdown of some of the total number d of available channels. It thus wants to be able to use "coding" and "decoding" processes, which are quick to implement on cheap microprocessors, for blending all the information H due to leave S into a slurry which can be poured into the d channels in such a way that whatever comes out of any b channels at R is enough to reconstruct H completely. It wants more than a high-speed implementation of this process on cheap hardware. It wants to send as few bits as possible.

Suppose, for example, that it has 100 bits to send and that it requires assurance that they will all get through even if 3 channels fail. It cannot predict which 3 channels might fail, and it knows, of course, that it cannot reconstruct the 100 bits to be sent from S unless 100 bits get through the channels which continue to function (total bit cost: 100 plus the number of bits sent on channels which fail). Each of the following solutions to its problem is therefore optimal from an information-theoretic viewpoint:

1. A way to reconstruct H from 1-bit transmissions on any 100 of 103 channels (involves 3 wasted bits);
2. A way to reconstruct H from 10-bit transmissions on any 10 of 13 channels (involves 30 wasted bits);
3. A way to reconstruct H from 25-bit transmissions on any 4 of 7 channels (involves 75 wasted bits);
4. A way to reconstruct H from 100-bit transmissions on any 1 of 4 channels (involves 300 wasted bits).

Common sense is inclined to reject at least the first (too many channels used) and last (too many bits sent) of the "optimal" solutions above. This paper shows how to produce cheap, high-speed processes which come within a hair of being optimal (in the sense just described) solutions to the problem in question. It describes parameter settings in which the problem cannot be solved satisfactorily by at least some approaches. It discusses ways to decide which "optimal" solution to the problem is preferable. The idea behind the theory presented here was originally to provide insurance against loss of information due to long-term outage of several channels of communication. The insurance turned out to be cheap (involving only general-purpose processor and memory chips) and compatible with communications in the megabit-per-second range. But the process involved conferred an unlooked-for additional benefit.
It provided a novel way to multiplex digital communications and, in so doing, led to the invention of a variety of mathematically natural "stepup information transformers" (devices which take several streams of data produced at various low bit-per-second rates and merge them to yield transmitted data streams at higher bit rates on a number of channels which can, in some circumstances, be smaller than the number of source streams) and "stepdown information transformers" (devices which take the output of several high-rate data sources and transmit them, on a number of channels exceeding the number of sources, at lower bit-per-second rates in such a way that the high-rate streams reemerge separately at the receiver). Thus devices which provide reliability can sometimes also confer economies on communications systems. It became evident that there is a natural way to cascade the processes described below. This cascading operation makes it possible to use two or three microprocessors to overcome an inherent limitation of a single microprocessor. A single 32-bit micro cannot cope with two bit streams when one has more than 30 times the bit rate of the other, but a two-chip cascade can deal with bit streams whose bit rates differ by a factor of hundreds, and three-chip cascades can process still more disparate bit streams. Finally, it appeared that the same theory can be used to provide low-cost reliability in packet-switching networks, where packets can be destroyed in collisions, and can be employed in chip design to provide fault tolerance.
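The b-of-d pooling and splitting described above can be illustrated with a minimal sketch based on polynomial evaluation over a prime field, in the spirit of threshold schemes. This is an illustrative stand-in, not the paper's own construction, and the names split, reconstruct, and the field size P are hypothetical choices made here: the message H is cut into b symbols used as polynomial coefficients, each of the d channels carries one evaluation of that polynomial, and any b surviving channels recover H by Lagrange interpolation, so each channel carries only about |H|/b symbols.

```python
# Illustrative sketch (not the paper's construction): b-of-d information
# dispersal by polynomial evaluation over a prime field. The message is cut
# into b symbols used as polynomial coefficients; each channel carries one
# evaluation, and any b surviving channels recover the message by Lagrange
# interpolation. Per-channel cost is |H|/b symbols, as in the abstract's examples.

P = 257  # small prime field for illustration; a real system would size the field to the symbol width

def split(symbols, d):
    """symbols: the b message symbols (integers mod P). Returns d (x, y) shares, one per channel."""
    b = len(symbols)
    assert b <= d < P
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(symbols)) % P
    return [(x, poly(x)) for x in range(1, d + 1)]

def reconstruct(shares, b):
    """Recover the b message symbols from any b surviving shares via Lagrange interpolation."""
    shares = shares[:b]
    coeffs = [0] * b
    for j, (xj, yj) in enumerate(shares):
        # Build the basis polynomial L_j(x) = prod_{m != j} (x - x_m) / (x_j - x_m)
        basis = [1]   # coefficients of the running product, lowest degree first
        denom = 1
        for m, (xm, _) in enumerate(shares):
            if m == j:
                continue
            denom = denom * (xj - xm) % P
            # multiply the running product by (x - xm)
            new = [0] * (len(basis) + 1)
            for k, c in enumerate(basis):
                new[k] = (new[k] - c * xm) % P
                new[k + 1] = (new[k + 1] + c) % P
            basis = new
        scale = yj * pow(denom, P - 2, P) % P   # division via Fermat inverse
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

# Any b = 4 of d = 7 channels suffice, as in solution 3 of the abstract.
msg = [72, 101, 108, 112]          # four symbols of the message H
shares = split(msg, 7)
assert reconstruct(shares[2:6], 4) == msg
```

With b = 4 and d = 7, a 100-bit H splits into 25-bit transmissions per channel, so the three spare channels account for the 75 "wasted" bits of solution 3 above; the same sketch with b = 10, d = 13 reproduces solution 2.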