Algorithmic research problems in molecular bioinformatics
Thomas Lengauer
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253471
The large amounts of data being assembled in the genome sequencing projects provide a grand challenge to science, namely their interpretation. There are several aspects to this interpretation, such as identifying genes, determining the structure of the encoded proteins, discovering the mechanisms by which proteins execute their biological function, and gaining insight into what role noncoding regions of the DNA play in gene regulation and expression, as well as in metabolism. Due to increased computing power and, especially, sophisticated graphics technology, one can visualize the structure and dynamics of molecules on the computer screen. What is still largely missing is a set of reliable models and algorithmic methods for deriving molecular structures from sequence data, as well as methods for the reliable prediction and analysis of interactions between biomolecules such as enzymes and their substrates. The author points out a few problems for which careful modeling and the development of appropriate algorithmic techniques are at the center of progress in computer-aided molecular biology.
{"title":"Algorithmic research problems in molecular bioinformatics","authors":"Thomas Lengauer","doi":"10.1109/ISTCS.1993.253471","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253471","url":null,"abstract":"The large amounts of data being assembled in the genome sequencing projects provide a grand challenge to science, namely their interpretation. There are several aspects to this interpretation such as identifying genes, determining the structure of the encoded proteins, discovering the mechanisms, by which proteins execute their biological function, and gaining insights into what role noncoding regions of the DNA play in gene regularization and expression, as well as metabolism. Due to the increased computing power and, especially, due to sophisticated graphics technology, one can visualize the structure and dynamics of molecules on the computer screen. What is still largely missing is a set of reliable models and algorithmic methods for deriving molecular structures on the basis of sequence data, as well as methods for the reliable prediction and analysis of interactions between biomolecules such as enzymes and their substrates. The author points out a few problems for which careful modeling and the development of appropriate algorithmic techniques is at the center of progress in computer-aided molecular biology.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126772778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A formalization of superposition refinement
K. Sere
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253467
One form of program refinement is to add new variables to the state, together with code that manipulates these new variables. When the addition of new variables and associated computation code is done in a way that prevents the old computation of the program from being disturbed, the author calls it superposition. He studies superposition in the context of constructing parallel programs by stepwise refinement, where the added computation in each step can consist of an entire parallel algorithm. Hence, it is important to find methods that are easy to use and also guarantee the correctness of the operation. It is also important to be able to superpose one algorithm, such as a termination detection algorithm, onto several different original algorithms. He therefore gives a method for defining and using such superposable modules.
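As a toy illustration (ours, not Sere's formalism), the sketch below superposes a step counter onto a base action: the new variable is written alongside the old computation but never read by it, so the original behavior is undisturbed.

```python
# Toy illustration of superposition: the base "action" updates the
# original state variables; the superposed action additionally maintains
# a new variable (a step counter) that the base computation never reads.

def base_action(state):
    """Original computation: repeatedly halve x until it reaches 0."""
    if state["x"] > 0:               # guard
        state["x"] //= 2             # body touches only old variables

def superposed_action(state):
    """Base action plus superposed bookkeeping on a fresh variable."""
    enabled = state["x"] > 0
    base_action(state)               # old computation runs unchanged
    if enabled:
        state["steps"] += 1          # new variable; base never reads it

state = {"x": 40, "steps": 0}
while state["x"] > 0:
    superposed_action(state)
print(state)                          # {'x': 0, 'steps': 6}
```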
{"title":"A formalization of superposition refinement","authors":"K. Sere","doi":"10.1109/ISTCS.1993.253467","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253467","url":null,"abstract":"One form of program refinement is to add new variables to the state, together with code that manipulates these new variables. When the addition of new variables and associated computation code is done in a way that prevents the old computation of the program from being disturbed, then the author calls it superpositioning. He studies superposition in the context of constructing parallel programs following the stepwise refinement approach, where the added computation in each step could consist of an entire parallel algorithm. Hence, it is important to find methods that are easy to use and also guarantee the correctness of the operation. It is also important be able to superpose one algorithm, like a termination detection algorithm, onto several different original algorithms. He therefore gives a method for defining and using such superposable modules.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129089658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using selective path-doubling for parallel shortest-path computations
E. Cohen
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253481
The author considers parallel shortest-path computations in weighted undirected graphs $G=(V,E)$, where $n=|V|$ and $m=|E|$. The standard path-doubling algorithm consists of $O(\log n)$ phases, where in each phase, for every triple of vertices $(u_1,u_2,u_3) \in V^3$, she updates the distance between $u_1$ and $u_3$ to be no more than the sum of the previous-phase distances between $(u_1,u_2)$ and $(u_2,u_3)$. The work performed in each phase, $O(n^3)$ (linear in the number of triples), is currently the bottleneck in NC shortest-path computations. She introduces a new algorithm that, for $\delta=o(n)$, considers only $O(n\delta^2)$ triples. Roughly, the resulting NC algorithm performs $O(n\delta^2)$ work and augments $E$ with $O(n\delta)$ new weighted edges such that between every pair of vertices there exists a minimum-weight path of size (number of edges) $\tilde{O}(n/\delta)$ (where $\tilde{O}(f) \equiv O(f\,\mathrm{polylog}\,n)$). To compute shortest paths, she applies work-efficient algorithms, where the time depends on the size of shortest paths, to the augmented graph. She obtains an $\tilde{O}(t)$-time, $\tilde{O}(|S|n^2 + n^3/t^2)$-work deterministic PRAM algorithm for computing shortest paths from $|S|$ sources to all other vertices, where $t \le n$ is a parameter. When the ratio of the largest edge weight to the smallest edge weight is $n^{O(\mathrm{polylog}\,n)}$, the algorithm computes shortest paths. When weights are arbitrary, it computes paths within a factor of $1+n^{-\Omega(\mathrm{polylog}\,n)}$ of shortest. This improves over previous bounds. She achieves improved $\tilde{O}(|S|(n^2/t+m) + n^3/t^2)$ work for computing approximate distances to within a factor of $(1+\epsilon)$ (for any fixed $\epsilon$).
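To make the phase update concrete, here is a minimal sequential sketch of the standard (non-selective) path-doubling loop; the triple relaxation inside one phase is exactly the $O(n^3)$-work step whose cost the paper's selective variant reduces. The graph encoding (adjacency matrix with math.inf for absent edges) is an illustrative choice, not from the paper.

```python
import math

def path_doubling(dist):
    """Standard path doubling: O(log n) min-plus squarings of the
    distance matrix. dist[i][j] is the edge weight (math.inf if absent,
    0 on the diagonal). Sequential stand-in for the parallel phases."""
    n = len(dist)
    phases = max(1, math.ceil(math.log2(n)))
    for _ in range(phases):
        new = [row[:] for row in dist]
        # one phase: relax every triple (u1, u2, u3) -- Theta(n^3) work
        for u1 in range(n):
            for u2 in range(n):
                for u3 in range(n):
                    if dist[u1][u2] + dist[u2][u3] < new[u1][u3]:
                        new[u1][u3] = dist[u1][u2] + dist[u2][u3]
        dist = new
    return dist
```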
{"title":"Using selective path-doubling for parallel shortest-path computations","authors":"E. Cohen","doi":"10.1109/ISTCS.1993.253481","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253481","url":null,"abstract":"The author considers parallel shortest-path computations in weighted undirected graphs G=(V,E), where n= mod V mod and m= mod E mod . The standard path-doubling algorithms consists of O(log n) phases, where in each phase, for every triple of vertices (u/sub 1/, u/sub 2/, u/sub 3/) in V/sup 3/, she updates the distance between u/sub 1/ and u/sub 3/ to be no more than the sum of the previous-phase distances between (u/sub 1/, u/sub 2/) and (u/sub 2/, u/sub 3/). The work performed in each phase, O(n/sup 3/) (linear in the number of triples), is currently the bottleneck in NC shortest-paths computations. She introduces a new algorithm that for delta =o(n), considers only O(n delta /sup 2/) triples. Roughly, the resulting NC algorithm performs O(n delta /sup 2/) work and augments E with O(n delta ) new weighted edges such that between every pair of vertices, there exists a minimum weight path of size (number of edges) O(n/ delta ) (where O(f) identical to O(f polylog n)). To compute shortest-paths, she applies work-efficient algorithms, where the time depends on the size of shortest paths, to the augmented graph. She obtains a O(t) time O( mod S mod n/sup 2/+n/sup 3//t/sup 2/) work deterministic PRAM algorithm for computing shortest-paths form mod S mod sources to all other vertices, where t<or=n is a parameter. When the ratio of the largest edge weight and the smallest edge weight is n/sup O(polylog/ /sup n)/, the algorithm computes shortest paths. When weights are arbitrary, it computes paths within a factor of 1+n/sup - Omega (polylog/ /sup n)/ of shortest. This improves over previous bounds. She achieves improved O( mod S mod (n/sup 2//t+m)+n/sup 3//t/sup 2/) work for computing approximate distances to within a factor of (1+ in ) (for any fixed in ).<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133157579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One-way functions are essential for non-trivial zero-knowledge
R. Ostrovsky, A. Wigderson
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253489
If one-way functions exist, then there are zero-knowledge proofs for every language in PSPACE. The authors prove that unless very weak one-way functions exist, zero-knowledge proofs can be given only for languages in BPP. For average-case definitions of BPP they prove an analogous result under the assumption that uniform one-way functions do not exist. Thus, very loosely speaking, zero-knowledge is either useless (exists only for 'easy' languages), or universal (exists for every provable language).
{"title":"One-way functions are essential for non-trivial zero-knowledge","authors":"R. Ostrovsky, A. Wigderson","doi":"10.1109/ISTCS.1993.253489","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253489","url":null,"abstract":"If one-way functions exist, then there are zero-knowledge proofs for every language in PSPACE. The authors prove that unless very weak one-way functions exist, zero-knowledge proofs can be given only for languages in BPP. For average-case definitions of BPP they prove an analogous result under the assumption that uniform one-way functions do not exist. Thus, very loosely speaking, zero-knowledge is either useless (exists only for 'easy' languages), or universal (exists for every provable language).<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128647436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Structured design of self-stabilizing programs
F. Stomp
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253472
Much effort has been invested in recent years in proposing self-stabilizing programs for various purposes. Little attention has been paid, however, to the structured, formal design and verification of such programs. The current paper presents a sound and formal principle for designing, and hence verifying, self-stabilizing programs. This principle, which combines programs into larger ones, is formulated in linear-time temporal logic and captures in a natural way the underlying intuition of many designers of self-stabilizing programs. The proposed principle is applied to a program, due to Ghosh and Karaata (1991), for coloring a graph.
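For flavor, here is a minimal self-stabilizing coloring rule under a central daemon; it is an illustrative toy of ours, not the Ghosh-Karaata algorithm or the paper's design principle. Starting from an arbitrary coloring, each move removes at least one monochromatic edge and creates none, so the system stabilizes.

```python
def stabilize_coloring(adj, color):
    """Toy self-stabilizing graph coloring (central daemon): any node in
    conflict with a neighbor recolors itself with a color absent from
    its neighborhood. A free color always exists since the palette has
    more colors than the maximum degree, so every move strictly reduces
    the number of monochromatic edges and the loop terminates."""
    def conflicted():
        return [v for v in adj if any(color[v] == color[u] for u in adj[v])]
    while (bad := conflicted()):
        v = bad[0]
        taken = {color[u] for u in adj[v]}
        color[v] = next(c for c in range(len(adj) + 1) if c not in taken)
    return color

# arbitrary (faulty) start state on a triangle: all nodes colored 0
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(stabilize_coloring(adj, {0: 0, 1: 0, 2: 0}))  # {0: 1, 1: 2, 2: 0}
```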
{"title":"Structured design of self-stabilizing programs","authors":"F. Stomp","doi":"10.1109/ISTCS.1993.253472","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253472","url":null,"abstract":"Much effort has been invested in recent years to propose self-stabilizing programs for various purposes. Only little attention has been paid to the structured, formal design and verification of such programs. The current paper presents a sound and formal principle for designing, hence verifying self-stabilizing programs. This principle, which combines programs into larger ones, is formulated in linear time temporal logic and captures the underlying intuition of many designers of self-stabilizing programs in a natural way. The proposed principle is applied to a program, due to Ghosh and Karaata (1991), for coloring a graph.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115620599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A lower bound for linear approximate compaction
S. Chaudhuri
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253487
The $\lambda$-approximate compaction problem is: given an input array of $n$ values, each either 0 or 1, place each value in the output array so that all the 1s are in the first $(1+\lambda)k$ array locations, where $k$ is the number of 1s in the input and $\lambda$ is an accuracy parameter. This problem is of fundamental importance in parallel computation because of its applications to processor allocation and approximate counting. When $\lambda$ is a constant, the problem is called linear approximate compaction (LAC). On the CRCW PRAM model, there is an algorithm that solves approximate compaction in $O((\log\log n)^3)$ time for $\lambda = 1/\log\log n$, using $n/(\log\log n)^3$ processors. This is close to the best possible. Specifically, the author proves that LAC requires $\Omega(\log\log n)$ time using $O(n)$ processors. A tradeoff between $\lambda$ and the running time is also given: for $\epsilon < 1$ and $\lambda = n^\epsilon$, the time required is $\Omega(\log\frac{1}{\epsilon})$.
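A tiny checker may help pin down the output condition; this is only the specification the parallel algorithms must meet, not any of the algorithms themselves, and the function name and encoding are ours.

```python
def is_lambda_compacted(inp, out, lam):
    """Verify the lambda-approximate compaction condition: out must
    contain the k ones of inp, all within the first (1 + lam) * k
    array locations."""
    k = sum(inp)
    return sum(out) == k and all(i < (1 + lam) * k
                                 for i, v in enumerate(out) if v)

# k = 3 ones, (1 + 0.5) * 3 = 4.5, so every 1 must sit at index <= 4
print(is_lambda_compacted([0, 1, 0, 1, 1, 0], [1, 0, 1, 0, 1, 0], 0.5))  # True
print(is_lambda_compacted([0, 1, 0, 1, 1, 0], [1, 0, 1, 0, 0, 1], 0.5))  # False
```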
{"title":"A lower bound for linear approximate compaction","authors":"S. Chaudhuri","doi":"10.1109/ISTCS.1993.253487","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253487","url":null,"abstract":"The lambda -approximate compaction problem is: given an input array of n values, each either 0 or 1, place each value in the output array so that all the 1s are in the first (1+ lambda )k array locations, where k is the number of 1's in the input. lambda is an accuracy parameter. This problem is of fundamental importance in parallel computation because of its applications to processor allocation and approximate counting. When lambda is a constant, the problem is called linear approximate compaction (LAC). On the CRCW PRAM model, there is an algorithm that solves approximate compaction in O((log log n)/sup 3/) time for lambda =/sup 1///sub loglogn/, using /sup n///sub (loglogn)3/ processors. This is close to the best possible. Specifically, the authors, prove that LAC requires Omega (log log n) time using O(n) processors. They also give a tradeoff between lambda and the processing time. For in <1, and lambda =n/sup in /, the time required is Omega (log/sup 1///sub in /).<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133539473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive proofs and approximation: reductions from two provers in one round
M. Bellare
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253462
The author presents hard-to-approximate problems in the following areas: systems of representatives, network flow, and longest paths in graphs. In each case he shows that there exists some $\delta > 0$ such that polynomial-time approximation to within a factor of $2^{\log^\delta n}$ of the optimum implies that NP has quasi-polynomial-time deterministic simulations. The results are derived by reduction from two-prover, one-round proof systems, and exemplify the ability of such reductions to yield hardness-of-approximation results for many different kinds of problems.
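To get a feel for the factor $2^{\log^\delta n}$: it eventually exceeds every $\mathrm{polylog}(n)$ yet remains $n^{o(1)}$, so ruling out such approximations is a strong statement. A small numeric sketch (sample values chosen by us for illustration):

```python
# Compare 2^(log^0.5 n) against log^2 n for n = 2^k: the factor starts
# below the polylog but overtakes it as n grows, while staying n^o(1).
for k in (100, 400, 1600):              # n = 2^k, so log2(n) = k
    factor = 2 ** (k ** 0.5)            # 2^(log^0.5 n)
    print(f"n = 2^{k}: 2^(log^0.5 n) = {factor:.1e}, log^2 n = {k**2:.1e}")
# n = 2^100:  1.0e+03 vs 1.0e+04   (polylog still ahead)
# n = 2^400:  1.0e+06 vs 1.6e+05   (factor overtakes)
# n = 2^1600: 1.1e+12 vs 2.6e+06
```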
{"title":"Interactive proofs and approximation: reductions from two provers in one round","authors":"M. Bellare","doi":"10.1109/ISTCS.1993.253462","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253462","url":null,"abstract":"The author presents hard to approximate problems in the following areas: systems of representatives, network flow, and longest paths in graphs. In each case he shows that there exists some delta >0 such that polynomial time approximation to within a factor of 2/sup log delta n/ of the optimal implies that NP has quasi polynomial time deterministic simulations. The results are derived by reduction from two prover, one round proof systems, and exemplify the ability of such reductions to yield hardness of approximations results for many different kinds of problems.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132423750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finding the neighborhood of a query in a dictionary
D. Dolev, Y. Harari, Michal Parnas
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253486
Many applications require the retrieval of all words from a fixed dictionary D that are 'close' to some input string. The paper defines a theoretical framework for studying the performance of algorithms for this problem and provides a basic algorithmic approach. It is shown that a certain class of algorithms, D-oblivious algorithms, cannot be optimal in both space and time. This is done by proving a lower bound on the tradeoff between the space and time complexities of D-oblivious algorithms. Several algorithms for the problem are presented, and their performance is compared to that of Ispell, the standard Unix spelling checker. On the Webster English dictionary the algorithms are shown to be faster than Ispell by a significant factor, while incurring only a small cost in space.
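A naive baseline may clarify the problem: the sketch below generates every string at edit distance at most 1 from the query and intersects the candidates with the dictionary. The candidate generation ignores D's contents, loosely the flavor of a D-oblivious strategy (our reading; the paper's definition is formal), and all names here are ours.

```python
import string

def neighborhood(word, dictionary):
    """Naive baseline (not the paper's data structure): enumerate all
    strings at edit distance <= 1 (deletion, substitution, insertion)
    and keep those that appear in the dictionary."""
    letters = string.ascii_lowercase
    cands = {word}
    for i in range(len(word) + 1):
        left, right = word[:i], word[i:]
        if right:
            cands.add(left + right[1:])                          # deletion
            cands.update(left + c + right[1:] for c in letters)  # substitution
        cands.update(left + c + right for c in letters)          # insertion
    return sorted(cands & dictionary)

print(neighborhood("cat", {"cast", "cat", "cut", "coat", "dog"}))
# ['cast', 'cat', 'coat', 'cut']  (all at edit distance <= 1 from "cat")
```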
{"title":"Finding the neighborhood of a query in a dictionary","authors":"D. Dolev, Y. Harari, Michal Parnas","doi":"10.1109/ISTCS.1993.253486","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253486","url":null,"abstract":"Many applications require the retrieval of all words from a fixed dictionary D, that are 'close' to some input string. The paper defines a theoretical framework to study the performance of algorithms for this problem, and provides a basic algorithmic approach. It is shown that a certain class of algorithms, D-oblivious algorithms, can not be optimal both in space and time. This is done by proving a lower bound on the tradeoff between the space and time complexities of D-oblivious algorithms. Several algorithms for this problem are presented, and their performance is compared to that of Ispell, the standard speller of Unix. On the Webster English dictionary the algorithms are shown to be faster than 'Ispell' by a significant factor, while incurring only a small cost in space.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126797228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The minimum reservation rate problem in digital audio/video systems
N. Megiddo, M. Naor, David P. Anderson
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253485
The minimum reservation rate problem arises in distributed systems for handling digital audio and video data. The problem is to find the minimum rate at which data must be reserved on a shared storage system in order to provide continuous buffered playback of a variable-rate output schedule. The problem is equivalent to the minimum output rate problem: given input rates during various time periods, find the minimum output rate under which the buffer never overflows. The authors present an O(n log n) randomized algorithm and an O(n log n log log n) deterministic one.
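As a simplified sketch (our assumption, not the paper's exact model): if one ignores buffer capacity and asks only that playback never starve, a constant reservation rate r is feasible iff r·t covers the data consumed by every prefix of the schedule, so the minimum feasible rate is the maximum prefix average.

```python
def min_reservation_rate(periods):
    """Naive O(n) illustration of the rate condition for the simplified
    unbuffered variant (the paper's algorithms solve the harder buffered
    problem much faster). periods: list of (duration, playback_rate)
    pairs; returns the maximum prefix average, the smallest constant
    rate r with r * t >= data consumed by time t for every prefix."""
    t = consumed = best = 0.0
    for duration, rate in periods:
        t += duration
        consumed += duration * rate
        best = max(best, consumed / t)
    return best

# variable-rate schedule: 2s at 1 MB/s, 1s at 5 MB/s, 2s at 1 MB/s
print(min_reservation_rate([(2, 1), (1, 5), (2, 1)]))  # 2.333... MB/s
```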
Subpixel image registration using circular fiducials
A. Efrat, C. Gotsman
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253484
The design of fiducials for precise image registration is of major practical importance in computer vision, especially in automatic inspection applications. The authors analyze the subpixel registration accuracy that can, and cannot, be achieved by some rotation-invariant fiducials, and present and analyze efficient algorithms for the registration procedure. They rely on old and new results from lattice geometry and number theory, as well as efficient computational-geometric algorithms.
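A toy example of why circular fiducials admit subpixel accuracy (ours, not the authors' estimator): the centroid of the pixel lattice points covered by a disk localizes its center to a small fraction of a pixel, because many boundary pixels average out the quantization error.

```python
import numpy as np

def centroid_estimate(cx, cy, r, size=64):
    """Estimate a disk's center from its binarized pixel image: take
    the centroid of all pixel centers lying inside the disk."""
    ys, xs = np.mgrid[0:size, 0:size]                # pixel-center lattice
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
    return xs[inside].mean(), ys[inside].mean()

est = centroid_estimate(cx=31.37, cy=28.81, r=10.0)
print(est)  # close to (31.37, 28.81): error well below one pixel
```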
{"title":"Subpixel image registration using circular fiducials","authors":"A. Efrat, C. Gotsman","doi":"10.1109/ISTCS.1993.253484","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253484","url":null,"abstract":"The design of fiducials for precise image registration is of major practical importance in computer vision, especially in automatic inspection applications. The authors analyze the subpixel registration accuracy that can, and cannot, be achieved by some rotation-invariant fiducials, and present and analyze efficient algorithms for the registration procedure. They rely on some old and new results from lattice geometry and number theory and efficient computational-geometric algorithms.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125429142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}