List of contents and author index
Pub Date: 1994-11-01. DOI: 10.1016/0096-0551(94)90009-4. Computer Languages 20(4), pp. iii-iv.
A process oriented semantics of the PRAM-language FORK
Gudula Rünger, Kurt Sieber
Pub Date: 1994-11-01. DOI: 10.1016/0096-0551(94)90007-8. Computer Languages 20(4), pp. 253-265.

The parallel language FORK [1], based on a scalable shared memory model, is a PASCAL-like language with some additional parallel constructs. A PRAM (Parallel Random Access Machine) algorithm can be expressed at a high level of abstraction as a FORK program, which is translated into efficient PRAM code that guarantees the theoretically predicted runtimes.
In this paper we concentrate on those features of FORK related to parallelism, such as the group concept, shared memory access, and synchronous versus asynchronous execution. We present a trace-based denotational interleaving semantics in which processes describe synchronous computations. Processes are created and deleted dynamically and run asynchronously. The interleaving rules reflect the underlying CRCW (concurrent-read, concurrent-write) PRAM model.
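The synchronous, shared-memory execution model the abstract refers to can be illustrated with a small sketch (ours, not FORK's own syntax or semantics): one synchronous step of a priority-CRCW PRAM, in which every processor reads a consistent snapshot of shared memory and concurrent writes to the same cell are resolved in favour of the lowest-numbered processor, one common CRCW convention.

```python
# Illustrative sketch (not FORK itself): one synchronous step of a
# priority-CRCW PRAM. All processors read the shared memory as it was
# at the start of the step; all writes are applied afterwards, with
# concurrent writes to the same cell resolved in favour of the
# lowest-numbered processor.

def crcw_step(shared, programs):
    """shared: list modelling the shared memory; programs: one function
    per processor, mapping a memory snapshot to a list of
    (address, value) write requests."""
    snapshot = list(shared)                 # concurrent read phase
    writes = {}                             # address -> (pid, value)
    for pid, prog in enumerate(programs):
        for addr, val in prog(snapshot):
            # priority resolution: keep the write of the smallest pid
            if addr not in writes or pid < writes[addr][0]:
                writes[addr] = (pid, val)
    for addr, (_, val) in writes.items():   # synchronous write phase
        shared[addr] = val
    return shared

# Example: three processors all write to cell 0; processor 0 wins.
mem = [0, 0, 0]
progs = [lambda s, p=p: [(0, 100 + p)] for p in range(3)]
crcw_step(mem, progs)   # mem is now [100, 0, 0]
```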
Parallel incremental LR parsing
N. Viswanathan, Y.N. Srikant
Pub Date: 1994-08-01. DOI: 10.1016/0096-0551(94)90002-7. Computer Languages 20(3), pp. 151-175.

A new parallel parsing algorithm for block-structured languages, also capable of incremental parsing, is presented. The parser handles LR grammars and assumes a shared memory multiprocessor model. We assign processors to parse corrections independently, with minimal reparsing. A new compatibility condition lets the assigned processors terminate parsing and avoid redoing the work of other processors. We give an efficient way of assembling the final parse tree from the individual parses. The compatibility condition is simple, can be computed at parser-construction time, and can be tested in constant time during parsing. The parser can be integrated into an editor. We estimate the speedup of our parallel and parallel incremental parsing methods, and have obtained considerable speedups in simulation studies of the algorithm.
Discrete loops and worst case performance
Johann Blieberger
Pub Date: 1994-08-01. DOI: 10.1016/0096-0551(94)90004-3. Computer Languages 20(3), pp. 193-212.

In this paper so-called discrete loops are introduced, which narrow the gap between general loops (e.g. while- or repeat-loops) and for-loops. Although discrete loops can be used for applications that would otherwise require general loops, discrete loops are guaranteed to terminate. Furthermore, it is possible to determine the number of iterations of a discrete loop; this is trivial for for-loops and extremely difficult for general loops. Discrete loops thus form an ideal framework for determining the worst-case timing behavior of a program, and they are especially useful for implementing real-time systems and proving such systems correct.
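The idea can be illustrated with a small sketch (our own, not Blieberger's notation): a loop whose control variable is halved on every iteration. Because the update function is fixed and strictly decreasing, termination is guaranteed and the number of iterations can be computed in advance, just as for a for-loop.

```python
import math

def discrete_loop_iterations(n):
    """Run the 'discrete loop' n, n//2, n//4, ..., 1 and count its
    iterations by actually executing it."""
    count = 0
    while n >= 1:
        count += 1
        n //= 2
    return count

def predicted_iterations(n):
    """The same count, computed without running the loop: the variable
    is halved each time, so the loop body runs floor(log2 n) + 1 times."""
    return math.floor(math.log2(n)) + 1
```

For example, starting from n = 10 the loop visits 10, 5, 2, 1, so both functions return 4; this predictability is what makes such loops usable for worst-case timing analysis.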
Experiments with destructive updates in a lazy functional language
Pieter H. Hartel, Willem G. Vree
Pub Date: 1994-08-01. DOI: 10.1016/0096-0551(94)90003-5. Computer Languages 20(3), pp. 177-192.

The aggregate update problem has received considerable attention since pure functional programming languages were recognised as an interesting research topic. The extensive literature in this area proposes a wide variety of solutions. We have applied some of the proposed solutions to our own applications to see how they work in practice. We have been able to use destructive updates, but are not convinced that this could have been achieved without application-specific knowledge. In particular, no form of update analysis has been reported that is applicable to non-flat domains in polymorphic languages with higher-order functions.
We believe that a refinement of the monolithic approach to constructing arrays may be a good alternative to the incremental approach with destructive updates.
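The contrast between the two styles can be sketched as follows (Python standing in for a pure functional language, with the copying implied by a pure single-element update made explicit; the function names are ours, for illustration):

```python
def incremental(n):
    """Incremental style: start from an initial array and apply n
    single-element updates. In a pure language, without destructive-
    update analysis, each update conceptually copies the array,
    giving O(n^2) work overall."""
    arr = [0] * n
    for i in range(n):
        arr = arr[:i] + [i * i] + arr[i + 1:]   # pure update = copy
    return arr

def monolithic(n):
    """Monolithic style: construct the whole array in one step from an
    index-to-value function; no intermediate arrays, O(n) work."""
    return [i * i for i in range(n)]
```

Both produce the same array; the monolithic version simply never creates the intermediate versions that make destructive-update analysis necessary in the incremental style.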
An automatic parallelization framework for multicomputers
U. Nagaraj Shenoy, Y.N. Srikant, V.P. Bhatkar
Pub Date: 1994-08-01. DOI: 10.1016/0096-0551(94)90001-9. Computer Languages 20(3), pp. 135-150.

Several researchers have looked into various issues related to the automatic parallelization of sequential programs for multicomputers, but a coherent framework encompassing all these issues is needed. In this paper we present such a framework, which takes best advantage of the multicomputer architecture. We use the tiling transformation for iteration-space partitioning and propose a scheme of automatic data partitioning and dynamic data distribution. A simple implementation of our scheme on a transputer-based multicomputer [1] has given encouraging results.
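The tiling transformation mentioned above can be sketched as an illustrative reordering of a 2-D loop nest (not the authors' implementation): the iteration space is cut into blocks, and on a multicomputer each block, together with the data partition it touches, would be mapped to one node.

```python
def tiled_matrix_sum(a, tile):
    """Traverse an n-by-n iteration space tile by tile. The two outer
    loops enumerate tiles; the two inner loops enumerate the iterations
    inside one tile. Every (i, j) is visited exactly once, just in a
    different order than the untiled nest."""
    n = len(a)
    total = 0
    for ti in range(0, n, tile):                     # over tile rows
        for tj in range(0, n, tile):                 # over tile columns
            for i in range(ti, min(ti + tile, n)):   # within the tile
                for j in range(tj, min(tj + tile, n)):
                    total += a[i][j]
    return total
```

Because each tile touches only a bounded block of the array, assigning whole tiles to processors gives a natural data partition with local accesses and communication only at tile boundaries.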
An empirical study of the run-time behavior of quicksort, Shellsort and mergesort for medium to large size data
S. Mansoor Sarwar, Mansour H.A. Jaragh, Mike Wind
Pub Date: 1994-05-01. DOI: 10.1016/0096-0551(94)90019-1. Computer Languages 20(2), pp. 127-134.

The paper describes the results of a large empirical study measuring the practical behavior of the basic versions of three popular internal sorting algorithms, Shellsort, quicksort, and mergesort, for medium to large data sizes, and compares them with previous results. The results give running times of θ(N^1.25) for Shellsort, quicksort, and mergesort for 1000 < N < 2 × 10^6. The study also shows that Shellsort behaves better than mergesort for 1000 < N < 150,000, whereas mergesort outperforms Shellsort for N > 150,000. Quicksort outperforms both Shellsort and mergesort for all N > 1000. Our fits show better Shellsort performance than previous studies and are mostly accurate to within 2% for 1000 < N < 2 × 10^6; the residual error appears to stem mainly from error in the measured data.
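For reference, the "basic version" of Shellsort is commonly taken to be gapped insertion sort with Shell's original gap sequence n/2, n/4, ..., 1 (a standard formulation; the paper's exact code is not shown here), and it is this kind of implementation whose average running time the study fits near θ(N^1.25).

```python
def shellsort(a):
    """Basic Shellsort: insertion sort over gapped subsequences, with
    Shell's original gap sequence n//2, n//4, ..., 1. Sorts in place
    and returns the list for convenience."""
    n = len(a)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):           # gapped insertion sort
            tmp = a[i]
            j = i
            while j >= gap and a[j - gap] > tmp:
                a[j] = a[j - gap]         # shift larger elements right
                j -= gap
            a[j] = tmp
        gap //= 2
    return a
```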
A practical approach to type-sensitive parsing
Ken Sailor, Carl McCrosky
Pub Date: 1994-05-01. DOI: 10.1016/0096-0551(94)90017-5. Computer Languages 20(2), pp. 101-116.

Type-sensitive parsing of expressions is context-sensitive parsing based on type. Previous research reported a general class of algorithms for type-sensitive parsing; unfortunately, these algorithms are impractical for languages that infer types. This paper describes a related algorithm that is much more efficient: its incremental cost is linear (with a small constant) in the length of the expression, even when types must be deduced. Our method can be applied to any statically typed language, and it solves a variety of problems associated with conventional parsing techniques, including problems with operator precedence and the interaction between infix operators and higher-order functions.
Exception handling: Expecting the unexpected
Steven J. Drew, K. John Gough
Pub Date: 1994-05-01. DOI: 10.1016/0096-0551(94)90015-9. Computer Languages 20(2), pp. 69-87.

Since the mid-1970s, with the development of each new programming paradigm, there has been increasing interest in exceptions and the benefits of exception handling. With the move towards programming for ever more complex architectures, understanding basic facilities such as exception handling, as an aid to improving program reliability, robustness and comprehensibility, has become much more important. This interest has produced many papers, both theoretical and practical, each viewing exceptions and exception handling from a different standpoint.
To provide a means of classifying the exception handling models that may be encountered, this paper presents a taxonomy. As the taxonomy is developed, some of the concepts of exception handling are introduced and discussed. The taxonomy is then applied to the exception handling models of several contemporary programming languages, and some observations and conclusions are offered.
Grammar transformations for optimizing backtrack parsers
Janos J. Sarbo
Pub Date: 1994-05-01. DOI: 10.1016/0096-0551(94)90016-7. Computer Languages 20(2), pp. 89-100.

We present two grammar transformations that can decrease the search space of generated top-down backtrack parsers. The transformations are simple and can be of practical use.
The first transformation, a combination of substitution and left-factorization, is based on the LR-table construction. The second uses the computation of the FIRST and FOLLOW sets, and a grammar property called relative unambiguity.
The time complexity of the transformations is polynomial in the size of the grammar in the worst case, and linear in practical cases.
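A minimal left-factorization step, one ingredient of the first transformation, can be sketched as follows (the grammar representation and helper names here are ours, for illustration only): alternatives of a nonterminal that share the same first symbol are merged, and their differing tails are moved into a fresh nonterminal, so that A → a b | a c becomes A → a A1 with A1 → b | c. This removes a choice point that a top-down backtrack parser would otherwise have to explore.

```python
from collections import defaultdict

def left_factor(alternatives, fresh):
    """One left-factorization step. alternatives: list of tuples of
    grammar symbols (the right-hand sides of one nonterminal).
    fresh: a callable producing fresh nonterminal names. Returns
    (new_alternatives, extra_productions), where extra_productions
    maps each fresh nonterminal to its factored-out tails."""
    groups = defaultdict(list)
    for alt in alternatives:
        groups[alt[0] if alt else ""].append(alt)   # group by first symbol
    new_alts, extra = [], {}
    for head, alts in groups.items():
        if len(alts) == 1:
            new_alts.append(alts[0])                # nothing to factor
        else:
            name = fresh()                          # fresh nonterminal
            new_alts.append((head, name))
            extra[name] = [alt[1:] for alt in alts] # the differing tails
    return new_alts, extra

# A -> a b | a c | d
counter = iter(range(1, 100))
fresh = lambda: f"A{next(counter)}"
new_alts, extra = left_factor([("a", "b"), ("a", "c"), ("d",)], fresh)
# new_alts == [("a", "A1"), ("d",)]; extra == {"A1": [("b",), ("c",)]}
```

A full left-factorization would repeat this step until no two alternatives of any nonterminal share a first symbol; the paper's transformation additionally combines it with substitution guided by the LR-table construction.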