Pub Date: 1990-10-31 · DOI: 10.1109/CMPSAC.1990.139422
Modeling execution time of multi-stage N-version fault-tolerant software
M. Vouk, A. Paradkar, D. McAllister
Proceedings, Fourteenth Annual International Computer Software and Applications Conference
The timing performance of N-version multi-stage software is analyzed for a strategy called expedient voting. In expedient voting, the voting takes place as soon as an adequate number of components have finished the stage. The concept of a 'runahead' is introduced: the faster versions are allowed to run ahead of the slower versions by one or more stages, with synchronized restart in the event of a failure. If the versions are highly reliable, inter-version failure dependence is small, and the difference between the fastest and slowest successful components in each stage is large, then the execution speed-up through expedient voting may be substantial. Runaheads exceeding three stages offer diminishing returns. Speed-up deteriorates with reduced version reliability and independence.
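The expedient-voting idea in the abstract above can be illustrated with a small Monte Carlo sketch: a conventional voter waits for the slowest of N versions at each stage, while an expedient voter proceeds once a quorum has finished. The uniform timing model, the absence of failures and runahead limits, and all names are illustrative assumptions, not the paper's model.

```python
import random

def stage_times(n_versions, quorum, n_stages, trials=2000):
    """Estimate mean completion time of a multi-stage N-version system
    under (a) wait-for-all voting and (b) expedient voting that moves on
    once `quorum` versions finish a stage. Per-stage version times are
    drawn uniformly; this is a simplified sketch with no failures."""
    wait_all = expedient = 0.0
    for _ in range(trials):
        for _ in range(n_stages):
            times = sorted(random.uniform(1.0, 3.0) for _ in range(n_versions))
            wait_all += times[-1]           # conventional: slowest version gates the stage
            expedient += times[quorum - 1]  # expedient: k-th fastest suffices for a vote
    return wait_all / trials, expedient / trials

conv, exped = stage_times(n_versions=5, quorum=3, n_stages=4)
print(f"speed-up from expedient voting: {conv / exped:.2f}x")
```

With five versions and a three-version quorum, the expedient voter's per-stage wait is the third-fastest finishing time rather than the fifth, which is where the speed-up in the abstract comes from.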
Pub Date: 1990-10-31 · DOI: 10.1109/CMPSAC.1990.139376
Productivity improvement with evolutionary development
A. Andrews, B. Hirsh
An evolutionary approach to software development can realize lifetime productivity improvements. The authors explain the natural evolutionary process of software. Negative evolutionary influences can erode the useful life of software, but these can be regulated. Software evolves through three stages: (1) elaboration, (2) adaptation, and (3) mutation (progressive expansion and growth). Differing life-cycle models are contrasted for how they consider these three evolutionary stages, how well they meet customer needs, and how they take evolution regulators into account. By using an evolutionary approach to software development, the benefits of the development effort are realized over a longer period of time. Productivity increases because the life span of the product increases, gains in development productivity are not eaten up by support costs, and development productivity gains are not lost through a lack of understanding of evolution.
Pub Date: 1990-10-31 · DOI: 10.1109/CMPSAC.1990.139448
A set and mapping-based detection and solution method for structure clash between program input and output data
Masaaki Hashimoto, K. Okamoto
Structure clash is one of the main concerns in JSP (Jackson Structured Programming). It is a program implementation issue rather than a program specification issue, and it must be accurately detected and resolved in order to produce an efficient program. There is therefore an important class of nonprocedural languages in which the programmer need not think about clashes because the compiler detects and resolves them. Within this class, an array-based detection and solution method was previously studied in the nonprocedural language MODEL. However, a set and mapping-based detection and solution method has not been used, although sets and mappings appear in several very high level nonprocedural languages. An experimental compiler based on such a method has been implemented for the entity-relationship model-based nonprocedural language PSDL. The experiment demonstrated that usable programs can be generated.
Pub Date: 1990-10-31 · DOI: 10.1109/CMPSAC.1990.139449
A Petri net-based distributed debugger
An-Chi Liu, A. Engberts
A distributed debugger based on the Petri net model is designed and implemented. The major functions supported are distributed breakpoints, step-by-step execution, and replay. The debugger consists of a preprocessor, which inserts control functions into the source code, and a parser, which generates a Petri net model of the distributed program for graphical monitoring and program simulation. The debugger also interfaces with existing sequential program debuggers to provide access to variables. The superposition of the distributed debugger on top of a sequential program debugger makes it possible to decouple sequential programming from distributed program behavior.
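The Petri net model underlying the debugger described above rests on simple place/transition semantics: a transition is enabled when its input places hold enough tokens, and firing it consumes and produces tokens. A minimal sketch of those semantics (all names illustrative; the paper's actual representation is not shown here):

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(place, 0) >= n for place, n in pre.items())

def fire(marking, pre, post):
    """Fire one transition: consume `pre` tokens, produce `post` tokens.
    Returns a new marking; raises if the transition is not enabled."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for place, n in pre.items():
        m[place] -= n
    for place, n in post.items():
        m[place] = m.get(place, 0) + n
    return m

# e.g. a 'send' transition moving a token from a process place to a channel place:
m = fire({"p_ready": 1, "chan": 0}, pre={"p_ready": 1}, post={"chan": 1})
```

Tracking markings like this is what lets such a debugger monitor and simulate distributed control flow graphically, independently of each process's sequential code.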
Pub Date: 1990-10-31 · DOI: 10.1109/CMPSAC.1990.139401
An Ada interface for massively parallel systems
E. Park, P. Anderson, H. Dardy
The design of a set of Ada packages defining parallel data types is described. The parallel data types and the operations defined on them are intended to provide natural Ada constructs for exploiting the data parallel Connection Machine (CM). The preliminary design of this CM interface, to be built in Ada, provides data parallel operations equivalent to those found in the CM *LISP programming language and preserves many of the inherent advantages of the Ada language. Package specifications for the packages constituting the interface have been written and compiled with the VAX/VMS Ada compiler. Implementation concepts are described and samples of Ada application code are shown. While the interface is intended for use with the Connection Machine, the basic concepts may apply to other SIMD (single instruction/multiple data) machines such as the MasPar MP-1 and DAP.
Pub Date: 1990-10-31 · DOI: 10.1109/CMPSAC.1990.139356
A graphical interface for an object-oriented query language
H. Lam, H. M. Chen, Frederick S. Ty, Ji Qiu, S. Su
A graphical user interface, GOQL, for an object-oriented query language is presented. GOQL is part of a prototype knowledge-base management system based on an object-oriented semantic association model, OSAM. GOQL consists of a graphical browser and a graphical querying module. The browser allows a user to browse through a complex knowledge-base schema graphically and prune it to the desired level of abstraction and detail before querying. The querying module offers two modes, OQL and graphical OQL. The OQL mode lets knowledgeable users type OQL commands directly; in the graphical OQL mode, the user is guided through the formation of the query. It is noted that the object-oriented nature and increased semantics of the underlying model and query language pose new challenges in user-interface design due to their added complexity. On the other hand, these features also provide more information to the system, which can make the user interface more intelligent.
Pub Date: 1990-10-31 · DOI: 10.1109/CMPSAC.1990.139366
Distributed communication software specification based on the action superposition mechanism
K. Hayashi, T. Nishizono, T. Takenaka
A mechanism for specifying distributed communication software is proposed. Using this mechanism, each process is specified by three types of constraints: an event to a process is decomposed into event elements and distributed to relevant processes in accordance with decomposing constraints; each process independently performs actions according to its functional constraints; and the output is determined by superposing constraints as a superposition of each process's actions. Because the functional constraints are independent of other processes, they can be made up of reusable components. To facilitate specifying the decomposing and superposing constraints, a language based on constraint-oriented logic programming is developed. The authors also propose a constraint evaluation system which executes reasoning rules to obtain outputs from input events and knowledge of constraints. The bound/unbound concept of event variables is employed to implement the execution control of each constraint evaluator. Using plain old telephony service and call waiting service as examples, the advantages of the mechanism and the evaluation system are shown.
Pub Date: 1990-10-31 · DOI: 10.1109/CMPSAC.1990.139322
Panel: the model and metrics for software quality evaluation report of the Japanese National Working Group
M. Azuma
The Japanese National Working Group has carried out research on developing a framework which clarifies the relations among internal and external characteristics, factors which affect software quality, and the effects of software quality. An attempt has also been made to develop metrics to measure them. The concept model and metrics that have been developed are presented along with the future plan. Specifically, attention is given to the measurement of reliability, portability, functionality, usability, and maintainability.
Pub Date: 1990-10-31 · DOI: 10.1109/CMPSAC.1990.139306
Predictability measures for software reliability models
Y. Malaiya, N. Karunanithi, P. Verma
A two-component predictability measure is presented that characterizes the long-term predictability of a software reliability growth model. The first component, average predictability, measures how well a model predicts throughout the testing phase. The second component, average bias, is a measure of the general tendency to overestimate or underestimate the number of faults. Data sets for both large and small projects from diverse sources have been analyzed. The results support the observation that the logarithmic model appears to have good predictability in most cases. However, at very low fault densities, the exponential model may be slightly better. The delayed S-shaped model, which in some cases has been shown to fit well, generally performed poorly.
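The two components described in the abstract above can be sketched as summary statistics over a model's fault-count predictions during the test phase: an unsigned mean relative error (predictability) and a signed one (bias). The exact formulas below are illustrative assumptions in the spirit of the abstract, not the paper's definitions.

```python
def predictability_measures(predicted, actual):
    """Sketch of a two-component predictability measure:
    - average predictability: mean magnitude of relative prediction error
      over the testing phase (smaller = better predictions throughout);
    - average bias: signed mean relative error (positive = a tendency to
      overestimate the number of faults, negative = underestimate).
    Exact definitions are assumptions, not taken from the paper."""
    rel_errors = [(p - a) / a for p, a in zip(predicted, actual)]
    avg_predictability = sum(abs(r) for r in rel_errors) / len(rel_errors)
    avg_bias = sum(rel_errors) / len(rel_errors)
    return avg_predictability, avg_bias

# e.g. a model that consistently overestimates cumulative fault counts:
pred, bias = predictability_measures([12, 22, 33], [10, 20, 30])
```

A model can score well on predictability while showing a clear bias, which is why the abstract treats the two as separate components.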
Pub Date: 1990-10-31 · DOI: 10.1109/CMPSAC.1990.139367
Concurrent transaction execution in multidatabase systems
K. Barker, M. Tamer Özsu
Multidatabase serializability is defined as an extension of the well-known serializability theory in order to provide a theoretical framework for research in concurrency control of transactions over multidatabase systems. Also introduced are multidatabase serializability graphs, which capture the ordering characteristics of global as well as local transactions. Two schedulers that produce multidatabase serializable histories are described. The first scheduler is a conservative one which only permits a global subtransaction to proceed if all of the global subtransactions of the given global transaction can proceed. The 'all or nothing' approach of this algorithm is simple, elegant, and correct. The second scheduler is more aggressive in that it attempts to schedule as many global subtransactions as possible as soon as possible. A distinguishing feature of this work is the environment that it considers: the most pessimistic scenario is assumed, where individual database management systems are totally autonomous with no knowledge of each other. This restricts their communication to the multidatabase layer and requires that the global scheduler 'hand down' the order of execution of global transactions.
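The conservative scheduler's 'all or nothing' rule described in the abstract above can be sketched as follows: a global transaction's subtransactions are submitted only when every site it touches is free of other active global subtransactions, otherwise none of them proceed. This is an illustrative reading of the abstract, not the paper's algorithm; all names are made up.

```python
class ConservativeScheduler:
    """Sketch of an 'all or nothing' multidatabase scheduler: a global
    transaction either gets all the sites its subtransactions need, or
    it waits. Blocking whole transactions this way trades concurrency
    for simplicity and easy correctness arguments."""

    def __init__(self):
        self.busy_sites = set()  # sites running a global subtransaction

    def try_schedule(self, sites):
        """Admit the transaction only if every site it needs is free."""
        needed = set(sites)
        if self.busy_sites & needed:  # any conflict -> nothing proceeds
            return False
        self.busy_sites |= needed     # all subtransactions proceed together
        return True

    def finish(self, sites):
        """Release a completed transaction's sites."""
        self.busy_sites -= set(sites)
```

For example, a transaction on sites A and B blocks a later one on B and C until it finishes, even though C is idle, which is exactly the concurrency the abstract's more aggressive second scheduler tries to recover.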