On predicting software reliability
Pub Date: 1989-09-20, DOI: 10.1109/CMPSAC.1989.65139
J. Gaffney
Definitions of software reliability are presented, and an overview of reliability is given. The system-software-availability relationship is also explored.
{"title":"On predicting software reliability","authors":"J. Gaffney","doi":"10.1109/CMPSAC.1989.65139","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65139","url":null,"abstract":"Definitions of software reliability are presented, and an overview of reliability is given. The system-software-availability relationship is also explored.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125499414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The influence of manufacturing paradigms on system development methodologies
Pub Date: 1989-09-20, DOI: 10.1109/CMPSAC.1989.65179
D. Coleman
Summary form only given. Four fundamental paradigms that can be used to characterize how manufacturers approach development of a product are discussed. They are: build to order, modify to suit, assemble to order, and off the shelf. Each of them is predicated on certain assumptions about the market at which the product is targeted.
{"title":"The influence of manufacturing paradigms on system development methodologies","authors":"D. Coleman","doi":"10.1109/CMPSAC.1989.65179","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65179","url":null,"abstract":"Summary form only given. Four fundamental paradigms that can be used to characterize how manufacturers approach development of a product are discussed. They are: build to order, modify to suit, assemble to order, and off the shelf. Each of them is predicated on certain assumptions about the market at which the product is targeted.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116096615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object-oriented programming in a conventional programming environment
Pub Date: 1989-09-20, DOI: 10.1109/CMPSAC.1989.65105
D. Breen, P. Getto, A. Apodaca
A programming methodology that implements many object-oriented features within a conventional programming environment is described. The methodology was created during the development of a computer animation system, The Clockworks. It supports such object-oriented features as objects with variables and methods, class hierarchies, variable and method inheritance, object instantiation, and message passing. The methodology does not employ any special keywords or language extensions, thus removing the need for a special preprocessor or compiler. It has been implemented in a C/Unix environment, which allows the environment and any system developed within it to be ported to a wide variety of computers that support Unix. The methodology provides many object-oriented features and their associated benefits, along with all the benefits of a C/Unix environment, including portability, a rich variety of development tools, and efficiency.
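As a rough illustration of how such object-oriented features are commonly realized in plain C (a sketch under assumed conventions, not the actual code of The Clockworks), an object can be a struct whose first member points to a table of method pointers; sending a message is a call through that table, and a subclass inherits by embedding the base struct as its first member:

    /* Minimal sketch of an object model in plain C.  All names here are
       invented for illustration; they are not The Clockworks' conventions. */
    #include <stdio.h>
    #include <stdlib.h>

    struct object;                                  /* forward declaration */

    struct class {                                  /* method table        */
        const char *name;
        void (*describe)(struct object *self);
    };

    struct object {                                 /* base class          */
        const struct class *isa;
        double x, y;
    };

    struct sphere {                                 /* subclass            */
        struct object base;                         /* base comes first    */
        double radius;
    };

    static void object_describe(struct object *self)
    {
        printf("%s at (%g, %g)\n", self->isa->name, self->x, self->y);
    }

    static void sphere_describe(struct object *self)
    {
        struct sphere *s = (struct sphere *)self;   /* safe: base is first */
        object_describe(self);                      /* inherited behavior  */
        printf("  radius %g\n", s->radius);
    }

    static const struct class ObjectClass = { "object", object_describe };
    static const struct class SphereClass = { "sphere", sphere_describe };

    int main(void)
    {
        struct object o = { &ObjectClass, 0.0, 0.0 };   /* a base instance  */

        struct sphere *s = malloc(sizeof *s);           /* instantiation    */
        s->base.isa = &SphereClass;
        s->base.x = 1.0; s->base.y = 2.0;
        s->radius = 3.0;

        struct object *shapes[] = { &o, &s->base };
        for (int i = 0; i < 2; i++)
            shapes[i]->isa->describe(shapes[i]);        /* message passing  */

        free(s);
        return 0;
    }

Because the base struct sits at offset zero, a pointer to a subclass instance can be passed wherever a base object is expected, which is what makes method-table dispatch work without any language extension or preprocessor.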
{"title":"Object-oriented programming in a conventional programming environment","authors":"D. Breen, P. Getto, A. Apodaca","doi":"10.1109/CMPSAC.1989.65105","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65105","url":null,"abstract":"A programming methodology that implements many object-oriented features within a conventional programming environment is described. The methodology was created during the development of a computer animation system, The Clockworks. The methodology supports such object-oriented features as objects with variables and methods, class hierarchies, variable and method inheritance, object instantiation, and message passing. The methodology does not employ any special keywords or language extensions, thus removing the need for a preprocessor or compiler. The methodology has been implemented in a C/Unix environment. This allows the environment and any system developed within it to be ported to a wide variety of computers which support Unix. The methodology provides many object-oriented features and associated benefits. It also provides all the benefits of a C/Unix environment, including portability, a rich variety of development tools, and efficiency.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122018547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating software development environment quality
Pub Date: 1989-09-20, DOI: 10.1109/CMPSAC.1989.65134
T. Miyoshi, Yasuko Togashi, M. Azuma
An evaluation technology for software development environments, based on a software quality evaluation process model, was developed. This evaluation technology was applied to a software development environment project, FASET (formal approach to software environment technology), to evaluate prototype environments. The quality evaluation process model and its concept are presented, the technology for evaluating a software development environment is described, and the experimental process and results of the FASET project are shown.
{"title":"Evaluating software development environment quality","authors":"T. Miyoshi, Yasuko Togashi, M. Azuma","doi":"10.1109/CMPSAC.1989.65134","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65134","url":null,"abstract":"An evaluation technology for software development environments based on a software quality evaluation process model was developed. This evaluation technology was applied to a software development environments project, FASET (formal approach to software environment technology), to evaluate prototype environments. The quality evaluation process model and its concept are presented, the technology for evaluating a software development environment is described, and the experimental process and results of the FASET project are shown.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114263502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A graphics tool to aid in the generation of parallel FORTRAN programs
Pub Date: 1989-09-20, DOI: 10.1109/CMPSAC.1989.65062
O. Brewer, J. Dongarra, D. Sorensen
A graphics tool called BUILD that can be used to help automate the process of writing parallel FORTRAN programs for the SCHEDULE package is presented. The user can interactively build an execution graph that describes his or her algorithm and then have the tool generate the necessary calls to the SCHEDULE package. The tool and its use are described, and some examples that have been built using the tool are presented.
{"title":"A graphics tool to aid in the generation of parallel FORTRAN programs","authors":"O. Brewer, J. Dongarra, D. Sorensen","doi":"10.1109/CMPSAC.1989.65062","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65062","url":null,"abstract":"A graphics tool called BUILD that can be used to help automate the process of writing parallel FORTRAN programs for the SCHEDULE package is presented. The user can interactively build an execution graph that describes his or her algorithm and then have the tool generate the necessary calls to the SCHEDULE package. The tool and its use are described, and some examples that have been built using the tool are presented.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130264593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementing a self-development neural network using doubly linked lists
Pub Date: 1989-09-20, DOI: 10.1109/CMPSAC.1989.65164
Tsu-Chang Lee, A. Peterson
A novel algorithm for dynamically adapting the size of neural networks is proposed. According to measures defined in the paper, a neuron in the network generates a new neuron when the variation of its weight vector is high (i.e. when it has not yet learned) and is annihilated if it remains inactive for a long time. The algorithm is tested on a simple but popular neural network model, the self-organizing feature map (SOFM), and implemented in software using a doubly linked list. Using this algorithm, one can initially place a set of seed neurons in the network and then let the network grow according to the training patterns. The simulation results show that the network eventually grows to a configuration suited to the class of problems characterized by the training patterns, i.e. the neural network synthesizes itself to fit the problem space.
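To make the list-based representation concrete, the following is a minimal C sketch of neurons kept in a doubly linked list that grows when a winning neuron's weight moves too much and shrinks when a neuron stays idle. The one-dimensional weights, the GROW and MAX_IDLE thresholds, and the idle counter are placeholders invented for illustration, not the measures the paper defines on its self-organizing feature map:

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct neuron {
        double weight;            /* one-dimensional weight "vector"     */
        double variation;         /* how much the weight moved last time */
        int idle;                 /* steps since this neuron last won    */
        struct neuron *prev, *next;
    };

    /* splice a new neuron into the list right after n (n may be NULL) */
    static struct neuron *insert_after(struct neuron *n, double w)
    {
        struct neuron *m = malloc(sizeof *m);
        m->weight = w; m->variation = 0.0; m->idle = 0;
        m->prev = n; m->next = n ? n->next : NULL;
        if (m->next) m->next->prev = m;
        if (n) n->next = m;
        return m;
    }

    /* unlink and free a neuron, returning the (possibly new) list head */
    static struct neuron *annihilate(struct neuron *head, struct neuron *n)
    {
        if (n->prev) n->prev->next = n->next; else head = n->next;
        if (n->next) n->next->prev = n->prev;
        free(n);
        return head;
    }

    int main(void)
    {
        const double LEARN = 0.5, GROW = 0.2;           /* assumed values  */
        const int MAX_IDLE = 20;
        struct neuron *head = insert_after(NULL, 0.5);  /* one seed neuron */

        for (int t = 0; t < 200; t++) {
            double x = (double)rand() / RAND_MAX;       /* training input  */

            /* winner: the neuron whose weight is closest to the input */
            struct neuron *win = head;
            for (struct neuron *n = head; n; n = n->next)
                if (fabs(n->weight - x) < fabs(win->weight - x)) win = n;

            /* adapt the winner and record how far it had to move */
            double d = LEARN * (x - win->weight);
            win->weight += d;
            win->variation = fabs(d);
            win->idle = 0;

            /* high variation: the winner has not yet learned this region,
               so it generates a new neuron next to itself */
            if (win->variation > GROW)
                insert_after(win, x);

            /* neurons that stay inactive for too long are annihilated */
            for (struct neuron *n = head; n; ) {
                struct neuron *next = n->next;
                if (n != win && ++n->idle > MAX_IDLE && (n->prev || n->next))
                    head = annihilate(head, n);
                n = next;
            }
        }

        int count = 0;
        for (struct neuron *n = head; n; n = n->next) count++;
        printf("network grew from 1 seed neuron to %d neurons\n", count);
        return 0;
    }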
{"title":"Implementing a self-development neural network using doubly linked lists","authors":"Tsu-Chang Lee, A. Peterson","doi":"10.1109/CMPSAC.1989.65164","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65164","url":null,"abstract":"A novel algorithm for dynamically adapting the size of neural networks is proposed. According to the measures to be defined, a neuron in the network will generate a new neuron when the variation of its weight vector is high (i.e. when it is not learned) and will be annihilated if it is not active for a long time. This algorithm is tested on a simple but popular neural network model, Self Organization Feature Map (SOFM), and implemented in software using a double linked list. Using this algorithm, one can initially put a set of seed neurons in the network and then let the network grow according to the training patterns. It is observed from the simulation results that the network will eventually grow to a configuration suitable to the class of problems characterized by the training patterns, i.e. the neural network synthesizes itself to fit the problem space.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133492311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel languages, vectorization, and compilers
Pub Date: 1989-09-20, DOI: 10.1109/CMPSAC.1989.65068
Michael Weiss
Ideas developed for vectorization can be applied to the problems encountered in compiling parallel languages. The issues that arise with data allocation and strip mining for SIMD architectures are discussed. Two simple examples illustrate the interplay between strip mining and interprocessor communication.
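As a generic, textbook-style example of strip mining (not drawn from the paper), a single loop over N elements can be rewritten as a loop over strips of an assumed vector length VL, with the short inner loop feeding the SIMD hardware and the outer loop over strips remaining available for scheduling across processors:

    /* Strip mining a simple vector sum.  VL is an assumed register length. */
    #include <stdio.h>

    #define N  1000
    #define VL 64

    int main(void)
    {
        static double a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { b[i] = i; c[i] = 2.0 * i; }

        /* original loop:    for (i = 0; i < N; i++) a[i] = b[i] + c[i];     */
        /* strip-mined form: the outer loop steps strip by strip, the inner  */
        /* loop covers one strip of at most VL iterations                    */
        for (int is = 0; is < N; is += VL) {
            int upper = (is + VL < N) ? is + VL : N;  /* last strip is short */
            for (int i = is; i < upper; i++)
                a[i] = b[i] + c[i];
        }

        printf("a[N-1] = %g\n", a[N - 1]);
        return 0;
    }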
{"title":"Parallel languages, vectorization, and compilers","authors":"Michael Weiss","doi":"10.1109/CMPSAC.1989.65068","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65068","url":null,"abstract":"Ideas developed for vectorization can be applied to the problems encountered in compiling parallel languages. The issues that arise with data allocation and strip mining for SIMD architectures are discussed. Two simple examples illustrate the interplay between strip mining and interprocessor communication.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129529163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallelizing nested loops on multicomputers - the grouping approach
Pub Date: 1989-09-20, DOI: 10.1109/CMPSAC.1989.65074
C. King, Ing-Ren Kau
The design of a tool for partitioning and parallelizing nested loops for execution on distributed-memory multicomputers is presented. The core of the tool is a technique called grouping, which identifies appropriate loop partition patterns based on data dependencies across the iterations. The grouping technique, combined with analytic results from performance modeling tools, allows certain nested loops to be partitioned systematically and automatically, without users specifying the data partitions. Grouping is based on the concept of pipelined data parallel computation, which promises to achieve balanced computation and communication on multicomputers. The basic structure of the parallelizing tool is presented, the grouping and performance analysis techniques for pipelined data parallel computations are described, and a prototype of the tool is introduced to illustrate the feasibility of the approach.
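As a loose illustration of partitioning a nested loop into groups (the dependence pattern, the equal-width column groups, and the communication points marked in comments below are assumptions, not the paper's grouping technique), consider a doubly nested loop whose iterations depend on the previous row and the previous column; assigning contiguous column blocks to processors lets the blocks execute as a pipeline:

    /* Run serially, this program only checks that processing the groups
       left to right respects the dependences of the loop nest. */
    #include <stdio.h>

    #define N 8           /* iteration space is N x N      */
    #define P 4           /* number of processors (groups) */
    #define W (N / P)     /* columns owned by each group   */

    static double a[N + 1][N + 1];   /* boundary row/column stay zero */

    int main(void)
    {
        /* Group g owns columns g*W+1 .. (g+1)*W.  Each iteration reads
           a[i-1][j] (same group, previous row) and a[i][j-1] (previous
           column, which crosses a group boundary at j == g*W + 1).     */
        for (int g = 0; g < P; g++) {
            for (int i = 1; i <= N; i++) {
                /* on a real multicomputer, group g would receive the
                   boundary value a[i][g*W] of this row from group g-1
                   here, then ...                                       */
                for (int j = g * W + 1; j <= (g + 1) * W; j++)
                    a[i][j] = a[i - 1][j] + a[i][j - 1] + 1.0;
                /* ... send its own last column of row i to group g+1,
                   so the groups run as a pipeline over the rows rather
                   than strictly one after another.                     */
            }
        }

        printf("a[%d][%d] = %g\n", N, N, a[N][N]);
        return 0;
    }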
{"title":"Parallelizing nested loops on multicomputers-the grouping approach","authors":"C. King, Ing-Ren Kau","doi":"10.1109/CMPSAC.1989.65074","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65074","url":null,"abstract":"The design of a tool for partitioning and parallelizing nested loops for execution on distributed-memory multicomputers is presented. The core of the tool is a technique called grouping, which identifies appropriate loop partition patterns based on data dependencies across the iterations. The grouping technique combined with analytic results from performance modeling tools will allow certain nested loops to be partitioned systematically and automatically, without users specifying the data partitions. Grouping is based on the concept of pipelined data parallel computation , which promises to achieve a balanced computation and communication on multicomputers. The basic structure of the parallelizing tool is presented. The grouping and performance analysis techniques for pipelined data parallel computations are described. A prototype of the tool is introduced to illustrate the feasibility of the approach.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125069359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Entailment as a logical basis for incremental generation of causal relations
Pub Date: 1989-09-20, DOI: 10.1109/CMPSAC.1989.65171
Jingde Cheng, K. Ushijima
A problem in knowledge engineering is investigated: how to automatically generate new valid causal relations from known causal relations (e.g. those given in IF-THEN form). The authors discuss this problem from a logical viewpoint and propose entailment as a primitive logical basis for the incremental generation of causal relations. After a brief comparison between relevance logics and entailment logic, the authors define a subclass of entailment logic, give an algebraic model for it, show its soundness with respect to the model, and discuss deductive entailment reasoning based on the logic. As a result, for given causal relations, new valid causal relations can be generated by deductive entailment reasoning based on the logic.
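As an illustration of the idea (an invented example, not one from the paper): given the known causal relations "IF the coolant fails THEN the reactor overheats" and "IF the reactor overheats THEN the system shuts down", entailment-based deduction can generate the new causal relation "IF the coolant fails THEN the system shuts down", which follows from the given relations even though it was never stated explicitly.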
{"title":"Entailment as a logical basis for incremental generation of causal relations","authors":"Jingde Cheng, K. Ushijima","doi":"10.1109/CMPSAC.1989.65171","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65171","url":null,"abstract":"A problem in knowledge engineering is investigated in order to determine how to generate automatically new valid causal relations from some known causal relations (e.g. given with the IF-THEN form). The authors discuss this problem from a logical viewpoint and propose using the entailment that is a primitive logical basis for the incremental generation of causal relations. After a brief comparison between relevance logics and entailment logic, the authors define a subclass of entailment logic, given an algebraic model for it, show its soundness based on the model, and discuss deductive entailment reasoning based on the logic. As a result, for given causal relations, new valid causal relations can be generated by deductive entailment reasoning based on the logic.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114702786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Document retrieval expert system shell with worksheet-based knowledge acquisition facility
Pub Date: 1989-09-20, DOI: 10.1109/CMPSAC.1989.65096
C. Yasunobu, R. Itsuki, H. Tsuji, Fumihiko Mori
ESOCKS, a domain shell for intelligent document retrieval, is described. An important feature of ESOCKS is its associative retrieval capability: ESOCKS associates input keywords with other keywords and uses the augmented keyword set to retrieve documents. Keyword association prevents users from missing suitable documents, and the certainty factor attached to each retrieved document helps the user select suitable documents from among the candidates. Another important feature of ESOCKS is its knowledge acquisition facility, which uses eight kinds of worksheets. For experts, developing an expert system consists of extracting knowledge according to the prescribed format and entering it in the worksheets. Experience with ESOCKS indicates that an unassisted expert can build an expert system and that associative retrieval is more intelligent than conventional keyword retrieval.
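To make the associative-retrieval idea concrete, here is a small C sketch; the association table, the sample documents, and the take-the-maximum combination rule are invented for illustration and are not ESOCKS' knowledge base, worksheets, or certainty-factor calculus:

    #include <stdio.h>
    #include <string.h>

    struct assoc { const char *from, *to; double cf; };

    /* keyword association knowledge: a query for "retrieval" also
       suggests "indexing" with certainty 0.8, and so on */
    static const struct assoc assocs[] = {
        { "retrieval",     "indexing",              0.8 },
        { "retrieval",     "keyword",               0.9 },
        { "expert system", "knowledge acquisition", 0.7 },
    };

    struct doc { const char *title; const char *keywords[4]; };

    static const struct doc docs[] = {
        { "Intelligent document retrieval", { "retrieval", "keyword" } },
        { "Building expert systems",        { "expert system", "knowledge acquisition" } },
        { "Inverted file indexing",         { "indexing" } },
    };

    /* certainty contributed by keyword kw if the document carries it */
    static double match(const struct doc *d, const char *kw, double cf)
    {
        for (int i = 0; i < 4 && d->keywords[i]; i++)
            if (strcmp(d->keywords[i], kw) == 0)
                return cf;
        return 0.0;
    }

    int main(void)
    {
        const char *query = "retrieval";

        for (size_t d = 0; d < sizeof docs / sizeof docs[0]; d++) {
            double best = match(&docs[d], query, 1.0);   /* direct hit: 1.0 */

            /* augment the query with associated keywords and keep the
               highest certainty any of them achieves for this document */
            for (size_t a = 0; a < sizeof assocs / sizeof assocs[0]; a++)
                if (strcmp(assocs[a].from, query) == 0) {
                    double m = match(&docs[d], assocs[a].to, assocs[a].cf);
                    if (m > best) best = m;
                }

            if (best > 0.0)
                printf("%.2f  %s\n", best, docs[d].title);
        }
        return 0;
    }

With the query "retrieval", this sketch still surfaces the indexing document at certainty 0.8 even though that document never carries the query keyword, which is the effect the abstract attributes to keyword association.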
{"title":"Document retrieval expert system shell with worksheet-based knowledge acquisition facility","authors":"C. Yasunobu, R. Itsuki, H. Tsuji, Fumihiko Mori","doi":"10.1109/CMPSAC.1989.65096","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65096","url":null,"abstract":"ESOCKS, a domain shell for intelligent document retrieval is described. An important feature of ESOCKS is its associative retrieval capability. ESOCKS associates input keywords with other keywords and utilizes the augmented keywords to retrieve documents. Keyword association prevents users from missing suitable documents. The certainty factor attached to each retrieved document enables the user to select suitable documents from certain candidates. Another important feature of ESOCKS is its knowledge acquisition facility, using eight kinds of worksheet. For experts, developing an expert system consists of extracting knowledge according to the format and entering it in worksheets. Experience with ESOCKS indicates that an unassisted expert can built an expert system and that associative retrieval is more intelligent than conventional keyword retrieval.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"199 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115718216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}