Pub Date: 1993-11-01  DOI: 10.1109/CMPSAC.1993.404234
Specification slicing in formal methods of software development
T. Oda, K. Araki
Specifications are important in software development because a large percentage of the errors found in the implementation and test phases are traceable to a lack of precision in the specifications. Formal methods are studied and applied to produce clear specifications and to reason about them rigorously. Since formal methods may even increase the share of the specification phase in the software life-cycle, debugging, modification, and reuse of specifications must be efficient to reduce the cost of the whole software development process. For large-scale specifications in particular, parts extracted from the specification are useful. We introduce specification slicing, which supports debugging, modification, and reuse of specifications. In this paper, we define a specification slice as the part of a specification that defines or restricts the values of a particular variable used in the specification. Attention is also directed to applications of specification slicing and to support tools.
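The abstract gives the definition of a slice but no algorithm. Purely as an illustration of that definition, the following Python sketch (the clause structure, field names, and fixed-point closure are our assumptions, not the paper's method) extracts the clauses of a toy specification that define or restrict a given variable, transitively:

    # Hypothetical sketch of specification slicing: a specification is modeled
    # as a list of clauses, each recording the variables it defines or restricts
    # ("constrains") and the variables it merely refers to ("reads").

    def spec_slice(clauses, target):
        """Return, in original order, the clauses relevant to `target`."""
        relevant = {target}          # variables whose values matter for the slice
        selected = set()             # indices of clauses kept in the slice
        changed = True
        while changed:               # iterate to a fixed point over dependences
            changed = False
            for i, clause in enumerate(clauses):
                if i in selected:
                    continue
                if clause["constrains"] & relevant:
                    selected.add(i)
                    relevant |= clause["reads"]   # these now influence the target
                    changed = True
        return [clauses[i]["text"] for i in sorted(selected)]

    # Example: slicing a toy account specification on the variable "balance"
    # keeps the first two clauses and drops the logging clause.
    spec = [
        {"text": "balance' = balance - amount", "constrains": {"balance"}, "reads": {"amount"}},
        {"text": "amount > 0",                  "constrains": {"amount"},  "reads": set()},
        {"text": "log' = log ^ [amount]",       "constrains": {"log"},     "reads": {"amount"}},
    ]
    print(spec_slice(spec, "balance"))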
{"title":"Specification slicing in formal methods of software development","authors":"T. Oda, K. Araki","doi":"10.1109/CMPSAC.1993.404234","DOIUrl":"https://doi.org/10.1109/CMPSAC.1993.404234","url":null,"abstract":"Specifications are important in software development because a large percentage of the errors at implementation and test phases are traceable to a lack of precision of the specifications. Formal methods are studied and applied to produce clear specifications and argue about them rigorously. As formal methods may even increase the ratio of specification phase in the software life-cycle, it is necessary to be efficient in debugging, modification, and reuse of specifications to reduce cost of the whole software development process. In a large scale specification in particular, parts extracted from the specification are useful. We introduce here a specification slicing that supports debugging, modification and reuse of specifications. In this paper, we define specification slice as a part of a specification that defines or restricts values of a particular variable used in the specification. Attention is also directed to applications of specification slicing and support tools.<<ETX>>","PeriodicalId":375808,"journal":{"name":"Proceedings of 1993 IEEE 17th International Computer Software and Applications Conference COMPSAC '93","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129324812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1993-11-01  DOI: 10.1109/CMPSAC.1993.404182
Computing with real world: a fuzzy duration calculus
M.-Y. Zhu, C.-W. Wang
A fuzzy duration calculus (FDC) is presented, which combines duration calculus and fuzzy logic in one framework. It will be used as a basis for studying intermixed models of analog and digital computing and as a formal framework for specifying and reasoning about hybrid computer systems.
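The abstract states the combination but gives no formulas. As a rough illustration of what combining the two ingredients can look like (standard duration-calculus notation; the fuzzy reading below is our assumption, not the paper's definition):

    % Classical duration calculus: for a Boolean state P over an interval [b, e],
    \[
      \int P \;=\; \int_b^e P(t)\,dt ,
      \qquad
      \lceil P \rceil \;\equiv\; \Bigl(\int P = \ell\Bigr) \wedge (\ell > 0),
    \]
    % where \ell is the length of the interval. A fuzzy variant could let P(t)
    % take membership degrees in [0, 1] instead of Boolean values, so that a
    % duration bound receives a graded truth value, for instance
    \[
      \mu\Bigl(\int P \ge c\Bigr) \;=\; \min\Bigl(1,\; \tfrac{1}{c}\int_b^e P(t)\,dt\Bigr).
    \]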
{"title":"Computing with real world: a fuzzy duration calculs","authors":"M.-Y. Zhu, C.-W. Wang","doi":"10.1109/CMPSAC.1993.404182","DOIUrl":"https://doi.org/10.1109/CMPSAC.1993.404182","url":null,"abstract":"A fuzzy duration calculus (FDC) is presented, which combines duration calculus and fuzzy logic into one framework. It will be used as a basis for studying intermixed models for analog and digital computing, as a formal framework for specifying and reasoning about hybrid computer systems.<<ETX>>","PeriodicalId":375808,"journal":{"name":"Proceedings of 1993 IEEE 17th International Computer Software and Applications Conference COMPSAC '93","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125572655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1993-11-01  DOI: 10.1109/CMPSAC.1993.404226
SD² - A system for distributed software development
Hsiao-Ying Yang, Don-Lin Lang, An-Chi Liu
We design and implement a system for distributed software development. The system helps the program designer by first extracting the program model (as a Petri net). The system then monitors the program run and collects the program trace into a database. The monitor also interacts with the network management system for resource allocation. Program debugging is achieved by replaying the trace data, both textually and graphically, in coordination with the program model.
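The abstract does not show how the replay is coordinated with the Petri net model; the minimal Python sketch below (net encoding, event names, and the divergence check are hypothetical) replays a recorded event trace against the model's firing rules:

    # Replay a recorded event trace against a Petri net model of the program.
    # Places hold token counts; each trace event names a transition, which must
    # be enabled (enough input tokens) before it fires.

    def replay(marking, transitions, trace):
        """marking: dict place -> tokens; transitions: dict name -> (inputs, outputs),
        each a dict place -> arc weight; trace: transition names from the monitor."""
        marking = dict(marking)
        for step, event in enumerate(trace):
            inputs, outputs = transitions[event]
            if any(marking.get(p, 0) < w for p, w in inputs.items()):
                print(f"step {step}: {event} not enabled; trace diverges from model")
                return marking
            for p, w in inputs.items():
                marking[p] -= w                      # consume input tokens
            for p, w in outputs.items():
                marking[p] = marking.get(p, 0) + w   # produce output tokens
            print(f"step {step}: fired {event}, marking = {marking}")
        return marking

    # Example: two processes synchronizing over a message channel.
    net = {
        "send":    ({"p_ready": 1}, {"channel": 1}),
        "receive": ({"channel": 1, "q_ready": 1}, {"q_done": 1}),
    }
    replay({"p_ready": 1, "q_ready": 1}, net, ["send", "receive"])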
{"title":"SD/sup 2/-A system for distributed software development","authors":"Hsiao-Ying Yang, Don-Lin Lang, An-Chi Liu","doi":"10.1109/CMPSAC.1993.404226","DOIUrl":"https://doi.org/10.1109/CMPSAC.1993.404226","url":null,"abstract":"We design and implementation a system for distributed software development. The system helps the program designer by first extracting the program model (in Petri nets). The system then monitors the program run and collects program trace into a database. The monitor also interacts with the network management system for resource allocation. Program debugging is achieved by replaying the trace data, both textually and graphically, in coordination with the program model.<<ETX>>","PeriodicalId":375808,"journal":{"name":"Proceedings of 1993 IEEE 17th International Computer Software and Applications Conference COMPSAC '93","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121291422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1993-11-01  DOI: 10.1109/CMPSAC.1993.404238
A task allocation algorithm for distributed computing systems
S. Yau, V. Satish
Most performance lapses in distributed computing systems can be traced to the lack of a good task allocation strategy for distributed software. Random assignment of tasks or modules to processors or subsystems can substantially degrade the performance of the entire distributed system. In this paper, a heuristic task allocation algorithm is presented for any distributed computing system in which the subsystems are connected by a local area network and communicate by broadcasting. The algorithm is based on minimizing communication cost and balancing the load among the subsystems. An example illustrating the algorithm is also given.
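The abstract names the two objectives but not the algorithm itself; the heuristic below is therefore only a generic greedy sketch in Python under those objectives (task and processor encodings, the alpha trade-off weight, and the placement order are our assumptions, not the paper's algorithm):

    # Greedy task allocation: place each task on the processor that minimizes
    # remote communication traffic plus a load-imbalance penalty.

    def allocate(tasks, procs, load, comm, alpha=1.0):
        """tasks: iterable of task ids; procs: list of processor ids;
        load[t]: CPU demand of task t; comm[(t, u)]: traffic between tasks t and u.
        Returns a dict task -> processor."""
        assign = {}
        proc_load = {p: 0.0 for p in procs}
        # place heavier tasks first so load balancing has room to work
        for t in sorted(tasks, key=lambda t: -load[t]):
            def cost(p):
                # remote traffic incurred if t goes to p ...
                remote = sum(w for (a, b), w in comm.items()
                             if (a == t and assign.get(b) not in (None, p)) or
                                (b == t and assign.get(a) not in (None, p)))
                # ... plus a load-imbalance penalty weighted by alpha
                return remote + alpha * (proc_load[p] + load[t])
            best = min(procs, key=cost)
            assign[t] = best
            proc_load[best] += load[t]
        return assign

    # Example with three tasks and two processors: "a" and "b" communicate
    # heavily, so they end up on the same processor, while "c" balances the load.
    print(allocate(["a", "b", "c"], ["P1", "P2"],
                   load={"a": 2, "b": 1, "c": 1},
                   comm={("a", "b"): 5, ("b", "c"): 1}))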
{"title":"A task allocation algorithm for distributed computing systems","authors":"S. Yau, V. Satish","doi":"10.1109/CMPSAC.1993.404238","DOIUrl":"https://doi.org/10.1109/CMPSAC.1993.404238","url":null,"abstract":"Most performance lapses in distributed computing systems can be traced to the lack of a good task allocation strategy for distributed software. Random assignment of tasks or modules onto processors or subsystems can substantially degrade the performance of the entire distribution system. In this paper a heuristic algorithm for task allocation for any distributed computing system where the subsystems are connected in the form of a local area network and communicate by means of broadcasting is presented. This algorithm is based on minimizing communication cost and balancing the load among its subsystems. An example to illustrate our algorithm is also given.<<ETX>>","PeriodicalId":375808,"journal":{"name":"Proceedings of 1993 IEEE 17th International Computer Software and Applications Conference COMPSAC '93","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125235147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1993-11-01  DOI: 10.1109/CMPSAC.1993.404222
Degrees of consciousness for reuse of software in practice: Maintainability, balance, standardization
S. Biffl, T. Grechenig
This paper deals with a model for software reuse that provides different levels of reuse intensity. The model is designed for industrial use and is based on experience from consulting for a large in-house developer (administrative software). It is drawn from state-of-the-art suggestions in reuse research as well as from the typical time and cost constraints of a less-than-ideal development scenario. The model can be taken as a recipe for reuse in practice, as it provides three different levels of reuse intensity and investment, and thus returns three different levels of reuse maturity. A basic level of reuse maturity in practice is to achieve maintainability: many programs, especially older ones, turn out to be largely undocumented; often requirements and/or abstract design are missing, and the programs do not meet basic criteria of maintainability. A medium level of reuse maturity is represented by balance within similar projects: a well-designed and therefore maintainable software system contains system-specific and general components. We define a group of software systems as balanced if there is a clear top-down structure from the general to the specific in the documents concerning analysis, design, code, and test. A new but similar system can then be designed by reusing upper-level components and adapting lower-level ones. A top level of reuse maturity in practice requires several technical and organizational efforts; we favor the term reuse culture. The design of a new project goes along with the use of repositories for all phases of development. Making a reuse culture work requires developing, providing, and enforcing standards. On the technical level this requires a quality assurance methodology; on the organizational level it includes a rather precise project information flow model. The roles for a reuse culture are defined.
{"title":"Degrees of consciousness for reuse of software in practice: Maintainability, balance, standardization","authors":"S. Biffl, T. Grechenig","doi":"10.1109/CMPSAC.1993.404222","DOIUrl":"https://doi.org/10.1109/CMPSAC.1993.404222","url":null,"abstract":"The following paper deals with a model for reuse of software providing different levels of reuse intensity. The model is designed for industrial use based on experiences from consulting a large inhouse developer (administrative software). It is drawn from state-of-the-art suggestions in reuse research as well as from typical constraints of time and costs in a less ideal development scenario. The model can be taken as a receipt for reuse in practice as it provides three different levels of reuse intensities/investments, and thus returns three different levels of reuse maturity. A basic level of reuse maturity in practice is to achieve maintainability: Many, especially older, programs turn out to be widely undocumented; often requirements and/or abstract design are missing, the programs do not meet basic criteria of maintainability. A medium level of reuse maturity is represented by balance within similar projects: A well designed and therefore maintainable software system contains system-specific and general components. We define a group of software systems as balanced, if there is a clear top-down structure from the general to the specific in documents concerning analysis, design, code and test. A new but similar system can be designed reusing upper level components and adapting lower level ones. A top-level reuse maturity in practice affords several technical and organizational efforts. We favor the term reuse culture. The design of a new project goes along with the use of repositories for all phases of development. Making a reuse culture work needs developing, providing and enforcing of standards. On the technical level this requires the use of quality assurance methodology, on the organizational level this includes a rather precise project information flow model. The roles for a reuse culture are defined.<<ETX>>","PeriodicalId":375808,"journal":{"name":"Proceedings of 1993 IEEE 17th International Computer Software and Applications Conference COMPSAC '93","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122581339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1993-11-01  DOI: 10.1109/CMPSAC.1993.404209
Object-oriented database views for supporting multidisciplinary concurrent engineering
K. Liu, D. Spooner
In this paper we present a new approach for creating materialized object-oriented (OO) views. We investigate the semantic content an OO model requires in order to support views. We present a view algebra that can manipulate both the is-a dimension and the association dimension of a composite object. We also propose a methodology for constructing materialized OO views.
{"title":"Object-oriented database views for supporting multidisciplinary concurrent engineering","authors":"K. Liu, D. Spooner","doi":"10.1109/CMPSAC.1993.404209","DOIUrl":"https://doi.org/10.1109/CMPSAC.1993.404209","url":null,"abstract":"In this paper we present a new approach for creating materialized object-oriented (OO) views. We investigate the required semantic content of an OO model in order to support views. We present a view algebra which is capable of manipulating both the is-a dimension and association dimension of a composite object. We also propose a methodology for constructing materialized OO views.<<ETX>>","PeriodicalId":375808,"journal":{"name":"Proceedings of 1993 IEEE 17th International Computer Software and Applications Conference COMPSAC '93","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127160711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1993-11-01  DOI: 10.1109/CMPSAC.1993.404249
System dependence graph construction for recursive programs
P. Livadas, S. Croll
In a previous paper we demonstrated that a parse-tree-based system dependence graph (SDG) yields smaller and therefore more precise slices than a statement-based SDG. Furthermore, we described extensions to the SDG that were made to handle particular constructs found in ANSI C. In this paper, we describe a new method for calculating transitive dependences (in the presence of recursion) and thereby build an SDG that does not require calculation of the GMOD and GREF sets. Furthermore, this method does not require construction of a linkage grammar and its corresponding subordinate characteristic graphs. Additionally, we illustrate the versatility of the SDG as an internal program representation by briefly presenting a tool we have developed that can perform interprocedural slicing, dicing, and ripple analysis, in addition to other software engineering activities, on programs written in a subset of ANSI C.
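The paper's contribution is how the SDG is built; once a dependence graph exists, a basic slice can be computed as backward reachability over dependence edges (ignoring the calling-context refinements a full SDG enables). A minimal Python sketch of that step, with a hypothetical graph encoding:

    # Backward slicing as graph reachability: starting from the slicing
    # criterion, collect every node it transitively depends on.

    from collections import deque

    def backward_slice(deps, criterion):
        """deps: dict node -> set of nodes it depends on (data or control).
        Returns all nodes the criterion transitively depends on, plus itself."""
        seen = {criterion}
        work = deque([criterion])
        while work:
            node = work.popleft()
            for pred in deps.get(node, ()):
                if pred not in seen:
                    seen.add(pred)
                    work.append(pred)
        return seen

    # Example: statement 4 depends on 2 and 3; 2 depends on 1.
    deps = {4: {2, 3}, 2: {1}, 3: set(), 1: set()}
    print(sorted(backward_slice(deps, 4)))   # [1, 2, 3, 4]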
{"title":"System dependence graph construction for recursive programs","authors":"P. Livadas, S. Croll","doi":"10.1109/CMPSAC.1993.404249","DOIUrl":"https://doi.org/10.1109/CMPSAC.1993.404249","url":null,"abstract":"In a previous paper we demonstrated that a parse-tree-based system dependence graph (SDG) provides us with smaller and therefore more precise slices than a statement-based SDG. Furthermore, we described extensions to the SDG that were made to handle particular constructs found in ANSI C. In this paper, we describe a new method for the calculation of transitive dependences (in the presence of recursion) and therefore build a SDG that does not require calculation of the GMOD and GREF sets. Furthermore, this method does not require construction of a linkage grammar and its corresponding subordinate characteristic graphs. Additionally, we illustrate the versatility of the SDG as an internal program representation by briefly presenting a tool that we have developed that can perform interprocedural slicing, dicing, and ripple analysis in addition to other software engineering activities on programs written in a subset of ANSI C.<<ETX>>","PeriodicalId":375808,"journal":{"name":"Proceedings of 1993 IEEE 17th International Computer Software and Applications Conference COMPSAC '93","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128796957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1993-11-01  DOI: 10.1109/CMPSAC.1993.404178
Online workload, performance and scalability of a database production system: A case study
M. Hsueh
Today, enterprises are seeking solutions that integrate their business functions using advanced computing technologies. Meanwhile, these enterprises are cutting information centers' budgets to lower costs. Computer downsizing becomes essential for cost saving. However, computer downsizing also threatens those enterprises whose business success depends heavily on delivering fast, accurate service to customers. System performance is the number one issue, followed by scalability. This paper presents an experimental study conducted on a system that mimicked a production system installed at a customer site. The experiment was designed to focus on online workload characterization and performance evaluation as the use of the system changed along with projected business growth.
{"title":"Online workload, performance and scalability of a database production system: A case study","authors":"M. Hsueh","doi":"10.1109/CMPSAC.1993.404178","DOIUrl":"https://doi.org/10.1109/CMPSAC.1993.404178","url":null,"abstract":"Today, enterprises are seeking solutions to integrate their business functions by using advanced computing technologies. Meanwhile, these enterprises are cutting the information centers' budgets to lower the costs. Computer downsizing becomes essential for cost saving. However, computer downsizing also threatens those enterprises whose business success depends heavily on delivering fast, accurate service to customers. System performance is the number one issue followed by the scalability. This paper presents an experimental study conducted on a system that mimicked a production system installed at a customer site. The experiment was designed to focus on the online work-load characterization and the performance evaluation when the use of the system changed along with the projected business growth.<<ETX>>","PeriodicalId":375808,"journal":{"name":"Proceedings of 1993 IEEE 17th International Computer Software and Applications Conference COMPSAC '93","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133829082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1993-11-01  DOI: 10.1109/CMPSAC.1993.404188
Fault-tolerant distributed deadlock detection/resolution
Pei-yu Li, B. McMillin
In an unreliable distributed system, faulty processors may prevent a deadlock detection algorithm from properly detecting deadlocks. However, few of the algorithms proposed in the literature address the issue of handling process failures in a distributed system. This paper proposes a fault-tolerant distributed deadlock detection algorithm that integrates a priority-based probe algorithm with a PMC-based diagnosis model. The algorithm detects deadlock cycles and identifies process failures, under a bounded number of failures in a deadlock cycle, by using extended probe messages that carry additional information about faulty processors.
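The priority handling and the PMC-based fault diagnosis are the paper's contribution and are not reproduced here; the Python sketch below only illustrates the underlying edge-chasing idea that probe algorithms build on (the wait-for graph encoding and process names are hypothetical):

    # Edge-chasing deadlock detection: a blocked process sends a probe along
    # wait-for edges; a probe that returns to its initiator reveals a cycle.

    def detect_deadlock(wait_for, initiator):
        """wait_for: dict process -> set of processes it waits for.
        Returns a wait-for cycle containing the initiator, or None."""
        def chase(path, current):
            for nxt in wait_for.get(current, ()):
                if nxt == initiator:
                    return path + [nxt]            # probe came back: deadlock
                if nxt not in path:
                    cycle = chase(path + [nxt], nxt)
                    if cycle:
                        return cycle
            return None
        return chase([initiator], initiator)

    # Example: P1 -> P2 -> P3 -> P1 is a deadlock cycle.
    print(detect_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}, "P1"))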
{"title":"Fault-tolerant distributed deadlock detection/resolution","authors":"Pei-yu Li, B. McMillin","doi":"10.1109/CMPSAC.1993.404188","DOIUrl":"https://doi.org/10.1109/CMPSAC.1993.404188","url":null,"abstract":"In an unreliable distributed system, faulty processors may prevent a deadlock detection algorithm from properly detecting deadlocks. However, few of the algorithms proposed in the literature address the issue of handling process failures in a distributed system. This paper proposes a fault-tolerant distributed deadlock detection algorithm which integrates a priority-based probe algorithm with a PMC-based diagnosis model. This algorithm detects deadlock cycles as well as identifies process failures under a bounded number of failures in a deadlock cycle by using extended probe messages that contain additional information about faulty processors.<<ETX>>","PeriodicalId":375808,"journal":{"name":"Proceedings of 1993 IEEE 17th International Computer Software and Applications Conference COMPSAC '93","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122337375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1993-11-01  DOI: 10.1109/CMPSAC.1993.404231
A reduced incremental ECFSM-based protocol verification
Chung-Ming Huang, Hsin-Yi Lai, Duen-Tay Huang
The extended communicating finite state machine (ECFSM) model, which belongs to the state transition model, has been used to formally specify protocols with context variables and predicates. Global state reachability analysis is one of the most straightforward ways to verify communication protocols specified in the state transition model. Many CFSM-based global state reduction techniques have been proposed to reduce the complexity of global state reachability analysis. However, these reduction techniques cannot be directly applied to protocol verification systems based on ISO's Estelle and CCITT's SDL, which are ECFSM-based formal description techniques (FDTs). Based on Itoh and Ichikawa's (1983) CFSM-based and Chu and Liu's (1989) ECFSM-based reduction techniques, and on Huang et al.'s (1990) CFSM-based incremental verification technique, this paper proposes a protocol verification technique for ECFSM-based n-entity protocols. In this way, the integrated reduced incremental verification technique can be directly applied to Estelle- or SDL-based protocol verification systems.
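As context for what the reduction targets, plain (unreduced) global state reachability analysis for communicating FSMs can be sketched as a breadth-first search over tuples of local states and FIFO channel contents; the Python encoding below is an assumed illustration, not the paper's reduced incremental technique, and it omits context variables and predicates:

    # Breadth-first exploration of the global state space of communicating FSMs.
    # A global state is (local states of all machines, contents of all channels).
    # With unbounded sending, a channel-length bound would be needed to terminate.

    from collections import deque

    def reachable(machines, n_channels, init):
        """machines: list (one per process) of dicts mapping (state, action) to a
        next state, where action is ('send', ch, msg) or ('recv', ch, msg);
        init: tuple of initial local states. Channels are FIFO queues 0..n_channels-1."""
        start = (init, tuple(() for _ in range(n_channels)))
        seen = {start}
        work = deque([start])
        while work:
            states, chans = work.popleft()
            for i, trans in enumerate(machines):
                for (state, action), nxt in trans.items():
                    if state != states[i]:
                        continue
                    kind, ch, msg = action
                    new_chans = list(chans)
                    if kind == "send":
                        new_chans[ch] = chans[ch] + (msg,)       # append to queue
                    elif chans[ch] and chans[ch][0] == msg:       # receive queue head
                        new_chans[ch] = chans[ch][1:]
                    else:
                        continue                                  # receive not enabled
                    new_states = states[:i] + (nxt,) + states[i + 1:]
                    g = (new_states, tuple(new_chans))
                    if g not in seen:
                        seen.add(g)
                        work.append(g)
        return seen

    # Example: a sender/receiver pair exchanging one message on channel 0.
    sender   = {("s0", ("send", 0, "m")): "s1"}
    receiver = {("r0", ("recv", 0, "m")): "r1"}
    print(len(reachable([sender, receiver], 1, ("s0", "r0"))))   # 3 global states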
{"title":"A reduced incremental ECFSM-based protocol verification","authors":"Chung-Ming Huang, Hsin-Yi Lai, Duen-Tay Huang","doi":"10.1109/CMPSAC.1993.404231","DOIUrl":"https://doi.org/10.1109/CMPSAC.1993.404231","url":null,"abstract":"The extended communication finite state machine (ECFSM) model, which belongs to the state transition model, has been used to formally specify protocols with context variables and predicates. Global state reachability analysis is one of the most straight-forward ways to verify communication protocols specified in the state transition model. Many CFSM-based global state reduction techniques have been proposed to reduce the complexity of global state reachability analysis. However, these reduction techniques cannot be directly applied to ISO's Estelle and CCITT's SDL, which are ECFSM-based formal description techniques (FDTs)-based protocol verification systems. Based on Itoh and Ichikawa's (1983) CFSM-based, Chu and Liu's (1989) ECFSM-based reduction techniques, and Huang et al.'s (1990) CFSM-based incremental verification technique, this paper proposes a protocol verification technique for ECFSM-based n-entity protocols. In this way, the integrated reduced incremental verification technique can be directly applied to Estelle or SDL -based protocol verification systems.<<ETX>>","PeriodicalId":375808,"journal":{"name":"Proceedings of 1993 IEEE 17th International Computer Software and Applications Conference COMPSAC '93","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127368509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}