Stochastic backpropagation: a learning algorithm for generalization problems
Pub Date: 1989-09-20 | DOI: 10.1109/CMPSAC.1989.65163
C. Ramamoorthy, S. Shekhar
Neural networks have traditionally been applied to recognition problems, and most learning algorithms are tailored to those problems. The authors discuss the requirements of learning for generalization, a problem that is NP-complete and cannot be addressed by traditional methods based on gradient descent. They present a stochastic learning algorithm based on simulated annealing in weight space. The convergence properties and feasibility of the algorithm are verified.
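The abstract names the core mechanism: simulated annealing over the network's weights instead of gradient descent. Below is a minimal Python sketch of that idea on a toy XOR network; the architecture, cooling schedule, and all parameter values are illustrative assumptions, not the authors' published configuration.

```python
import math
import random

random.seed(0)

# XOR training set: a classic task that a linear model cannot solve.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(z):
    # Numerically safe logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def forward(w, x):
    """2-2-1 feedforward net; w is a flat list of 9 weights (incl. biases)."""
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def energy(w):
    """Sum of squared errors over the training set: the quantity to anneal."""
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

def anneal(steps=20000, t0=1.0, cooling=0.9995, sigma=0.3):
    w = [random.uniform(-1, 1) for _ in range(9)]
    e, t = energy(w), t0
    for _ in range(steps):
        # Propose a random jump in weight space rather than a gradient step.
        cand = [wi + random.gauss(0, sigma) for wi in w]
        ec = energy(cand)
        # Metropolis rule: always accept improvements, and sometimes accept
        # uphill moves, which lets the search escape local minima.
        if ec <= e or random.random() < math.exp((e - ec) / t):
            w, e = cand, ec
        t *= cooling  # geometric cooling schedule
    return w, e

weights, err = anneal()
print("final sum-squared error:", round(err, 4))
```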
{"title":"Stochastic backpropagation: a learning algorithm for generalization problems","authors":"C. Ramamoorthy, S. Shekhar","doi":"10.1109/CMPSAC.1989.65163","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65163","url":null,"abstract":"Neural networks have traditionally been applied to recognition problems, and most learning algorithms are tailored to those problems. The authors discuss the requirements of learning for generalization, which is NP-complete and cannot be approached by traditional methods based on gradient descent. They present a stochastic learning algorithm based on simulated annealing in weight space. The convergence properties and feasibility of the algorithm are verified.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126358357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing consistency-preserving database transactions
Pub Date: 1989-09-20 | DOI: 10.1109/CMPSAC.1989.65099
Christina Liebelt
Transaction programs must, by definition, maintain all consistency constraints defined on a database. Although the overall operational consistency of a database rests on this crucial assumption, there are few design aids that support the design of consistent transaction programs. An approach is presented for verifying that a transaction program does not violate the defined integrity constraints: assuming the database is in a consistent state before the transaction program starts, it remains consistent after the execution of a correct transaction program. In this approach, all computations and all modifications to the database are identified and represented by symbolic values. The symbolic representation of the output variables and database operations is then used to verify the integrity constraints, making it possible to support the application programmer in designing correct transaction programs.
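As a rough illustration of the symbolic approach: represent the transaction's effect on the database state as symbolic expressions, then check that an integrity constraint assumed to hold before the transaction still holds after it, for all inputs. The bank-transfer example, the variable names, and the use of sympy are assumptions for this sketch, not taken from the paper.

```python
# pip install sympy
from sympy import symbols, simplify

# Symbolic values for the database state before the transaction.
a, b, amt = symbols("balance_a balance_b amount")

# Symbolic effect of the (hypothetical) transaction program:
# transfer 'amount' from account a to account b.
a_after = a - amt
b_after = b + amt

# Integrity constraint: the total of the two balances is invariant.
# If the constraint held before (total == a + b), it holds afterwards
# exactly when the symbolic difference simplifies to zero.
preserved = simplify((a_after + b_after) - (a + b)) == 0
print("constraint preserved for all inputs:", preserved)  # True
```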
{"title":"Designing consistency-preserving database transactions","authors":"Christina Liebelt","doi":"10.1109/CMPSAC.1989.65099","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65099","url":null,"abstract":"Transaction programs by definition have to maintain all consistency constraints defined on a database. Although the overall operational consistency of a database rests on this crucial assumption, there are few design aids to support the design of consistent transaction programs. An approach is presented for verifying that the defined integrity constraints are not violated by a transaction program. Assuming that the database is in a consistent state before the transaction program starts, the database stays consistent after the execution of a correct transaction program. In this approach all computations and all modifications on the database are identified and represented with symbolic values. The symbolic representation of the output variables and database operations is used to verify the integrity constraints. Therefore, it is possible to support the application programmer in designing correct transaction programs.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127188151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object-oriented database systems
Pub Date: 1989-09-20 | DOI: 10.1109/CMPSAC.1989.65081
F. Sadri
The object model, which is based on the abstract-data-type concept, provides a natural and more powerful modeling capability. This modeling power, coupled with efficiency of implementation, makes object-oriented database systems suitable for complex applications such as engineering design. The author concentrates on the differences between object-oriented databases and object-oriented programming languages, and between object-oriented databases and classical (relational) databases, and argues for the need to support schema evolution and object versions.
{"title":"Object-oriented database systems","authors":"F. Sadri","doi":"10.1109/CMPSAC.1989.65081","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65081","url":null,"abstract":"The object model which is based on the abstract data type concept, provides a natural and more powerful modeling capability. This modeling power, coupled with efficiency of implementation, makes object-oriented database systems suitable for complex applications, such as engineering design applications. The author concentrates on: differences between object-oriented databases and object-oriented programming languages and differences between object-oriented databases and classical (relational) databases. The author argues the need for supporting schema evolution and object versions.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126022927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software quality and productivity analysis at Hewlett-Packard
Pub Date: 1989-09-20 | DOI: 10.1109/CMPSAC.1989.65157
B. Zimmer
Software quality and productivity analysis (SQPA) is a relatively new program at Hewlett-Packard that has met with overwhelming success in the past two years. Its basis is a set of standardized questions that provide an assessment of the major factors affecting the software development environment. The history, methodology, impact, and results of using SQPA are discussed.
{"title":"Software quality and productivity analysis at Hewlett-Packard","authors":"B. Zimmer","doi":"10.1109/CMPSAC.1989.65157","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65157","url":null,"abstract":"Software quality and productivity analysis (SQPA) is a relatively new program at Hewlett-Packard which has met with overwhelming success in the past two years. Its basis is a set of standardized questions which provide an assessment of all the major factors which affect the software development environment. The history, methodology, impact, and results of using SQPA are discussed.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125414734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building reliable systems: software testing and analysis
Pub Date: 1989-09-20 | DOI: 10.1109/CMPSAC.1989.65136
J. Salasin
Approaches and metrics useful in assuring reliable software, starting from the time at which requirements are defined, are discussed. This study provides the background for a discussion of technology and policy advances that are expected to greatly improve the ability to field the promised systems.
{"title":"Building reliable systems: software testing and analysis","authors":"J. Salasin","doi":"10.1109/CMPSAC.1989.65136","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65136","url":null,"abstract":"Approaches and metrics useful in assuring reliable software, starting from the time at which requirements are defined, are discussed. This study provides the background for a discussion of technology and policy advances that are expected to greatly improve the ability to field the systems promised.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116431990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concurrent LISP based on lexical scope
Pub Date: 1989-09-20 | DOI: 10.1109/CMPSAC.1989.65168
S. Sugimoto, T. Sakaguchi, K. Tabata
A description is given of a concurrent LISP designed on the basis of lexical scope, called lexically scoped concurrent LISP (LS/CL). LS/CL is based on Common LISP: local variables are lexically scoped, and the functions, apart from the concurrency functions, are defined to satisfy the Common LISP language specification. Processes in LS/CL are created dynamically and cooperate with each other. In addition to the language features, the paper presents the environment-management mechanism for dynamically activated processes that have statically scoped variables, and describes an LS/CL system implemented on a workstation.
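The design point, that a dynamically created process carries the lexical environment in which it was defined, can be illustrated outside of LISP. The following Python sketch uses threads as stand-ins for LS/CL processes; it shows only the scoping behavior and is not LS/CL syntax.

```python
import threading

results = []
lock = threading.Lock()

def make_worker(n):
    # 'n' is a lexically scoped local. The closure returned below keeps
    # referring to this binding even after make_worker has returned, so
    # each dynamically created process sees its own environment.
    def worker():
        with lock:
            results.append(n * n)
    return worker

# Dynamically create several cooperating processes (threads here),
# each closing over a distinct lexical binding of 'n'.
threads = [threading.Thread(target=make_worker(i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 4, 9]
```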
{"title":"Concurrent LISP based on lexical scope","authors":"S. Sugimoto, T. Sakaguchi, K. Tabata","doi":"10.1109/CMPSAC.1989.65168","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65168","url":null,"abstract":"A description is given of Concurrent Lisp, which is designed on the basis of lexical scope. It is called lexically scoped concurrent LISP (LS/CL). LS/CL is based on Common LISP. Local variables are lexically scoped and the functions, except for concurrent functions, are defined to satisfy the language specifications of Common LISP. Processes of LS/CL are dynamically created and cooperate with each other. In addition to the language features, this paper shows the environment management mechanism of dynamically activated processes that have statically scoped variables. Also described is the LS/CL system implemented on a workstation.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132806031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A framework for expressing and controlling imprecision in databases
Pub Date: 1989-09-20 | DOI: 10.1109/CMPSAC.1989.65091
J. Srivastava, D. Rotem
The concept of precision in databases, for both queries and data, is introduced and formalized. Algorithms for maintaining a database at a specified degree of precision are presented and analyzed, and it is shown how a query optimizer can use these algorithms to choose an optimal query plan.
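One plausible reading of precision maintenance, sketched below under assumptions of ours rather than the paper's definitions: a cached copy of a value carries a precision bound, updates at the source propagate only when the bound would be violated, and a query can be served from the cache whenever the precision it requires is no tighter than that bound. The class and method names are hypothetical.

```python
class PreciseCache:
    def __init__(self, value, bound):
        self.cached = value      # approximate copy served to queries
        self.true_value = value  # authoritative value at the source
        self.bound = bound       # invariant: |cached - true_value| <= bound

    def update(self, new_value):
        """Apply an update at the source; refresh the cache only if needed."""
        self.true_value = new_value
        if abs(self.cached - self.true_value) > self.bound:
            self.cached = self.true_value  # propagate to restore the bound

    def query(self, required_precision):
        """Serve from the cache when its bound satisfies the query;
        otherwise fall through to the (more expensive) authoritative read."""
        if required_precision >= self.bound:
            return self.cached
        return self.true_value

c = PreciseCache(100.0, bound=5.0)
c.update(103.0)       # drift within bound: cache deliberately left stale
print(c.query(10.0))  # 100.0 -- imprecise but acceptable to this query
print(c.query(1.0))   # 103.0 -- this query demands tighter precision
```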
{"title":"A framework for expressing and controlling imprecision in databases","authors":"J. Srivastava, D. Rotem","doi":"10.1109/CMPSAC.1989.65091","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65091","url":null,"abstract":"The concept of precision in databases for both queries and data is introduced and formalized. Algorithms for maintaining a database at a specified degree of precision are presented and analyzed. It is shown how these algorithms can be used to choose an optimal query plan by means of a query optimizer.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134232878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Expert system based automatic network fault management system
Pub Date: 1989-09-20 | DOI: 10.1109/CMPSAC.1989.65178
S. Yeh, Chuan-lin Wu, Hong-Da Sheng, C. Hung, R. Lee
An expert system for network management is designed and prototyped to perform network troubleshooting automatically. The expert system employs management information provided by a monitoring mechanism of the network. The whole spectrum of fault-management information is analyzed, and the derived management knowledge is categorized into five types: physical properties, past experience, heuristic rules of thumb, predictable problems, and deep knowledge. These knowledge types and their associated rules are divided into groups to improve reasoning speed. The expert system is composed of a problem manager, a problem analyzer, and many problem solvers. A prototype, exercised with simulated faults, shows that the expert-system-based fault management system can automatically diagnose problems and take corrective actions.
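A minimal sketch of the manager/analyzer/solver decomposition with rules partitioned by knowledge type, so that reasoning searches only the relevant group. The symptom strings and rule contents are illustrative assumptions; only the five category names come from the abstract.

```python
# Rule base partitioned into the five knowledge categories named above.
RULES = {
    "physical":    [("link_down", "swap the cable on the reported port")],
    "experience":  [("intermittent_loss", "check the port that failed last month")],
    "heuristic":   [("slow_segment", "look for a broadcast storm first")],
    "predictable": [("disk_near_full", "rotate the gateway's log files")],
    "deep":        [("routing_loop", "recompute paths from the topology model")],
}

def analyze(symptom):
    """Problem analyzer: classify a symptom into one knowledge category."""
    for category, rules in RULES.items():
        if any(cond == symptom for cond, _ in rules):
            return category
    return None

def solve(symptom):
    """Problem manager: route the symptom to the solver for its rule group,
    so reasoning searches one category instead of the whole rule base."""
    category = analyze(symptom)
    if category is None:
        return "escalate to a human operator"
    for cond, action in RULES[category]:
        if cond == symptom:
            return action

print(solve("link_down"))     # swap the cable on the reported port
print(solve("routing_loop"))  # recompute paths from the topology model
```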
{"title":"Expert system based automatic network fault management system","authors":"S. Yeh, Chuan-lin Wu, Hong-Da Sheng, C. Hung, R. Lee","doi":"10.1109/CMPSAC.1989.65178","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65178","url":null,"abstract":"An expert system for network management is designed and prototyped to do network troubleshooting automatically. The expert system employs management information provided by a monitoring mechanism of the network. The whole spectrum of the fault management information is analyzed. The management knowledge derived is categorized into five types: the physical property, the experience in the past, the heuristic rule of thumb, the predictable problems, and the deep knowledge. These knowledge types and related rules are divided into groups to improve reasoning speed. The expert system is composed of a problem manager, a problem analyzer, and many problem solvers. A prototyped expert system, using simulated faults, shows that the expert-system-based fault management system can automatically diagnose problems and take corrective actions.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"254 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116010981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Models to estimate the number of faults still resident in the software after test/debug process
Pub Date: 1989-09-20 | DOI: 10.1109/CMPSAC.1989.65132
Y. Tohma
Models used to estimate the number of residual software faults, developed at universities and in industry in Japan, are examined. It is shown that the hypergeometric distribution model can be applied to real data of different types, and the relationship between the hypergeometric distribution model and models based on the nonhomogeneous Poisson process is clarified.
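For intuition about hypergeometric reasoning over residual faults, here is a worked capture-recapture illustration: two independent test rounds find n1 and n2 faults with an overlap of m, and the total fault count N is estimated by maximizing the hypergeometric likelihood of that overlap. This is a simplified stand-in for intuition only, not the exact model examined in the paper.

```python
from math import comb

def overlap_likelihood(N, n1, n2, m):
    """P(overlap = m) when round 2 draws n2 faults from a population of N,
    of which n1 were already 'marked' by round 1 (hypergeometric pmf)."""
    if N < n1 + n2 - m:
        return 0.0
    return comb(n1, m) * comb(N - n1, n2 - m) / comb(N, n2)

def estimate_total_faults(n1, n2, m, n_max=1000):
    # Maximum-likelihood estimate of N over a candidate range.
    candidates = range(n1 + n2 - m, n_max)
    return max(candidates, key=lambda N: overlap_likelihood(N, n1, n2, m))

# Round 1 finds 30 faults, round 2 finds 25, and 10 are found by both.
N_hat = estimate_total_faults(30, 25, 10)
print("estimated total faults:", N_hat)  # 75, the classic 30*25/10 estimate
print("estimated residual faults:", N_hat - (30 + 25 - 10))  # 30
```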
{"title":"Models to estimate the number of faults still resident in the software after test/debug process","authors":"Y. Tohma","doi":"10.1109/CMPSAC.1989.65132","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65132","url":null,"abstract":"Models used to estimate the number of residual software faults are examined. The models were developed by universities and industries in Japan. It has been shown that the hypergeometric distribution model can be applied to real data of different types. The relationship between the model based on the hypergeometric distribution and those based on the nonhomogeneous Poisson process has been revealed.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114258158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating the concepts and techniques of semantic modeling and the object-oriented paradigm
Pub Date: 1989-09-20 | DOI: 10.1109/CMPSAC.1989.65087
H. Lam, S. Su, A. Alashqur
The object orientation of a semantic association model (OSAM) is presented. It integrates the concepts and techniques of semantic modeling with those introduced by the object-oriented paradigm. Unlike conventional data models such as the relational model, the object orientation of OSAM lets the user model an application in terms of complex objects, classes, and their associations, instead of tuples (or records) and relations (or record types). The primitives (object, class, instance, and link) and the perspectives (class and object) of an OSAM database are described, and key differences between OSAM and a conventional object-oriented model are discussed. Object orientation and the explicit definition of semantic associations among objects allow the database of an application domain to be modeled, accessed, and manipulated at a higher conceptual level, simplifying users' tasks in developing their applications.
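A minimal sketch of the modeling style the abstract contrasts with flat tuples: instances linked by explicit, navigable associations that queries traverse directly instead of joining record types. The class names and association kinds below are illustrative assumptions, not OSAM's actual primitives.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    subparts: list["Part"] = field(default_factory=list)  # aggregation link

@dataclass
class Supplier:
    name: str
    supplies: list[Part] = field(default_factory=list)    # association link

wheel = Part("wheel")
engine = Part("engine", subparts=[Part("piston"), Part("crankshaft")])
car = Part("car", subparts=[engine, wheel])
acme = Supplier("ACME", supplies=[engine, wheel])

def all_subparts(part):
    """Navigate the aggregation association directly, at the level of
    complex objects, rather than joining flat record types."""
    for sp in part.subparts:
        yield sp
        yield from all_subparts(sp)

print([p.name for p in all_subparts(car)])
# ['engine', 'piston', 'crankshaft', 'wheel']
```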
{"title":"Integrating the concepts and techniques of semantic modeling and the object-oriented paradigm","authors":"H. Lam, S. Su, A. Alashqur","doi":"10.1109/CMPSAC.1989.65087","DOIUrl":"https://doi.org/10.1109/CMPSAC.1989.65087","url":null,"abstract":"The object orientation of a semantic association model (OSAM) is presented. It integrates the concepts and techniques of semantic modeling and those introduced by the object-oriented paradigm. Unlike conventional data models such as the relational model, the object orientation of OSAM allows the user to model an application in terms of complex objects, classes and their associations, instead of tuples (or records) and relations (or record types). The primitives (objects, class, instance and link) and the perspectives (class and object) of an OSAM database are described. Key differences between OSAM and a conventional object-oriented model are discussed. The features of object orientation and explicit definition of semantic associations among objects allow the database of an application domain to be modeled, accessed and manipulated at a higher conceptual level and thus simplify the tasks of the users in the development of their applications.<<ETX>>","PeriodicalId":339677,"journal":{"name":"[1989] Proceedings of the Thirteenth Annual International Computer Software & Applications Conference","volume":"409 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114940782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}