Incremental protocol verification using deductive database systems
I-En Liao and Ming T. Liu. DOI: 10.1109/ICDE.1989.47217
A deductive approach that incorporates first-order logic into a relational database is proposed to remedy the problems encountered in the relational-algebraic approach. The method not only allows recursive definitions for more complex logical reasoning but also provides users with a uniform query interface for verifying functional properties of a protocol. Most important, the deductive capability of the database makes it possible to define algorithms for incremental verification, which speeds up the verification process by reverifying protocols without regenerating the global states from scratch.

LINDA: a system for loosely integrated databases
A. Wolski. DOI: 10.1109/ICDE.1989.47201
LINDA, an experimental system for database access in heterogeneous environments, is discussed. The goal is to achieve maximum site autonomy and as much homogenization of database access as possible. The problems to be solved are discussed, and design priorities are justified. Major heterogeneity problems are sketched, and the applied techniques are briefly described. An overview of the implementation of the system is given, and the system's applicability is discussed.

Limiting factors of join performance on parallel processors
M. Lakshmi and Philip S. Yu. DOI: 10.1109/ICDE.1989.47254
The effectiveness of parallel processing of relational join operations is examined. Skew in the distribution of join-attribute values and the stochastic nature of task processing times are identified as the major factors affecting the effective utilization of parallelism. When many small processors are used in a parallel architecture, skew can turn some processors into bottlenecks while others sit underutilized. Even in the absence of skew, variations in the processing times of the parallel tasks belonging to a query can lead to high task-synchronization delay and limit the maximum speedup achievable through parallel execution. Analytic expressions for join execution time are developed for different task-time distributions, with and without skew.

Knowledge-based support for the development of database-centered applications
Hany M. Atchan, R. Bell, and B. Thuraisingham. DOI: 10.1109/ICDE.1989.47240
Using the Application Development Toolkit (ADT) as an example, it is shown that by borrowing techniques from the artificial-intelligence field, database-centered application-development productivity tools can be made more acceptable to end users and more useful to expert developers. Experience with ADT has indicated that a more end-user-oriented approach, and in particular a more accommodating, application-oriented interface, is needed. A characteristic set of problems found in the class of productivity tools similar to ADT is presented, and a series of possible improvements is suggested that sheds light on deficiencies in current state-of-the-art application-generation systems and productivity tools.

Integrating AI and DBMS through stream processing
D. S. Parker. DOI: 10.1109/ICDE.1989.47224
An approach is presented for integrating AI (artificial intelligence) systems with DBMS (database management systems). The impedance mismatch that has made this integration a problem is, in essence, a difference in the two systems' models of data processing. The present approach avoids the mismatch by forcing both AI systems and DBMS into the common model of stream processing. The approach taken in the Tangram project at UCLA, which integrates Prolog with relational DBMS, is described. Prolog is extended to a functional language called Log(F) that facilitates the development of stream-processing programs. The integration of this system with DBMS is simultaneously elegant, easy to use, and relatively efficient.

Password authentication based on public-key distribution cryptosystem
L. Harn, D. Huang, and C. Laih. DOI: 10.1109/ICDE.1989.47233
A password-authentication mechanism based on the public-key distribution cryptosystem is proposed. The scheme uses an authentication table in place of the traditional password file. With this scheme, even if the authentication table is compromised, system security is preserved. The user's password is effectively bound to the user's identification in a timely, efficient, and simple manner.

Experience in applying conceptual modelling to interface with a real-life business application
M. Pilote. DOI: 10.1109/ICDE.1989.47212
An insurance underwriting expert system is designed and implemented around an existing mainframe program, driven by a sizable old-fashioned database. It is shown how modeling helped not only to identify real user needs but also to untangle many accumulated complications in the existing repository of information. The author also highlights which conceptual modeling mechanisms are most needed to support such tasks.

Towards efficient algorithms for deadlock detection and resolution in distributed systems
ShouHan Wang and G. Vossen. DOI: 10.1109/ICDE.1989.47228
A theoretical framework for wait-for systems is provided, and the general characteristics of a correct algorithm for deadlock detection and resolution are presented. It is shown that the computational upper bounds (number of messages) for deadlock detection and resolution are both O(n^3) in the worst case when n transactions are involved. This result improves on previous ones, which are often exponential. Two correct deadlock detection and resolution algorithms that achieve these upper bounds are described.

Database analyzer and predictor: an overview
S. Orlando, V. Perri, S. Scrivano, and W. Staniszkis. DOI: 10.1109/ICDE.1989.47270
The database analyzer and predictor (DBAP) is a CASE (computer-aided software engineering) tool supporting the design and tuning of IDMS (integrated data-management system) DB/DC applications. It is an interactive system, completely integrated with the IDMS online environment, that provides the database designer with a comprehensive set of performance-analysis models covering the principal aspects of database system performance. Close coupling of the workload-analysis models with the computer-system model makes it possible to predict the response-time behavior of the modeled database system. The performance-oriented design methodology, supported by a menu-driven user interface, guides users through database design as well as through tuning and capacity-planning projects for existing database systems.

Quasi-partitioning: a new paradigm for transaction execution in partitioned distributed database systems
L. Lilien. DOI: 10.1109/ICDE.1989.47261
The quasi-partitioning paradigm of operation for partitioned database systems is discussed, in which a broken main link between two partitions can be replaced by a much slower backup link (e.g., a dial-up telephone connection). The paradigm solves the problem of preparing for network partitioning. The quasi-partitioning mode of operation has two primitive operations: creeping retrieval and creeping merge. Creeping retrieval increases data availability by crossing partition boundaries (over backup links) to read foreign data; similarly, creeping merge improves the degree of partition-consistency by crossing partition boundaries to perform merge actions. A quasi-partitioning protocol consists of an adaptation protocol and a merge protocol (the latter restoring partition-consistency after system reconnection), and taxonomies are given for both. Since merge protocols and adaptation protocols are interdependent, it is indicated how they should be paired.
