Jedi: extracting and synthesizing information from the Web
Pub Date: 1998-08-20 | DOI: 10.1109/COOPIS.1998.706182
G. Huck, Péter Fankhauser, K. Aberer, E. Neuhold
Jedi (Java-based Extraction and Dissemination of Information) is a lightweight tool for creating wrappers and mediators that extract, combine, and reconcile information from several independent information sources. For wrappers it uses attributed grammars, which are evaluated with a fault-tolerant parsing strategy to cope with ambiguous grammars and irregular sources. For mediation it uses a simple generic object model that can be extended with Java libraries for specific models such as HTML, XML, or the relational model. This paper describes the architecture of Jedi and then focuses on its wrapper generator.
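The wrapper side of this approach pairs grammar rules with attribute computations and simply skips over text that does not match instead of failing on it. Below is a minimal Python sketch of that idea; the rule, the record fields, and the sample HTML are invented for illustration and are not Jedi's actual grammar language or API.

```python
import re

# A "production" pairs a pattern with an attribute computation. Fault tolerance here
# means unmatched text is skipped and scanning continues, rather than aborting on the
# first irregularity in the source.
PRICE_ROW = re.compile(r"<td>(?P<title>[^<]+)</td>\s*<td>\$(?P<price>\d+\.\d{2})</td>")

def extract_offers(html: str):
    """Scan an HTML fragment and build an attribute record for every row that matches."""
    offers = []
    for m in PRICE_ROW.finditer(html):  # non-matching regions are silently skipped
        offers.append({"title": m.group("title").strip(),
                       "price": float(m.group("price"))})  # attribute computation
    return offers

if __name__ == "__main__":
    page = "<tr><td>Modem</td><td>$49.90</td></tr> noise <tr><td>Router</td><td>$120.00</td></tr>"
    print(extract_offers(page))  # -> [{'title': 'Modem', 'price': 49.9}, {'title': 'Router', 'price': 120.0}]
```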
{"title":"Jedi: extracting and synthesizing information from the Web","authors":"G. Huck, Péter Fankhauser, K. Aberer, E. Neuhold","doi":"10.1109/COOPIS.1998.706182","DOIUrl":"https://doi.org/10.1109/COOPIS.1998.706182","url":null,"abstract":"Jedi (Java based Extraction and Dissemination of Information) is a lightweight tool for the creation of wrappers and mediators to extract, combine, and reconcile information from several independent information sources. For wrappers it uses attributed grammars, which are evaluated with a fault-tolerant parsing strategy to cope with ambiguous grammars and irregular sources. For mediation it uses a simple generic object-model that can be extended with Java-libraries for specific models such as HTML, XML or the relational model. This paper describes the architecture of Jedi, and then focuses on Jedi's wrapper generator.","PeriodicalId":106219,"journal":{"name":"Proceedings. 3rd IFCIS International Conference on Cooperative Information Systems (Cat. No.98EX122)","volume":"288 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114105993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A study of least privilege in CapBasED-AMS
Pub Date: 1998-08-20 | DOI: 10.1109/COOPIS.1998.706199
P. Hung, K. Karlapalem, J. W. Gray
Workflow systems are becoming very popular and are being used to support many day-to-day activities in large organizations. One of the major problems with workflow systems is that they often use heterogeneous and distributed hardware and software systems to execute a given activity. This gives rise to decentralized security policies and mechanisms which, in order to enable activity execution, grant too many privileges to the agents (humans or systems) executing the work. We develop the concept of least privilege, wherein the set of agents is given just enough privileges to complete the given activities. We develop our concepts in the context of CapBasED-AMS (Capability-based and Event-driven Activity Management System), which deals with the management and execution of activities. An activity consists of multiple inter-dependent tasks (atomic activities, each executed by a single agent) that need to be coordinated, scheduled, and executed by a set of agents. We formalize the concept of least privilege and present algorithms for statically computing a least-privilege assignment for the agents. We also develop the concept of dynamic least-privilege enforcement, wherein an agent holds its privileges only for the duration of the task for which they were assigned. Finally, we introduce a metric, the security risk factor, and use it to evaluate the trade-off between least privilege and resilience to agent failure.
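Read concretely, static least-privilege assignment amounts to giving each agent exactly the union of the privileges required by the tasks it is scheduled to execute, and nothing more. The following toy sketch illustrates that reading; the task, privilege, and agent names are hypothetical and do not reflect CapBasED-AMS syntax.

```python
# Privileges each task requires, and the tasks assigned to each agent.
task_privileges = {
    "enter_order":   {"read_catalog", "write_order"},
    "approve_order": {"read_order", "sign_order"},
    "ship_order":    {"read_order", "update_inventory"},
}
assignment = {"clerk": ["enter_order"], "manager": ["approve_order", "ship_order"]}

def least_privilege(assignment, task_privileges):
    """Give every agent the union of privileges over its assigned tasks -- and nothing more."""
    return {agent: set().union(*(task_privileges[t] for t in tasks))
            for agent, tasks in assignment.items()}

print(least_privilege(assignment, task_privileges))
# clerk gets only what enter_order needs; manager gets what its two tasks need.
```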
{"title":"A study of least privilege in CapBasED-AMS","authors":"P. Hung, K. Karlapalem, J. W. Gray","doi":"10.1109/COOPIS.1998.706199","DOIUrl":"https://doi.org/10.1109/COOPIS.1998.706199","url":null,"abstract":"Workflow systems are becoming very popular and are being used to support many of the day to day activities in large organizations. One of the major problems with workflow systems is that they often use heterogeneous and distributed hardware and software systems to execute a given activity. This gives rise to decentralized security policies and mechanisms, which, in order to enable activity execution, give too many privileges to agents (humans or systems) for executing the work. We develop the concept of least privilege, wherein the set of agents are given just enough privileges to complete the given activities. We develop our concepts in the context of CapBasED-AMS (Capability-based and Event-driven Activity Management System). CapBasED-AMS deals with the management and execution of activities. An activity consists of multiple inter-dependent tasks (atomic activities, each executed by a single agent) that need to be coordinated, scheduled and executed by a set of agents. We formalize the concept of least privilege and present algorithms to statically assign least privilege assignment to the agents. We develop the concept of dynamic least privilege enforcement, wherein an agent is given its privileges only during the duration of the task for which those privileges were assigned. Finally, we introduce a metric, security risk factor and use it to evaluate the trade-off between least privilege and resilience to agent failure.","PeriodicalId":106219,"journal":{"name":"Proceedings. 3rd IFCIS International Conference on Cooperative Information Systems (Cat. No.98EX122)","volume":"200 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123357595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Context mediation on Wall Street
Pub Date: 1998-08-20 | DOI: 10.1109/COOPIS.1998.706228
A. Moulton, S. Madnick, M. Siegel
The paper reports on a practical implementation of a context mediator for the fixed-income securities industry. The authors describe the industry circumstances and the data and calculation services (DCS) mediator developed and deployed in the early 1990s. The mediator was designed as an interpretive engine controlled by a static declarative knowledge structure and client preference data. In addition to heterogeneous, autonomous data sources, the mediator integrated autonomously developed local and remote procedural components. Client access to both data and computational resources was provided through an active conceptual model. Structural and semantic context conversions were used to integrate disparate components and to support varying client needs. Lessons learned from the implementation and use of this mediator provide insight into the requirements for a successful context mediator.
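As a flavour of the structural and semantic context conversions such a mediator performs, the sketch below converts a bond price quoted in 32nds (a common fixed-income convention) into the decimal convention a client context expects. The context dictionaries and field names are invented; the paper does not publish the DCS rule language.

```python
# Declarative context descriptions: how the source and the client quote prices.
source_context = {"price_quotation": "32nds"}    # "99-16" means 99 + 16/32
client_context = {"price_quotation": "decimal"}

def convert_price(value: str, src: dict, dst: dict) -> float:
    """Mediate between the quotation conventions declared in the two contexts."""
    if src["price_quotation"] == "32nds":
        whole, ticks = value.split("-")
        decimal = int(whole) + int(ticks) / 32.0
    else:
        decimal = float(value)
    if dst["price_quotation"] != "decimal":
        raise NotImplementedError("other target conventions are omitted in this sketch")
    return decimal

print(convert_price("99-16", source_context, client_context))  # -> 99.5
```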
{"title":"Context mediation on Wall Street","authors":"A. Moulton, S. Madnick, M. Siegel","doi":"10.1109/COOPIS.1998.706228","DOIUrl":"https://doi.org/10.1109/COOPIS.1998.706228","url":null,"abstract":"The paper reports on a practical implementation of a context mediator for the fixed income securities industry. The authors describe industry circumstances and the data and calculation services (DCS) mediator developed and deployed in the early 1990s. The mediator was designed as an interpretive engine controlled by a static declarative knowledge structure and client preference data. In addition to heterogeneous, autonomous data sources, the mediator integrated autonomously developed local and remote procedural components. Client access to both data and computational resources were provided through an active conceptual model. Structural and semantic context conversions were used to integrate disparate components and to support varying client needs. Lessons learned from the implementation and usage of this mediator provide insight into the requirements for a successful context mediator.","PeriodicalId":106219,"journal":{"name":"Proceedings. 3rd IFCIS International Conference on Cooperative Information Systems (Cat. No.98EX122)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132676433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information services for the Web: building and maintaining domain models
Pub Date: 1998-08-20 | DOI: 10.1109/COOPIS.1998.706179
A. Gal, Scott Kerr, J. Mylopoulos
The World Wide Web is serving as a leading vehicle for information dissemination by offering information services such as product information, group interactions, and sales transactions. Three major factors affect the performance and reliability of information services for the Web: the distribution of information resulting from the globalization of information systems, the heterogeneity of information sources, and the instability of sources caused by their autonomous evolution. This paper focuses on integrating existing information sources, available via the Web, into the delivery of information services. The primary objective of the paper is to provide mechanisms for structuring and maintaining a domain model for Web applications. These mechanisms are based on conceptual modeling techniques, where concepts are defined and refined within a meta-data repository through the use of instantiation, generalization, and attribution. In addition, active database techniques are exploited to provide robust mechanisms for maintaining a consistent domain model in a rapidly evolving environment such as the Web.
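The active-database flavour of this maintenance can be pictured as event-condition-action rules over the meta-data repository: when a source concept evolves, dependent concepts are checked and flagged for revision. A toy sketch of that loop follows; the repository structure and concept names are assumptions for illustration, not the paper's implementation.

```python
# A toy event-condition-action (ECA) rule over a meta-data repository.
repository = {
    "ProductPage": {"depends_on": [], "stale": False},
    "PriceList":   {"depends_on": ["ProductPage"], "stale": False},
}

def on_source_changed(concept: str):
    """Event: a source concept evolved. Condition: something depends on it. Action: flag it."""
    for name, meta in repository.items():
        if concept in meta["depends_on"]:
            meta["stale"] = True  # mark for re-derivation or designer review

on_source_changed("ProductPage")
print([name for name, meta in repository.items() if meta["stale"]])  # -> ['PriceList']
```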
{"title":"Information services for the Web: building and maintaining domain models","authors":"A. Gal, Scott Kerr, J. Mylopoulos","doi":"10.1109/COOPIS.1998.706179","DOIUrl":"https://doi.org/10.1109/COOPIS.1998.706179","url":null,"abstract":"The World Wide Web is serving as a leading vehicle for information dissemination by offering information services, such as product information, group interactions, or sales transactions. Three major factors affect the performance and reliability of information services for the Web: the distribution of information which has resulted from the globalization of information systems, the heterogeneity of information sources, and the sources' instability caused by their autonomous evolution. This paper focuses on integrating existing information sources, available via the Web, in the delivery of information services. The primary objective of the paper is to provide mechanisms for structuring and maintaining a domain model for Web applications. These mechanisms are based on conceptual modeling techniques, where concepts are being defined and refined within a meta-data repository through the use of instantiation, generalization and attribution. Also, active databases techniques are exploited to provide robust mechanisms for maintaining a consistent domain model in a rapidly evolving environment, such as the Web.","PeriodicalId":106219,"journal":{"name":"Proceedings. 3rd IFCIS International Conference on Cooperative Information Systems (Cat. No.98EX122)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129265617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OMS Connect: supporting multidatabase and mobile working through database connectivity
Pub Date: 1998-08-20 | DOI: 10.1109/COOPIS.1998.706201
M. Norrie, Alexios Palinginis, Alain Würgler
The authors present a general model of database connectivity for the controlled sharing and migration of information across databases, as supported in the object-oriented database management system OMS Connect. A database may connect to one or more other databases, enabling remote data to be viewed, processed, and copied within the local database in such a way that consistency of the user's working space can be maintained. Further, the objects of the remote database may be extended locally with attributes, methods, and additional classifications. Importantly, operation of the local database does not depend on such a connection, and remote objects may be replicated locally with explicit synchronisation points, making the system suitable for mobile computing.
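The replication behaviour described above can be pictured as a local proxy that keeps a working copy of remote objects and exchanges updates only at explicit synchronisation points, so local work continues while disconnected. The class and method names below are invented for illustration and are not OMS Connect's interface.

```python
class RemoteReplica:
    """Locally replicated remote objects; changes flow only when sync() is called."""

    def __init__(self, remote: dict):
        self._remote = remote          # stands in for the remote database
        self._local = dict(remote)     # replicated working copy
        self._dirty = set()

    def update(self, oid, value):      # usable even while disconnected
        self._local[oid] = value
        self._dirty.add(oid)

    def sync(self):                    # explicit synchronisation point
        for oid in self._dirty:
            self._remote[oid] = self._local[oid]   # push local changes
        self._local.update(self._remote)           # pull remote changes
        self._dirty.clear()

remote_db = {"o1": "draft"}
replica = RemoteReplica(remote_db)
replica.update("o1", "reviewed")
replica.sync()
print(remote_db)  # -> {'o1': 'reviewed'}
```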
{"title":"OMS Connect: supporting multidatabase and mobile working through database connectivity","authors":"M. Norrie, Alexios Palinginis, Alain Würgler","doi":"10.1109/COOPIS.1998.706201","DOIUrl":"https://doi.org/10.1109/COOPIS.1998.706201","url":null,"abstract":"The authors present a general model of database connectivity for the controlled sharing and migration of information across databases as supported in the object-oriented database management system OMS Connect. A database may connect to one or more other databases, thereby enabling remote data to be viewed, processed and copied within the local database in such a way that consistency of the user working space can be maintained. Further the objects of the remote database may be extended locally with attributes and methods and additional classifications. Importantly, operation of the local database is not dependent on such a connection and remote objects may be replicated locally with explicit synchronisation points, thus making the system suitable for mobile computing.","PeriodicalId":106219,"journal":{"name":"Proceedings. 3rd IFCIS International Conference on Cooperative Information Systems (Cat. No.98EX122)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121401502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reducing the blocking in two-phase commit protocol employing backup sites
Pub Date: 1998-08-20 | DOI: 10.1109/COOPIS.1998.706315
P. K. Reddy, M. Kitsuregawa
In distributed database systems (DDBSs), a transaction blocks during two-phase commit (2PC) processing if the coordinator site fails while some participant site has declared itself ready to commit the transaction. This blocking phenomenon reduces the availability of the system, since blocked transactions hold all their resources until they receive the final command from the coordinator after its recovery. To remove the blocking problem of the 2PC protocol, the three-phase commit (3PC) protocol was proposed. Although the 3PC protocol eliminates blocking, it involves an extra round of message transmission, which further degrades the performance of DDBSs. We propose a backup commit (BC) protocol that adds a backup phase to the 2PC protocol. In this protocol, one backup site is attached to each coordinator site. After receiving responses from all participants in the first phase, the coordinator communicates its decision only to its backup site in the backup phase; afterwards, it sends the final decision to the participants. When blocking occurs due to the failure of the coordinator site, the participant sites consult the coordinator's backup site and follow termination protocols. In this way, the BC protocol achieves a non-blocking property for most coordinator-site failures. In the worst case, however, blocking can still occur when both the coordinator and its backup site fail simultaneously; in this rare case, the participants wait until either the coordinator site or the backup site recovers. The BC protocol suits DDBS environments in which sites fail frequently and message delivery times are long. Simulation experiments show that the BC protocol achieves better throughput and response-time performance than the 3PC protocol and performs close to the 2PC protocol.
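Operationally, the backup phase sits between the two phases of 2PC: the coordinator records its decision at the backup site before any participant learns the outcome, so a participant left waiting by a coordinator crash can ask the backup instead of blocking. The schematic sketch below simulates the coordinator side with direct function calls; it is an illustration of the idea, not a full fault-tolerant implementation of the paper's protocol.

```python
def backup_commit(participants, backup_site, decision_log):
    """Coordinator side of a backup-commit round, with messages simulated as calls."""
    # Phase 1: collect votes, exactly as in 2PC.
    votes = [p.prepare() for p in participants]
    decision = "commit" if all(v == "yes" for v in votes) else "abort"

    # Backup phase: only the backup site learns the decision at this point.
    backup_site.record(decision)

    # Phase 2: deliver the decision; a participant that loses the coordinator here
    # would consult backup_site in its termination protocol instead of blocking.
    for p in participants:
        p.deliver(decision)
    decision_log.append(decision)
    return decision

class Participant:
    def prepare(self): return "yes"
    def deliver(self, decision): self.outcome = decision

class BackupSite:
    def record(self, decision): self.decision = decision

log = []
print(backup_commit([Participant(), Participant()], BackupSite(), log))  # -> commit
```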
{"title":"Reducing the blocking in two-phase commit protocol employing backup sites","authors":"P. K. Reddy, M. Kitsuregawa","doi":"10.1109/COOPIS.1998.706315","DOIUrl":"https://doi.org/10.1109/COOPIS.1998.706315","url":null,"abstract":"In distributed database systems (DDBSs), a transaction blocks during two-phase commit (2PC) processing if the coordinator site fails and at the same time some participant site has declared itself ready to commit the transaction. The blocking phenomena reduces availability of the system since the blocked transactions keep all the resources until they receive the final command from the coordinator after its recovery. To remove the blocking problem in 2PC protocol, the three phase commit (3PC) protocol was proposed. Although 3PC protocol eliminates the blocking problem, it involves an extra round of message transmission, which further degrades the performance of DDBSs. We propose a backup commit (BC) protocol by including a backup phase to 2PC protocol. In this, one backup site is attached to each coordinator site. After receiving responses from all participants in the first phase, the coordinator communicates its decision only to its backup site in the backup phase. Afterwards, it sends a final decision to participants. When blocking occurs due to the failure of the coordinator site, the participant sites consult the coordinator's backup site and follow termination protocols. In this way, BC protocol achieves a non-blocking property in most of the coordinator site failures. However, in the worst case, the blocking can occur in BC protocol when both the coordinator and its backup site fail simultaneously. If such a rare case occurs, the participants wait until the recovery of either the coordinator site or the backup site. BC protocol suits DDBS environments in which sites fail frequently and messages take longer delivery time. Through simulation experiments it is shown that BC protocol exhibits superior throughput and response time performance over 3PC protocol and performs closely with 2PC protocol.","PeriodicalId":106219,"journal":{"name":"Proceedings. 3rd IFCIS International Conference on Cooperative Information Systems (Cat. No.98EX122)","volume":"408 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116668856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A meta-wrapper for scaling up to multiple autonomous distributed information sources
Pub Date: 1998-08-20 | DOI: 10.1109/COOPIS.1998.706193
Maria-Esther Vidal, L. Raschid, J. Gruser
Current mediator and wrapper architectures do not have the flexibility to scale to multiple wrapped sources, where some sources may be redundant and some may provide incomplete answers to a query. We propose a meta-wrapper component that handles multiple wrapped sources in a particular domain, where the sources provide related information. The meta-wrapper makes these sources transparent to the mediator and provides a single meta-wrapper interface for all of them. Source descriptions specify the content and query capability of the sources; they are used to determine the meta-wrapper interface and to decide which queries from a mediator can be accepted. Sources are partitioned into equivalence classes based on their descriptions. These equivalence classes are partially ordered, and the lattices corresponding to these orderings are used to identify the relevant sources for a query submitted by the mediator. If sources are redundant, the meta-wrapper identifies alternate sources for the query. A meta-wrapper cost model is then used to select among alternate relevant sources and choose the best plan.
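One way to picture the selection step: group sources whose descriptions coincide into equivalence classes, keep only the classes relevant to the query, and let the cost model pick among the redundant members of each class. The sketch below reduces source descriptions to attribute sets and uses made-up costs; it is an illustration of the idea, not the paper's lattice construction.

```python
from collections import defaultdict

# Source descriptions reduced to the attributes each source can answer, plus a cost.
sources = {
    "srcA": ({"title", "price"}, 1.0),
    "srcB": ({"title", "price"}, 0.4),   # same description as srcA -> redundant, but cheaper
    "srcC": ({"title", "review"}, 0.7),
}

def plan(query_attrs):
    """Group equivalent sources, keep the relevant classes, pick the cheapest member of each."""
    classes = defaultdict(list)
    for name, (attrs, cost) in sources.items():
        classes[frozenset(attrs)].append((cost, name))
    return [min(members)[1]                 # cost model: cheapest source in the class
            for attrs, members in classes.items()
            if query_attrs & attrs]         # relevance: description overlaps the query

print(plan({"price"}))  # -> ['srcB']
```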
{"title":"A meta-wrapper for scaling up to multiple autonomous distributed information sources","authors":"Maria-Esther Vidal, L. Raschid, J. Gruser","doi":"10.1109/COOPIS.1998.706193","DOIUrl":"https://doi.org/10.1109/COOPIS.1998.706193","url":null,"abstract":"Current mediator and wrapper architectures do not have the flexibility to scale to multiple wrapped sources, where some sources may be redundant, and some sources may provide incomplete answers to a query. We propose a meta-wrapper component which is capable of handling multiple wrapped sources, in a particular domain, where the multiple sources provide related information. The meta-wrapper makes these sources transparent to the mediator and provides a single meta-wrapper interface for all these sources. Source descriptions specify the content and query capability of the sources. These are used to determine the meta-wrapper interface and to decide which queries from a mediator can be accepted. Sources are partitioned into equivalence classes, based on their descriptions. These equivalence classes are partially ordered, and the lattices that correspond to these orderings are used to identify the relevant sources for a query submitted by the mediator. If there is redundancy of the sources, the meta-wrapper identifies alternate sources for the query. A meta-wrapper cost model is then used to select among alternate relevant sources and choose the best plan.","PeriodicalId":106219,"journal":{"name":"Proceedings. 3rd IFCIS International Conference on Cooperative Information Systems (Cat. No.98EX122)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131459247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A dynamic and adaptive cache retrieval scheme for mobile computing systems
Pub Date: 1998-08-20 | DOI: 10.1109/COOPIS.1998.706203
Wen-Chih Peng, Ming-Syan Chen
Recent advances in wireless technologies have made mobile computing a reality. To provide good quality of service to mobile users and improve system performance, mobile computing systems usually adopt a distributed server architecture. As users move to a new service area, the new server is expected to take over the execution of running programs for mobile users so as to reduce the communication overhead of the mobile system; this procedure is referred to as service handoff. Note that when service handoff occurs, the cache of the new server does not contain any data entries that were accessed by prior transactions, so the new server loses the advantage of cache access. To remedy this, the authors examine several cache retrieval schemes to improve the efficiency of cache retrieval. In particular, they analyze the impact of using a coordinator buffer on the overall performance of cache retrieval. Moreover, in light of the properties of transactions (i.e., temporal locality of data access among transactions), they devise a dynamic and adaptive cache retrieval scheme (DAR) that adopts appropriate cache retrieval methods based on criteria devised to deal with the service handoff situation in a mobile computing environment. The performance of these cache retrieval schemes is analyzed, and a system simulator is developed to validate the results.
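The adaptive part can be pictured as a per-handoff decision rule: when recent transactions show enough temporal locality, the new server fetches missing entries from the coordinator buffer or the previous server's cache; otherwise it goes straight to the database server. The thresholds and names in this sketch are invented for illustration and are not the criteria defined in the paper.

```python
def choose_cache_source(reuse_ratio: float, coordinator_hit_rate: float) -> str:
    """Decide where the new server should fetch cache entries from after a service handoff."""
    LOCALITY_THRESHOLD = 0.5  # invented: is there enough inter-transaction data reuse?
    if reuse_ratio >= LOCALITY_THRESHOLD:
        # Recent transactions touch similar data, so warm caches are worth consulting.
        return "coordinator_buffer" if coordinator_hit_rate >= 0.3 else "previous_server_cache"
    return "database_server"  # little locality: cached entries are unlikely to help

print(choose_cache_source(reuse_ratio=0.8, coordinator_hit_rate=0.6))  # -> coordinator_buffer
print(choose_cache_source(reuse_ratio=0.2, coordinator_hit_rate=0.6))  # -> database_server
```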
{"title":"A dynamic and adaptive cache retrieval scheme for mobile computing systems","authors":"Wen-Chih Peng, Ming-Syan Chen","doi":"10.1109/COOPIS.1998.706203","DOIUrl":"https://doi.org/10.1109/COOPIS.1998.706203","url":null,"abstract":"Recent advances in wireless technologies have made the mobile computing a reality. In order to provide services of good quality to mobile users and improve the system performance, the mobile computing system is usually of a distributed server architecture. As users move to a new service area, the new server is expected to take over the execution of running programs for mobile users so as to reduce the communication overhead of the mobile system. This procedure is referred to as service handoff. Note that when service handoff occurs, the cache of the new sewer does not contain any data entry that was accessed by prior transactions and the new server will thus lose its advantages for cache access. To remedy this, the authors examine several cache retrieval schemes to improve the efficiency of cache retrieval. In particular they analyze the impact of using a coordinator buffer to improve the overall performance of cache retrieval. Moreover, in light of the properties of transactions (i.e., temporal locality of data access among transactions), they devise a dynamic and adaptive cache retrieval scheme (DAR) that can adopt proper cache methods based on some specific criteria devised to deal with the service handoff situation in a mobile computing environment. The performance of these cache retrieval schemes is analyzed and a system simulator is developed to validate the results.","PeriodicalId":106219,"journal":{"name":"Proceedings. 3rd IFCIS International Conference on Cooperative Information Systems (Cat. No.98EX122)","volume":"241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128973497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Management of work in progress in relational information systems
Pub Date: 1998-08-20 | DOI: 10.1109/COOPIS.1998.706194
R. Ahmed, U. Dayal
In many complex applications, there is a need to manage work-in-progress. Typically, this requires that each user has a private and non-volatile workspace, in which multiple pieces of work are completed and stored until the corresponding data changes are ready to be made globally accessible to other users. These requirements reflect current work practices in paper-based systems, which ensure security, persistence, privacy, and accountability for work-in-progress. This paper describes a technique for implementing work-in-progress that requires no extensions to existing relational database management systems. The semantics of private workspaces and work-in-progress are implemented by augmenting the database schema and modifying query and update operations against this augmented schema.
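The flavour of the technique can be shown with generated SQL: the base table gains workspace columns, and each query is rewritten so a user sees the public rows plus the rows of their own workspace. Table and column names below are illustrative assumptions, not the paper's schema.

```python
def augment_schema(table: str) -> str:
    """Add the columns that mark a row as work-in-progress inside a private workspace."""
    return (f"ALTER TABLE {table} "
            f"ADD COLUMN workspace_id INTEGER, "
            f"ADD COLUMN wip_status VARCHAR(10) DEFAULT 'public';")

def rewrite_select(table: str, workspace_id: int) -> str:
    """Rewrite a plain SELECT so a user sees public data plus their own work-in-progress."""
    return (f"SELECT * FROM {table} "
            f"WHERE wip_status = 'public' OR workspace_id = {workspace_id};")

print(augment_schema("orders"))
print(rewrite_select("orders", workspace_id=42))
```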
{"title":"Management of work in progress in relational information systems","authors":"R. Ahmed, U. Dayal","doi":"10.1109/COOPIS.1998.706194","DOIUrl":"https://doi.org/10.1109/COOPIS.1998.706194","url":null,"abstract":"In many complex applications, there is a need to manage work-in-progress. Typically, this requires that each user has a private and non-volatile workspace, in which multiple pieces of work are finished and stored before it is appropriate for these data changes to be made globally accessible to other users. These requirements reflect current work practices in paper-based systems, which ensure security, persistence, privacy and accountability for work-in-progress. This paper describes a technique for implementing work-in-progress that requires no extensions to existing relational database management systems. The semantics of private workspace and work-in-progress are implemented by augmenting the database schema and modifying query and update operations against this augmented schema.","PeriodicalId":106219,"journal":{"name":"Proceedings. 3rd IFCIS International Conference on Cooperative Information Systems (Cat. No.98EX122)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129132955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integration of statecharts
Pub Date: 1998-08-20 | DOI: 10.1109/COOPIS.1998.706285
H. Frank, Johann Eder
View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoints of different user groups or parts of the system, resulting in a set of external models. In a second step, these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object-oriented data model, concentrating on the integration of the behaviour of objects, which is not supported by existing view integration methods.
{"title":"Integration of statecharts","authors":"H. Frank, Johann Eder","doi":"10.1109/COOPIS.1998.706285","DOIUrl":"https://doi.org/10.1109/COOPIS.1998.706285","url":null,"abstract":"View integration is an effective technique for developing large conceptual database models. The universe of discourse is described from the viewpoint of different user groups or parts of the system resulting in a set of external models. In a second step these models have to be integrated into a common conceptual database schema. In this work we present a new methodology for integrating views based upon an object oriented data model, where we concentrate on the integration of the behaviour of objects, which is not supported by existing view integration methods.","PeriodicalId":106219,"journal":{"name":"Proceedings. 3rd IFCIS International Conference on Cooperative Information Systems (Cat. No.98EX122)","volume":"29 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116455954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}