Handling Heterogeneous Academic Curricula. R. Hackelbusch. DEXA'06. DOI: 10.1109/DEXA.2006.65
Academic programs follow very heterogeneous curricula. A first roadmap for overcoming this still unsolved problem was laid out in the Bologna process. In this paper, we propose a framework addressing this goal: it enables students to integrate classes from other institutions into their own curriculum, even when the curriculum at the other institution differs. To allow class substitutions, we propose the development of an ontology describing the required metadata. Text-indexing methods as well as case-based reasoning techniques are used to identify interchangeable classes that are organizationally and semantically equivalent. Interoperability between participating institutions is accomplished using Web services and peer-to-peer networks, and existing learning management systems can be integrated. As a result, it becomes transparent to students where to take their classes in order to reach their degree. Furthermore, educational institutions can broaden their offerings by integrating additional programs.

Implementing the Omega Failure Detector in the Crash-Recovery Model with Partial Connectivity and/or Synchrony. M. Larrea, Cristian Martín. DEXA'06. DOI: 10.1109/DEXA.2006.71
Unreliable failure detectors are mechanisms that provide information about process failures and thereby make it possible to solve several problems in asynchronous systems, e.g., consensus. A particular class of failure detectors, Omega, provides an eventual leader election functionality. Recently, an algorithm implementing Omega with unknown membership and weak synchrony was proposed by Jimenez et al. That work assumes a crash failure model and a system in which every process has a direct communication link with every other process. In this paper, we adapt this algorithm to the crash-recovery failure model and show that it also works in systems with only partial connectivity and/or synchrony.

Nondeterminism in ORBs: The Perception and the Reality. Joseph G. Slember, P. Narasimhan. DEXA'06. DOI: 10.1109/DEXA.2006.99
Nondeterminism is a source of problems for distributed replication because it makes it difficult to keep replicas consistent as they execute, process invocations, and modify their internal states. Even if a middleware application is completely deterministic, the underlying middleware, e.g., the ORB, can remain a source of nondeterminism. This paper presents our analysis of an open-source ORB from the viewpoint of nondeterminism. Our approach identifies the various sources of nondeterminism within the ORB. Our results demonstrate that while ORBs can contain several apparently nondeterministic system calls and functions, only a fraction of them manifest as actual nondeterminism and pose a threat to replica consistency.

Processing of Multiple Long-Running Queries in Large-Scale Geo-Data Repositories. W. Tok, S. Bressan. DEXA'06. DOI: 10.1109/DEXA.2006.117
One of the main challenges of large-scale information mediation and warehousing is the efficient and resource-effective processing of continuous queries. Continuous queries monitor streams of incoming data and produce results on the fly. They are usually long-running, yet need to be removed from the system when they become obsolete. We present a framework and the corresponding algorithms and data structures for the efficient evaluation of multiple long-running spatial queries over unbounded streams of spatial data and for the management of obsolete queries. Using both real-life and synthetic datasets and workloads, we show that our proposed approach achieves significant improvements over standard approaches.

Interaction Styles for Service Discovery in Mobile Business Applications. M. Aleksy, C. Atkinson, P. Bostan, T. Butter, M. Schader. DEXA'06. DOI: 10.1109/DEXA.2006.75
As the power of mobile devices continues to grow, and the range of resources accessible via wireless networks expands, there is an increasing need to offer services to users in a customized way, based on their immediate desires and context. At the same time, to construct such applications in a cost-effective and reusable way, there is also growing pressure on mobile application developers to structure their systems in terms of a service-oriented architecture. However, these two goals are not always compatible. In this paper we present a new set of architectural components and principles which allow context-sensitive mobile business applications to be assembled in a highly flexible and reuse-oriented way based on the principles of SOA. We present the four main configuration patterns and interaction styles which this architecture supports and evaluate their pros and cons from the perspective of different infrastructure and usability issues such as bandwidth usage, latency needs, pricing, and privacy. Finally, we discuss which configuration to use in which circumstances.

An Architecture for an XML-Template Engine Enabling Safe Authoring. Falk Hartmann. DEXA'06. DOI: 10.1109/DEXA.2006.23
Existing template languages and engines do not give guarantees about the results of the instantiation process. As this is a possible source of errors and a reason for additional testing effort, the proposed architecture, which allows guarantees about the instantiated template, will increase the quality of applications incorporating template engines, such as Web content management systems and UML tools. The article concludes with an implementation of this architecture for the generation of XML documents using template techniques.

On a Qualitative Approximate Inclusion -- Application to the Division of Fuzzy Relations. P. Bosc, O. Pivert. DEXA'06. DOI: 10.1109/DEXA.2006.100
This paper is devoted to an extension of the inclusion operator. The logical view of inclusion between fuzzy sets rests on the use of fuzzy implications. The idea suggested in this paper is to relax R-implications in order to tolerate low-intensity exceptions and thus obtain a qualitative approximate inclusion. A concrete use of this type of inclusion is illustrated in the area of databases, with the approximate division of relations.

Architecture Evaluation for Distributed Auto-ID Systems. H. Do, Jürgen Anke, Gregor Hackenbroich. DEXA'06. DOI: 10.1109/DEXA.2006.30
Auto-ID technologies allow capturing the time and location of products in the supply chain for tracking and tracing. This paves the way for a variety of business applications, such as anti-counterfeiting, pedigree, and genealogy, which analyze the trace history of products to detect patterns or anomalies in the supply chain. While these applications have gained considerable interest recently, further work is needed towards the integration of event data from heterogeneous auto-ID nodes in order to obtain the complete trace history of products of interest. As a first step, we perform an architectural study of interoperable auto-ID systems and present the results in this paper. We first review established techniques for data integration and data sharing as well as relevant industrial efforts. We then clarify the requirements that need to be addressed by an auto-ID network. Finally, we discuss four possible architecture alternatives for implementing interoperability in such a network and comparatively evaluate the approaches against the identified requirements.

On the Problem of Coupling Java Algorithms and XML Parsers (Invited Paper). G. Psaila. DEXA'06. DOI: 10.1109/DEXA.2006.102
The joint use of Java and XML is now a fact of life for new developments, even in demanding contexts. A recent line of research addresses how to improve techniques for coupling Java programs with XML parsers and APIs. This paper briefly surveys the current state of the art of this young research area. Two perspectives are considered: efficiency (i.e., improvement of parsing performance) and effectiveness (development of techniques that speed up the application development process).

A Classification for Comparing Standardized XML Data. L. Strömbäck. DEXA'06. DOI: 10.1109/DEXA.2006.5
The increasing amount of XML data provided via the Web creates a strong need for efficient data exchange and import. This has led to increasing interest in XML standards within a large number of areas. However, in most application domains there are several competing standards capturing the same kind of information. This work presents a method for the classification and comparison of standards within an area. The method can be applied to XML standards in any domain. We report on the situation in two different areas, molecular interactions and digital television, and use our method to compare the standards within these two domains. The classification indicates how similar the standards are in terms of information content and structure. This information is useful for deciding which kinds of methods are suitable for automatic matching and for the efficient development of tools for importing standardized data.
