Various kinds of data, such as news articles and sensor readings, are continuously generated on the network in the form of XML. Processing systems (e.g., systems for selective dissemination of information and notification) must evaluate many filters against every XML document. To this end, Gupta and Suciu proposed an automaton called the XPush machine, which can efficiently evaluate a large number of XPath filters, each with many predicates, on a stream of XML documents. The XPush machine is constructed by creating an AFA (Alternating Finite Automaton) for each filter and then transforming the set of AFAs into a single DPDA (Deterministic PushDown Automaton). However, since the XPush machine inherently cannot be updated partially, adding even a single filter necessitates recalculating (i.e., reconstructing) the XPush machine as a whole; in other words, the cost of updating the automaton grows with the total number of AFAs (or filters). In this paper, we propose and evaluate an integrated XPush machine, which enables incremental updates by constructing the whole machine from a set of sub-XPush machines. The evaluation results demonstrate that AFAs can be exchanged partially and efficiently without significantly affecting the state transition tables.
H. Takekawa and H. Ishikawa, "Incrementally-Updatable Stream Processors for XPath Queries based on Merging Automata via Ordered Hash-keys," 18th International Workshop on Database and Expert Systems Applications (DEXA 2007), Sep. 2007. doi:10.1109/DEXA.2007.13
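The construction described above can be illustrated with a deliberately simplified sketch (predicates omitted, filters reduced to linear child paths; all names and data are ours, not the paper's): each filter becomes a tiny automaton, and the merged machine runs lazy subset construction over a SAX-like event stream, with a stack supplying the "pushdown" behavior.

```python
# Simplified model: each XPath filter is a linear child path, compiled to
# states (filter_id, depth). The merged machine performs lazy subset
# construction over SAX-like events, pushing state sets on a stack so that
# close-tags restore the parent context.

def start_states(filters):
    return frozenset((f, 0) for f in filters)

def step(filters, state_set, tag):
    nxt = set()
    for f, depth in state_set:
        path = filters[f]
        if depth < len(path) and path[depth] == tag:
            nxt.add((f, depth + 1))
    return frozenset(nxt)

def matches(filters, events):
    """events: list of ('open', tag) / ('close', tag) pairs."""
    table = {}                      # shared transition table, built lazily
    stack = [start_states(filters)]
    matched = set()
    for kind, tag in events:
        if kind == 'open':
            key = (stack[-1], tag)
            if key not in table:
                table[key] = step(filters, stack[-1], tag)
            stack.append(table[key])
            matched.update(f for f, d in stack[-1] if d == len(filters[f]))
        else:
            stack.pop()
    return matched

# Two filters, roughly /a/b and /a/c, over the document <a><b/><c/></a>
filters = {"f1": ["a", "b"], "f2": ["a", "c"]}
events = [('open', 'a'), ('open', 'b'), ('close', 'b'),
          ('open', 'c'), ('close', 'c'), ('close', 'a')]
assert matches(filters, events) == {"f1", "f2"}
```

Note how the keys of the shared `table` are state sets drawn from *all* filters at once: in this toy model, adding a new filter changes the state space and invalidates the table, which mirrors the monolithic-reconstruction problem the integrated XPush machine addresses.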
XML data are growing in many areas, including the Internet and public documentation, and change dynamically while queries are being processed. Many techniques have been studied to speed up query performance over XML data. In this paper, based on the XML structure, we analyze query patterns and propose a data mining technique for extracting similar query patterns issued by users. To speed up performance, we apply the FP-growth algorithm to mine similar query patterns over the XML data structure. We confirmed that the proposed method, which applies the FP-growth algorithm to XML query subtrees, outperforms the Apriori algorithm, and that it returns fast results for frequently recurring queries.
M. Gu, J. Hwang, and K. Ryu, "Frequent XML Query Pattern Mining based on FP-Tree," 18th International Workshop on Database and Expert Systems Applications (DEXA 2007), Sep. 2007. doi:10.1109/DEXA.2007.78
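As background for readers unfamiliar with it, here is a compact FP-growth sketch over toy "transactions" of element tags appearing in user queries (the data and minimum support are illustrative, not from the paper):

```python
from collections import defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count, self.children = 0, {}

def build_tree(transactions, min_support):
    counts = defaultdict(int)
    for t in transactions:
        for item in set(t):
            counts[item] += 1
    freq = {i: c for i, c in counts.items() if c >= min_support}
    root, header = Node(None, None), defaultdict(list)
    for t in transactions:
        node = root
        # insert items in descending global frequency (FP-tree ordering)
        for item in sorted((i for i in set(t) if i in freq),
                           key=lambda i: (-freq[i], i)):
            if item not in node.children:
                node.children[item] = Node(item, node)
                header[item].append(node.children[item])
            node = node.children[item]
            node.count += 1
    return root, header, freq

def mine(header, freq, min_support, suffix=()):
    patterns = {}
    for item in sorted(freq, key=lambda i: freq[i]):  # least frequent first
        pattern = suffix + (item,)
        patterns[pattern] = freq[item]
        cond = []   # conditional pattern base: prefix paths leading to item
        for node in header[item]:
            path, p = [], node.parent
            while p is not None and p.item is not None:
                path.append(p.item)
                p = p.parent
            if path:
                cond.extend([path] * node.count)
        if cond:
            _, h2, f2 = build_tree(cond, min_support)
            patterns.update(mine(h2, f2, min_support, pattern))
    return patterns

# Toy "transactions": the sets of element tags used by three user queries
queries = [["book", "title"], ["book", "author"], ["book", "title", "author"]]
_, header, freq = build_tree(queries, min_support=2)
patterns = mine(header, freq, min_support=2)
assert patterns[("book",)] == 3 and patterns[("author", "book")] == 2
```

Unlike Apriori, which regenerates and rescans candidate sets at every level, this mines all frequent patterns in two passes over the data plus recursion on small conditional bases, which is the source of the speed-up the paper reports.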
Due to the lack of efficient native XML database management systems, XML data manipulation and query evaluation can be resource-consuming and represent a bottleneck for several computationally intensive applications. To overcome these limitations, a possible solution consists in computing synopsis data structures from XML databases, i.e., compressed representations providing a "succinct" description of the original databases while ensuring low computational overhead and high accuracy for many XML processing tasks. Such data structures are particularly useful for selectivity estimation and approximate query answering. However, while synopsis data structures have been widely applied to relational and multidimensional data, comparable support for XML data is still lacking. Motivated by these considerations, in this paper we discuss the models and issues of synopsis data structures for XML databases, and we complete our analysis by selecting and discussing future perspectives for this research field.
A. Bonifati and A. Cuzzocrea, "Synopsis Data Structures for XML Databases: Models, Issues, and Research Perspectives," 18th International Workshop on Database and Expert Systems Applications (DEXA 2007), Sep. 2007. doi:10.1109/DEXA.2007.100
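A minimal example of the idea, assuming the simplest possible synopsis (our own illustration, not a structure from the paper): exact counts per root-to-node tag path, from which selectivity can be read off. A real synopsis would prune or aggregate these counts to fit a space budget and answer approximately.

```python
from collections import Counter

def path_synopsis(events):
    """Count, for every root-to-node tag path, how many elements carry it."""
    counts, stack = Counter(), []
    for kind, tag in events:
        if kind == 'open':
            stack.append(tag)
            counts['/' + '/'.join(stack)] += 1
        else:
            stack.pop()
    return counts

def selectivity(counts, path):
    # exact here; a space-bounded synopsis would prune rare paths and
    # return an estimate instead
    return counts.get(path, 0)

# <a><b/><b/></a>
events = [('open', 'a'), ('open', 'b'), ('close', 'b'),
          ('open', 'b'), ('close', 'b'), ('close', 'a')]
summary = path_synopsis(events)
assert selectivity(summary, '/a/b') == 2 and selectivity(summary, '/a/x') == 0
```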
Recent advances in m-commerce have given rise to usage scenarios with communication restrictions. These scenarios create new security challenges that protocol designers must consider in order to achieve the same security capabilities as protocols designed for mobile payment systems in a 'full connectivity' scenario (where all entities can exchange messages with each other without intermediaries). In this paper, we propose an anonymous payment protocol for a client-centric mobile scenario in which the merchant has no direct communication with the acquirer, owing to the absence of Internet access in her infrastructure and the cost and inconvenience of alternative communication technologies. The proposed protocol uses symmetric-key operations, which require low computational power and can be processed much faster than asymmetric ones. As a result, our proposal illustrates how a merchant can sell goods securely even when she cannot communicate directly with the acquirer.
Jesús Téllez Isaac and J. M. Sierra, "An Anonymous Account-Based Mobile Payment Protocol for a Restricted Connectivity Scenario," 18th International Workshop on Database and Expert Systems Applications (DEXA 2007), Sep. 2007. doi:10.1109/DEXA.2007.132
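To illustrate why symmetric-key primitives suit this setting, here is a small sketch using HMAC-SHA256. The key names and message fields are ours, not the protocol's actual messages; the point is only that a tag computable in microseconds lets the issuer authenticate an order relayed through an untrusted party.

```python
import hmac, hashlib, json

def tag(key, payload):
    """HMAC-SHA256 over a canonically serialized message."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

# Hypothetical long-term key shared between client and issuer
k_client_issuer = b'client-issuer-shared-key'

# Hypothetical payment order; the field names are illustrative
order = {"merchant": "M1", "amount": "10.00", "nonce": "8f2a"}
t = tag(k_client_issuer, order)

# The merchant (or any intermediary) cannot verify or forge the tag;
# only the issuer, holding the same key, can check it on arrival.
assert hmac.compare_digest(t, tag(k_client_issuer, order))
assert tag(b'some-other-key', order) != t
```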
The term "deep Web" refers to Web pages that are not accessible to search engines, e.g., because those Web pages are dynamically generated in response to queries through Web forms or Web services. Existing automated Web crawlers cannot index these pages, so they remain hidden from Web search engines. Our goal is to annotate such deep Web services (i.e., the content-generation interfaces of hidden Web sources) with semantic indexing by constructing domain-specific ontologies that represent the contents of deep Web sources. Fully automatic derivation of ontologies from Web sources without human review remains a challenging research issue. We present a novel approach to automatically building a large yet domain-specific ontology by interweaving sub-taxonomies of WordNet with domain-specific information extracted from deep Web service pages. Our algorithms extract domain concepts from deep Web sources, which are augmented with concepts and relationships from WordNet to construct ontology fragments; structurally, these are directed acyclic graphs (DAGs). An iterative process of extracting WordNet concepts and relationships and bridging concept gaps ties together disparate domain concepts and ontology fragments into one ontology. Using eight domains (airfares, jobs, etc.) from a well-known test-bed, our algorithms constructed an ontology of 1692 concepts from deep Web sources and 4434 concepts from WordNet. This ontology is expressed in OWL format to support semantic Web searches.
Y. J. An, J. Geller, Yi-Ta Wu, and Soon Ae Chun, "Automatic Generation of Ontology from the Deep Web," 18th International Workshop on Database and Expert Systems Applications (DEXA 2007), Sep. 2007. doi:10.1109/DEXA.2007.43
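The merging of ontology fragments into a single DAG can be sketched as follows (concept names are illustrative, not taken from the paper's test-bed): fragments are maps from a child concept to its is-a parents, merging unions the edge sets, and a cycle check confirms the result is still a DAG.

```python
# Ontology fragments as DAGs: child concept -> set of parent (is-a) concepts.
# All concept names below are illustrative.

def merge_fragments(*fragments):
    merged = {}
    for frag in fragments:
        for child, parents in frag.items():
            merged.setdefault(child, set()).update(parents)
    return merged

def is_dag(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    def visit(n):
        color[n] = GRAY
        for p in graph.get(n, ()):
            c = color.get(p, WHITE)
            if c == GRAY or (c == WHITE and not visit(p)):
                return False        # back edge => cycle
        color[n] = BLACK
        return True
    return all(visit(n) for n in graph if color[n] == WHITE)

# Fragment mined from a deep Web source (airfare domain) ...
f_source = {"round_trip": {"trip"}, "one_way": {"trip"}}
# ... bridged upward through a WordNet sub-taxonomy
f_wordnet = {"trip": {"travel"}, "travel": {"event"}}

onto = merge_fragments(f_source, f_wordnet)
assert is_dag(onto) and onto["round_trip"] == {"trip"}
```

The bridging step in the paper plays the role of `f_wordnet` here: it supplies the intermediate WordNet concepts that connect otherwise disjoint domain fragments into one graph.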
Context identification is an important feature of goal modelling techniques; it helps in understanding the wider organisational system during requirements engineering. The goal of this paper is to identify domain context for the process-driven requirements modelling technique MAP. We present our preliminary research on adding domain context to MAP using Jackson's context diagrams. The resulting model gives a clear picture of the domain entities involved in MAP processes. We validate our approach on a case study dealing with the point-of-sale system of Seven-Eleven Japan (SEJ).
A. Babar, Karl Cox, V. Tosic, S. Bleistein, and J. Verner, "Identifying Domain Context for the Intentional Modelling Technique MAP," 18th International Workshop on Database and Expert Systems Applications (DEXA 2007), Sep. 2007. doi:10.1109/DEXA.2007.20
Creating security enforcement policies is a complex design problem that is often tackled through a native GUI. A security policy for an enterprise must span native applications and be expressed in a format that best captures the design problem. This paper examines the patterns that emerge when developing enterprise-spanning security policy using a language-based approach.
D. Thomsen, "Patterns in Security Enforcement Policy Development," 18th International Workshop on Database and Expert Systems Applications (DEXA 2007), Sep. 2007. doi:10.1109/DEXA.2007.146
The mobile-P2P paradigm is becoming increasingly popular. Existing mobile-P2P solutions largely do not consider economic incentive models for enticing peer participation, for discouraging free-riders, or for effectively handling mobile resource constraints such as energy. This paper presents an executive summary of existing solutions and an overview of some of the important issues in handling problems in mobile-P2P networks using economic models. We also present our perspectives on building 'real' mobile-P2P applications with economic models.
Anirban Mondal, S. Madria, and M. Kitsuregawa, "Research Issues and Overview of Economic Models in Mobile-P2P Networks," 18th International Workshop on Database and Expert Systems Applications (DEXA 2007), Sep. 2007. doi:10.1109/DEXA.2007.128
Our ongoing work aims at defining an ontology-centered approach for building expertise models for the CommonKADS methodology. This approach relies on a core problem-solving ontology (OntoKADS), which extends a foundational ontology (DOLCE) and a core ontology in the domain of semiotics (I&DA). In this article, our presentation of OntoKADS focuses on "knowledge roles": the modeling primitive situated at the interface between domain knowledge and reasoning, whose ontological status is still much debated. The main contribution of this paper is a coherent, global ontological framework that accounts for this primitive. We also show how this novel characterization allows new rules to be defined for the construction of expertise models.
Sabine Bruaux, G. Kassel, and Gilles Morel, "A Clarification of the Ontological Status of \"Knowledge Roles\"," 18th International Workshop on Database and Expert Systems Applications (DEXA 2007), Sep. 2007. doi:10.1109/DEXA.2007.82
This work deals with evolutionary data processing, specifically the optimization of chaos control by means of evolutionary algorithms (EAs). The main aim is to show that evolutionary algorithms are capable of optimizing chaos control. The one-dimensional logistic equation and the two-dimensional Hénon map were used as models of deterministic chaotic systems. The self-organizing migrating algorithm (SOMA) was used in four versions; for each version, simulations were repeated several times to check the robustness of the method.
I. Zelinka, R. Šenkeřík, and E. Navratil, "Optimization of Chaos Control by Means of Evolutionary Algorithms," 18th International Workshop on Database and Expert Systems Applications (DEXA 2007), Sep. 2007. doi:10.1109/DEXA.2007.64
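A rough sketch of the optimization setting (all constants are illustrative, and plain random search stands in for SOMA, which instead migrates a population of candidate solutions toward the current leader): a delayed-feedback control term is added to the logistic map at r = 4, and the cost function measures deviation of the late orbit from the unstable fixed point x* = 0.75.

```python
import random

def controlled_logistic(K, r=4.0, x0=0.6, n=200):
    """Logistic map x' = r*x*(1-x) with a delayed-feedback control term
    K*(x_prev - x); returns the orbit (states clamped to [0, 1])."""
    xs = [x0, r * x0 * (1 - x0)]
    for _ in range(n):
        x, x_prev = xs[-1], xs[-2]
        x_new = r * x * (1 - x) + K * (x_prev - x)
        xs.append(min(max(x_new, 0.0), 1.0))
    return xs

def cost(K):
    # deviation of the late orbit from the fixed point x* = 1 - 1/r = 0.75
    return sum(abs(x - 0.75) for x in controlled_logistic(K)[-50:])

# Random search as a stand-in optimizer; the EA in the paper would
# minimize the same kind of cost function.
random.seed(1)
best_K = min((random.uniform(-1.0, 1.0) for _ in range(300)), key=cost)
assert cost(best_K) < cost(0.0)   # control found; K = 0 leaves the map chaotic
```

A short stability check explains why this works: at x* = 0.75 the uncontrolled slope is r(1 - 2x*) = -2, so the fixed point is unstable, while the feedback term shifts the linearized eigenvalues inside the unit circle for suitable negative K, which the search discovers.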