Priority mechanisms for OLTP and transactional Web applications
David T. McWherter, Bianca Schroeder, A. Ailamaki, Mor Harchol-Balter
Pub Date: 2004-03-30. DOI: 10.1109/ICDE.2004.1320025
Transactional workloads are a hallmark of modern OLTP and Web applications, ranging from electronic commerce and banking to online shopping. Often, the database at the core of these applications is the performance bottleneck. Given the limited resources available to the database, transaction execution times can vary wildly as they compete and wait for critical resources. As the competitor is "only a click away", valuable (high-priority) users must be ensured consistently good performance via QoS and transaction prioritization. This paper analyzes and proposes prioritization for transactional workloads in traditional database systems (DBMS). This work first performs a detailed bottleneck analysis of resource usage by transactional workloads on commercial and noncommercial DBMS (IBM DB2, PostgreSQL, Shore) under a range of configurations. Second, this work implements and evaluates the performance of several preemptive and nonpreemptive DBMS prioritization policies in PostgreSQL and Shore. The primary contributions of this work include (i) understanding the bottleneck resources in transactional DBMS workloads and (ii) a demonstration that prioritization in traditional DBMS can provide 2x-5x improvement for high-priority transactions using simple scheduling policies, without expense to low-priority transactions.
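The abstract does not spell out the scheduling policies themselves; as a rough illustration of what a nonpreemptive prioritization policy inside a DBMS can look like, the sketch below implements a priority-aware lock wait queue (all class and variable names are hypothetical, and this is not the paper's implementation). Instead of granting a released lock in plain FIFO order, it always grants to the highest-priority waiter, FIFO within a priority class.

```python
import heapq
import itertools

class PriorityLockQueue:
    """A single lock whose wait queue is ordered by transaction
    priority (FIFO within a class) rather than plain FIFO.
    Illustrative sketch only."""

    HIGH, LOW = 0, 1                  # smaller value = higher priority

    def __init__(self):
        self._waiters = []            # heap of (priority, seqno, txn_id)
        self._seq = itertools.count() # seqno keeps FIFO order within a class
        self.holder = None

    def request(self, txn_id, priority):
        if self.holder is None:
            self.holder = txn_id      # lock is free: grant immediately
        else:
            heapq.heappush(self._waiters, (priority, next(self._seq), txn_id))

    def release(self):
        if self._waiters:
            _, _, txn = heapq.heappop(self._waiters)
            self.holder = txn         # highest-priority waiter goes next
        else:
            self.holder = None

q = PriorityLockQueue()
q.request("t1", PriorityLockQueue.LOW)   # t1 gets the free lock
q.request("t2", PriorityLockQueue.LOW)   # t2 waits
q.request("t3", PriorityLockQueue.HIGH)  # t3 waits with high priority
q.release()
assert q.holder == "t3"                  # t3 is granted ahead of t2
```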
{"title":"Priority mechanisms for OLTP and transactional Web applications","authors":"David T. McWherter, Bianca Schroeder, A. Ailamaki, Mor Harchol-Balter","doi":"10.1109/ICDE.2004.1320025","DOIUrl":"https://doi.org/10.1109/ICDE.2004.1320025","url":null,"abstract":"Transactional workloads are a hallmark of modern OLTP and Web applications, ranging from electronic commerce and banking to online shopping. Often, the database at the core of these applications is the performance bottleneck. Given the limited resources available to the database, transaction execution times can vary wildly as they compete and wait for critical resources. As the competitor is \"only a click away\", valuable (high-priority) users must be ensured consistently good performance via QoS and transaction prioritization. This paper analyzes and proposes prioritization for transactional workloads in traditional database systems (DBMS). This work first performs a detailed bottleneck analysis of resource usage by transactional workloads on commercial and noncommercial DBMS (IBM DB2, Post-greSQL, Shore) under a range of configurations. Second, this work implements and evaluates the performance of several preemptive and nonpreemptive DBMS prioritization policies in PostgreSQL and Shore. The primary contributions of this work include (i) understanding the bottleneck resources in transactional DBMS workloads and (ii) a demonstration that prioritization in traditional DBMS can provide 2x-5x improvement for high-priority transactions using simple scheduling policies, without expense to low-priority transactions.","PeriodicalId":358862,"journal":{"name":"Proceedings. 20th International Conference on Data Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129720236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A machine learning approach to rapid development of XML mapping queries
Atsuyuki Morishima, H. Kitagawa, Akira Matsumoto
Pub Date: 2004-03-30. DOI: 10.1109/ICDE.2004.1320004
We present XLearner, a novel tool that helps the rapid development of XML mapping queries written in XQuery. XLearner is novel in that it learns XQuery queries consistent with given examples (fragments) of intended query results. XLearner combines known learning techniques, incorporates mechanisms to cope with issues specific to the XQuery learning context, and provides a systematic way for the semiautomatic development of queries. We describe the XLearner system, present algorithms for learning various classes of XQuery, show that a minor extension gives the system practical expressive power, and report experimental results demonstrating that XLearner outputs reasonably complicated queries with only a small number of interactions with the user.
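XLearner's actual algorithms target classes of XQuery and interact with the user; as a minimal, purely hypothetical sketch of the learning-from-examples flavor, the snippet below infers a single path expression consistent with a set of example result nodes, wildcarding the steps on which the examples disagree.

```python
import xml.etree.ElementTree as ET

def tag_path(root, target):
    """Tag sequence from root down to target, or None if unreachable."""
    if root is target:
        return [root.tag]
    for child in root:
        sub = tag_path(child, target)
        if sub is not None:
            return [root.tag] + sub
    return None

def learn_path(root, examples):
    """Generalize one path expression consistent with every example:
    keep a step's tag where all examples agree, otherwise wildcard it."""
    paths = [tag_path(root, e) for e in examples]
    assert all(p and len(p) == len(paths[0]) for p in paths)
    steps = [col[0] if len(set(col)) == 1 else '*' for col in zip(*paths)]
    return '/' + '/'.join(steps)

doc = ET.fromstring(
    "<catalog><book><title>XML</title></book>"
    "<cd><title>Jazz</title></cd></catalog>")
print(learn_path(doc, doc.findall('.//title')))   # /catalog/*/title
```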
{"title":"A machine learning approach to rapid development of XML mapping queries","authors":"Atsuyuki Morishima, H. Kitagawa, Akira Matsumoto","doi":"10.1109/ICDE.2004.1320004","DOIUrl":"https://doi.org/10.1109/ICDE.2004.1320004","url":null,"abstract":"We present XLearner, a novel tool that helps the rapid development of XML mapping queries written in XQuery. XLearner is novel in that it learns XQuery queries consistent with given examples (fragments) of intended query results. XLearner combines known learning techniques, incorporates mechanisms to cope with issues specific to the XQuery learning context, and provides a systematic way for the semiautomatic development of queries. We describe the XLearner system. It presents algorithms for learning various classes of XQuery, shows that a minor extension gives the system a practical expressive power, and reports experimental results to demonstrate how XLearner outputs reasonably complicated queries with only a small number of interactions with the user.","PeriodicalId":358862,"journal":{"name":"Proceedings. 20th International Conference on Data Engineering","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131383335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximate selection queries over imprecise data
Iosif Lazaridis, S. Mehrotra
Pub Date: 2004-03-30. DOI: 10.1109/ICDE.2004.1319991
We examine the problem of evaluating selection queries over imprecisely represented objects. Such objects are used either because they are much smaller in size than the precise ones (e.g., compressed versions of time series), or as imprecise replicas of fast-changing objects across the network (e.g., interval approximations for time-varying sensor readings). It may be impossible to determine whether an imprecise object meets the selection predicate. Additionally, the objects appearing in the output are also imprecise. Retrieving the precise objects themselves (at additional cost) can be used to increase the quality of the reported answer. We allow queries to specify their own answer quality requirements. We show how the query evaluation system may do the minimal amount of work to meet these requirements. Our work makes two important contributions: first, it considers queries with set-based answers, rather than the approximate aggregate queries over numerical data examined in the literature; second, it aims to minimize the combined cost of both data processing and probe operations in a single framework. Thus, we establish that the answer accuracy/performance tradeoff can be realized in a more general setting than previously seen.
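As a toy sketch of this tradeoff (assuming interval-approximated objects and a simple guaranteed-recall quality metric, both simplifications of the paper's model, which also balances processing cost against probe cost), the function below answers a threshold selection while probing only as many uncertain objects as the quality requirement forces.

```python
def approximate_select(objects, threshold, min_recall, probe):
    """objects: {oid: (lo, hi)} interval approximations of exact values.
    Return ids certainly satisfying value > threshold, calling the
    costly probe(oid) -> exact value only for as many uncertain
    objects as the guaranteed-recall requirement demands."""
    answer, uncertain = [], []
    for oid, (lo, hi) in objects.items():
        if lo > threshold:
            answer.append(oid)        # interval entirely above: certain hit
        elif hi > threshold:
            uncertain.append(oid)     # interval straddles the predicate
        # else: entirely below the threshold, certain miss
    while uncertain:
        # every unprobed uncertain object might be a missed true answer
        if len(answer) / (len(answer) + len(uncertain)) >= min_recall:
            break
        oid = uncertain.pop()
        if probe(oid) > threshold:    # pay the probe cost to resolve it
            answer.append(oid)
    return answer

exact = {'a': 9, 'b': 5, 'c': 7}
approx = {'a': (8, 10), 'b': (4, 8), 'c': (6, 8)}
print(approximate_select(approx, 6, 0.9, exact.__getitem__))  # ['a', 'c']
```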
{"title":"Approximate selection queries over imprecise data","authors":"Iosif Lazaridis, S. Mehrotra","doi":"10.1109/ICDE.2004.1319991","DOIUrl":"https://doi.org/10.1109/ICDE.2004.1319991","url":null,"abstract":"We examine the problem of evaluating selection queries over imprecisely represented objects. Such objects are used either because they are much smaller in size than the precise ones (e.g., compressed versions of time series), or as imprecise replicas of fast-changing objects across the network (e.g., interval approximations for time-varying sensor readings). It may be impossible to determine whether an imprecise object meets the selection predicate. Additionally, the objects appearing in the output are also imprecise. Retrieving the precise objects themselves (at additional cost) can be used to increase the quality of the reported answer. We allow queries to specify their own answer quality requirements. We show how the query evaluation system may do the minimal amount of work to meet these requirements. Our work presents two important contributions: first, by considering queries with set-based answers, rather than the approximate aggregate queries over numerical data examined in the literature; second, by aiming to minimize the combined cost of both data processing and probe operations in a single framework. Thus, we establish that the answer accuracy/performance tradeoff can be realized in a more general setting than previously seen.","PeriodicalId":358862,"journal":{"name":"Proceedings. 20th International Conference on Data Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129171172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection and correction of conflicting source updates for view maintenance
Songting Chen, Jun Chen, Xin Zhang, Elke A. Rundensteiner
Pub Date: 2004-03-30. DOI: 10.1109/ICDE.2004.1320017
Data integration over multiple heterogeneous data sources has become increasingly important for modern applications. The integrated data is usually stored in materialized views for high availability and better performance. Such views must be maintained after the data sources change. In a loosely-coupled and dynamic environment, such as the Data Grid, the sources may autonomously change not only their data but also their schema, query capabilities or semantics, which may consequently cause ongoing view maintenance to fail. We analyze the maintenance errors and classify them into different classes of dependencies. We then propose several dependency detection and correction algorithms to handle these new classes of concurrent changes. Our techniques are not tied to specific maintenance algorithms nor to a particular data model. To our knowledge, this is the first complete solution to the view maintenance concurrency problems for both data and schema changes. We have implemented the proposed solutions and experimentally evaluated the impact of anomalies on maintenance performance and trade-offs between different dependency detection algorithms.
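The paper's detection and correction algorithms are not spelled out in the abstract; the sketch below illustrates only the detection half of the idea, using a hypothetical per-source version counter to notice that a concurrent source update has invalidated an in-flight maintenance query, retrying rather than installing a stale result.

```python
class Source:
    """Toy autonomous source: rows plus a version counter that is
    bumped on every (data or schema) change."""
    def __init__(self, rows):
        self.rows, self.version = set(rows), 0

    def update(self, add=(), remove=()):
        self.rows |= set(add)
        self.rows -= set(remove)
        self.version += 1

    def query(self):
        return set(self.rows), self.version

def maintain_insert(view, new_row, other_source):
    """Maintain a join view R |x| S for a row inserted into R by
    probing S. If S changed concurrently (version mismatch around
    the probe), detect the dependency and retry the probe."""
    while True:
        s_rows, v_before = other_source.query()
        matches = {(new_row, s) for s in s_rows if s[0] == new_row[0]}
        _, v_after = other_source.query()
        if v_before == v_after:       # no conflicting concurrent update
            view |= matches
            return

S = Source({(1, 'x'), (2, 'y')})
view = set()
maintain_insert(view, (1, 'a'), S)    # join on the first attribute
print(view)                           # {((1, 'a'), (1, 'x'))}
```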
{"title":"Detection and correction of conflicting source updates for view maintenance","authors":"Songting Chen, Jun Chen, Xin Zhang, Elke A. Rundensteiner","doi":"10.1109/ICDE.2004.1320017","DOIUrl":"https://doi.org/10.1109/ICDE.2004.1320017","url":null,"abstract":"Data integration over multiple heterogeneous data sources has become increasingly important for modern applications. The integrated data is usually stored in materialized views for high availability and better performance. Such views must be maintained after the data sources change. In a loosely-coupled and dynamic environment, such as the Data Grid, the sources may autonomously change not only their data but also their schema, query capabilities or semantics, which may consequently cause the ongoing view maintenance fail. We analyze the maintenance errors and classify them into different classes of dependencies. We then propose several dependency detection and correction algorithms to handle these new classes of concurrency. Our techniques are not tied to specific maintenance algorithms nor to a particular data model. To our knowledge, this is the first complete solution to the view maintenance concurrency problems for both data and schema changes. We have implemented the proposed solutions and experimentally evaluated the impact of anomalies on maintenance performance and trade-offs between different dependency detection algorithms.","PeriodicalId":358862,"journal":{"name":"Proceedings. 20th International Conference on Data Engineering","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128253640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximate aggregation techniques for sensor databases
Jeffrey Considine, Feifei Li, G. Kollios, J. Byers
Pub Date: 2004-03-30. DOI: 10.1109/ICDE.2004.1320018
In the emerging area of sensor-based systems, a significant challenge is to develop scalable, fault-tolerant methods to extract useful information from the data the sensors collect. An approach to this data management problem is the use of sensor database systems, exemplified by TinyDB and Cougar, which allow users to perform aggregation queries such as MIN, COUNT and AVG on a sensor network. Due to power and range constraints, centralized approaches are generally impractical, so most systems use in-network aggregation to reduce network traffic. However, these aggregation strategies become bandwidth-intensive when combined with the fault-tolerant, multipath routing methods often used in these environments. For example, duplicate-sensitive aggregates such as SUM cannot be computed exactly using substantially less bandwidth than explicit enumeration. To avoid this expense, we investigate the use of approximate in-network aggregation using small sketches. Our contributions are as follows: 1) we generalize well-known duplicate-insensitive sketches for approximating COUNT to handle SUM, 2) we present and analyze methods for using sketches to produce accurate results with low communication and computation overhead, and 3) we present an extensive experimental validation of our methods.
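Contribution 1) is concrete enough to sketch. A Flajolet-Martin (FM) sketch is duplicate-insensitive, so re-delivered or multipath-duplicated insertions do not change it, and partial sketches can be merged with a bitwise OR. SUM then reduces to COUNT by treating a reading of value c from sensor i as the c distinct items (i, 1) .. (i, c); the paper describes a faster simulation of that loop, omitted here. The hash and constant choices below are standard FM details, not taken from the paper.

```python
import hashlib

def _hash(item):
    """32-bit hash of an item; the OR keeps it nonzero."""
    digest = hashlib.sha1(repr(item).encode()).digest()
    return int.from_bytes(digest[:4], 'big') | (1 << 32)

class FMSketch:
    """Duplicate-insensitive Flajolet-Martin sketch: inserting the
    same item twice has no effect, so sketches can be safely merged
    (OR) as they travel up a multipath aggregation tree."""
    PHI = 0.77351                     # standard FM correction constant

    def __init__(self):
        self.bitmap = 0

    def insert(self, item):
        h = _hash(item)
        lsb = (h & -h).bit_length() - 1   # geometric with p = 1/2
        self.bitmap |= 1 << lsb

    def merge(self, other):
        self.bitmap |= other.bitmap

    def insert_sum(self, sensor_id, value):
        # SUM as a COUNT of distinct items (the paper simulates this
        # loop in O(log value) expected time instead).
        for j in range(1, value + 1):
            self.insert((sensor_id, j))

    def estimate(self):
        r = 0
        while (self.bitmap >> r) & 1:
            r += 1                    # r = lowest unset bit position
        return 2 ** r / self.PHI

a, b = FMSketch(), FMSketch()
a.insert_sum('s1', 40)
b.insert_sum('s2', 60)
b.insert_sum('s1', 40)                # duplicate delivery of s1's reading
a.merge(b)
print(round(a.estimate()))            # near 100, not 140: duplicates are
                                      # absorbed (single sketches are noisy;
                                      # real systems average many of them)
```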
{"title":"Approximate aggregation techniques for sensor databases","authors":"Jeffrey Considine, Feifei Li, G. Kollios, J. Byers","doi":"10.1109/ICDE.2004.1320018","DOIUrl":"https://doi.org/10.1109/ICDE.2004.1320018","url":null,"abstract":"In the emerging area of sensor-based systems, a significant challenge is to develop scalable, fault-tolerant methods to extract useful information from the data the sensors collect. An approach to this data management problem is the use of sensor database systems, exemplified by TinyDB and Cougar, which allow users to perform aggregation queries such as MIN, COUNT and AVG on a sensor network. Due to power and range constraints, centralized approaches are generally impractical, so most systems use in-network aggregation to reduce network traffic. However, these aggregation strategies become bandwidth-intensive when combined with the fault-tolerant, multipath routing methods often used in these environments. For example, duplicate-sensitive aggregates such as SUM cannot be computed exactly using substantially less bandwidth than explicit enumeration. To avoid this expense, we investigate the use of approximate in-network aggregation using small sketches. Our contributions are as follows: 1) we generalize well known duplicate-insensitive sketches for approximating COUNT to handle SUM, 2) we present and analyze methods for using sketches to produce accurate results with low communication and computation overhead, and 3) we present an extensive experimental validation of our methods.","PeriodicalId":358862,"journal":{"name":"Proceedings. 20th International Conference on Data Engineering","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123052900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Peering and querying e-catalog communities
B. Benatallah, Mohand-Said Hacid, Hye-young Paik, Christophe Rey, F. Toumani
Pub Date: 2004-03-30. DOI: 10.1109/ICDE.2004.1320076
More and more suppliers are offering access to their product or information portals (also called e-catalogs) via the Web. The key issue is how to efficiently integrate and query large, intricate, heterogeneous information sources such as e-catalogs. The traditional data integration approach, in which developing an integrated schema requires understanding both the structure and semantics of all source schemas to be integrated, is hardly applicable because of the dynamic nature and size of the Web. We present WS-CatalogNet: a Web-services-based data sharing middleware infrastructure whose aim is to enhance the potential of e-catalogs by focusing on the scalability and flexibility of their sharing and access.
{"title":"Peering and querying e-catalog communities","authors":"B. Benatallah, Mohand-Said Hacid, Hye-young Paik, Christophe Rey, F. Toumani","doi":"10.1109/ICDE.2004.1320076","DOIUrl":"https://doi.org/10.1109/ICDE.2004.1320076","url":null,"abstract":"More and more suppliers are offering access to their product or information portals (also called e-catalogs) via the Web. The key issue is how to efficiently integrate and query large, intricate, heterogeneous information sources such as e-catalogs. Traditional data integration approach, where the development of an integrated schema requires the understanding of both structure and semantics of all schemas of sources to be integrated, is hardly applicable because of the dynamic nature and size of the Web. We present WS-CatalogNet: a Web services based data sharing middleware infrastructure whose aims is to enhance the potential of e-catalogs by focusing on scalability and flexible aspects of their sharing and access.","PeriodicalId":358862,"journal":{"name":"Proceedings. 20th International Conference on Data Engineering","volume":"170 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122829902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applications for expression data in relational database systems
D. Gawlick, Dmitry Lenkov, Aravind Yalamanchi, L. Chernobrod
Pub Date: 2004-03-30. DOI: 10.1109/ICDE.2004.1320031
Support for an expression data type in a relational database system allows conditional expressions to be stored as data in database tables and evaluated using SQL queries. In the context of this new capability, expressions can be interpreted as descriptions, queries, and filters, and this significantly broadens the use of a relational database system to support new types of applications. The paper presents an overview of the expression data type, relates expressions to descriptions, queries, and filters, considers applications pertaining to information distribution, demand analysis, and task assignment, and shows how these applications can be easily supported with improved functionality.
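A hedged sketch of the inversion this enables: rows store predicates, and the query asks which stored predicates accept a given data item. The paper's expression type holds full conditional expressions evaluated via SQL; the restricted attribute/operator/value triples below are purely illustrative.

```python
import operator

OPS = {'=': operator.eq, '<': operator.lt, '>': operator.gt,
       '<=': operator.le, '>=': operator.ge}

# A "table" whose rows store conditional expressions as data, e.g.
# subscriber interests in an information-distribution application.
subscriptions = [
    ('alice', [('category', '=', 'car'), ('price', '<=', 15000)]),
    ('bob',   [('category', '=', 'car'), ('mileage', '<', 60000)]),
]

def evaluate(expr, item):
    """Does the stored expression (a conjunction of triples) accept item?"""
    return all(OPS[op](item[attr], val) for attr, op, val in expr)

# The usual roles are inverted: a data item arrives, and the stored
# expressions are filtered against it.
listing = {'category': 'car', 'price': 12000, 'mileage': 80000}
print([who for who, expr in subscriptions if evaluate(expr, listing)])
# ['alice']
```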
{"title":"Applications for expression data in relational database systems","authors":"D. Gawlick, Dmitry Lenkov, Aravind Yalamanchi, L. Chernobrod","doi":"10.1109/ICDE.2004.1320031","DOIUrl":"https://doi.org/10.1109/ICDE.2004.1320031","url":null,"abstract":"The support for the expression data type in a relational database system allows storing of conditional expressions as data in database tables and evaluating them using SQL queries. In the context of this new capability, expressions can be interpreted as descriptions, queries, and filters, and this significantly broadens the use of a relational database system to support new types of applications. The paper presents an overview of the expression data type, relates expressions to descriptions, queries, and filters, considers applications pertaining to information distribution, demand analysis, and task assignment, and shows how these applications can be easily supported with improved functionality.","PeriodicalId":358862,"journal":{"name":"Proceedings. 20th International Conference on Data Engineering","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116986565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A flexible infrastructure for gathering XML statistics and estimating query cardinality
J. Freire, Maya Ramanath, Lingzhi Zhang
Pub Date: 2004-03-30. DOI: 10.1109/ICDE.2004.1320085
A key component of XML data management systems is the result size estimator, which estimates the cardinalities of user queries. Estimated cardinalities are needed in a variety of tasks, including query optimization and cost-based storage design; and they can also be used to give users early feedback about the expected outcome of their queries. In contrast to previously proposed result estimators, which use specialized data structures and estimation algorithms, StatiX uses histograms to uniformly capture both the structural and value skew present in documents. The original version of StatiX was built as a proof of concept. With the goal of making the system publicly available, we have built StatiX++, a new and improved version of StatiX, which extends the original system in significant ways. In this demonstration, we show the key features of StatiX++.
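StatiX's histograms are built over schema types and capture value skew as well; as a simplified, hypothetical stand-in, the sketch below gathers only structural statistics (element counts and parent-to-child fan-outs) and estimates the cardinality of a simple path by multiplying average fan-outs along it.

```python
from collections import Counter
import xml.etree.ElementTree as ET

def build_stats(root):
    """count[tag]: element occurrences; fanout[(parent, child)]: total
    child elements, so fanout/count is the average fan-out per parent."""
    count, fanout = Counter(), Counter()
    def walk(elem):
        count[elem.tag] += 1
        for child in elem:
            fanout[(elem.tag, child.tag)] += 1
            walk(child)
    walk(root)
    return count, fanout

def estimate(path, count, fanout):
    """Cardinality estimate for a simple path /a/b/c: the number of
    'a' elements times the average fan-out of each subsequent step."""
    steps = path.strip('/').split('/')
    est = count[steps[0]]
    for parent, child in zip(steps, steps[1:]):
        est *= fanout[(parent, child)] / max(count[parent], 1)
    return est

doc = ET.fromstring("<lib><book><au/><au/></book><book><au/></book></lib>")
c, f = build_stats(doc)
print(estimate('/lib/book/au', c, f))   # 3.0
```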
{"title":"A flexible infrastructure for gathering XML statistics and estimating query cardinality","authors":"J. Freire, Maya Ramanath, Lingzhi Zhang","doi":"10.1109/ICDE.2004.1320085","DOIUrl":"https://doi.org/10.1109/ICDE.2004.1320085","url":null,"abstract":"A key component of XML data management systems is the result size estimator, which estimates the cardinalities of user queries. Estimated cardinalities are needed in a variety of tasks, including query optimization and cost-based storage design; and they can also be used to give users early feedback about the expected outcome of their queries. In contrast to previously proposed result estimators, which use specialized data structures and estimation algorithms, StatiX uses histograms to uniformly capture both the structural and value skew present in documents. The original version of StatiX was built as a proof of concept. With the goal of making the system publicly available, we have built StatiX++, a new and improved version of StatiX, which extends the original system in significant ways. In this demonstration, we show the key features of StatiX++.","PeriodicalId":358862,"journal":{"name":"Proceedings. 20th International Conference on Data Engineering","volume":"84 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120883441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A type-safe object-oriented solution for the dynamic construction of queries
Peter Rosenthal
Pub Date: 2004-03-30. DOI: 10.1109/ICDE.2004.1320073
Many object-oriented applications use large numbers of structurally different database queries. With current technology, writing applications that generate queries at runtime is difficult and error-prone. FROQUE, a framework for object-oriented queries, provides a secure and purely object-oriented solution for accessing relational databases. As such, it is easy for object-oriented programmers to use, and with the help of object-oriented compilers it guarantees that queries formulated in the object-oriented world result in correct SQL queries at execution time. Thus, FROQUE is an improvement over existing database frameworks, such as Apache OJB (the object-relational bridge), which are not strongly typed and can lead to runtime errors.
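FROQUE's guarantees come from a statically typed object-oriented language and its compiler, which this Python sketch cannot reproduce; it only illustrates the underlying builder idea (the API names are hypothetical): queries are composed from column and predicate objects rather than concatenated strings, so a structurally malformed query cannot be assembled.

```python
class Column:
    """A column reference; comparing it builds a Predicate object
    instead of a boolean, so no raw SQL strings are ever assembled."""
    def __init__(self, table, name):
        self.table, self.name = table, name
    def __eq__(self, value):
        return Predicate(self, '=', value)
    def sql(self):
        return f"{self.table}.{self.name}"

class Predicate:
    def __init__(self, column, op, value):
        self.column, self.op, self.value = column, op, value

class Query:
    def __init__(self, table, *columns):
        self.table, self.columns, self.preds = table, columns, []
    def where(self, pred):
        assert isinstance(pred, Predicate)  # runtime stand-in for the
        self.preds.append(pred)             # compile-time type check
        return self
    def sql(self):
        text = (f"SELECT {', '.join(c.sql() for c in self.columns)}"
                f" FROM {self.table}")
        if self.preds:
            text += " WHERE " + " AND ".join(
                f"{p.column.sql()} {p.op} %s" for p in self.preds)
        return text, [p.value for p in self.preds]

name, dept = Column('emp', 'name'), Column('emp', 'dept')
print(Query('emp', name).where(dept == 'R&D').sql())
# ('SELECT emp.name FROM emp WHERE emp.dept = %s', ['R&D'])
```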
{"title":"A type-safe object-oriented solution for the dynamic construction of queries","authors":"Peter Rosenthal","doi":"10.1109/ICDE.2004.1320073","DOIUrl":"https://doi.org/10.1109/ICDE.2004.1320073","url":null,"abstract":"Many object-oriented applications use large numbers of structurally different database queries. With current technology, writing applications that generate queries at runtime is difficult and error-prone. FROQUE, a framework for object-oriented queries, provides a secure and purely object-oriented solution to access relational databases. As such, it is easy to use for object-oriented programmers and with the help of object-oriented compilers it guarantees that queries formulated in the object-oriented world at execution time result in correct SQL queries. Thus, FROQUE is an improvement over existing database frameworks such as Apache OJB, the object relational bridge, which are not strongly typed and can lead to runtime errors.","PeriodicalId":358862,"journal":{"name":"Proceedings. 20th International Conference on Data Engineering","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121653131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the integration of structure indexes and inverted lists
R. Kaushik, R. Krishnamurthy, J. Naughton, R. Ramakrishnan
Pub Date: 2004-03-30. DOI: 10.1145/1007568.1007656
Recently, there has been a great deal of interest in the development of techniques to evaluate path expressions over collections of XML documents. In general, these path expressions contain both structural and keyword components. Several methods have been proposed for processing path expressions over graph/tree-structured XML data. These methods can be classified into two broad classes. The first involves graph traversal, where the input query is evaluated by traversing the data graph or some compressed representation of it. The other involves information-retrieval-style processing using inverted lists. In this framework, structure indexes have been proposed as a substitute for graph traversal. Here, we focus on a subclass of content-and-structure (CAS) queries consisting of simple path expressions. We study algorithmic issues in integrating structure indexes with inverted lists for the evaluation of these queries, where we rank all documents that match the query and return the top k documents in order of relevance.
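A minimal sketch of the integration the paper studies, using toy in-memory indexes (the actual structures are disk-resident and far richer): a structure index yields the sorted document list matching the path, an inverted list yields scored postings for the keyword, and a merge-intersection followed by top-k selection answers the query.

```python
import heapq

# Hypothetical toy indexes. A structure index maps a path to the sorted
# ids of matching documents; an inverted list maps a keyword to sorted
# (doc id, relevance score) postings.
structure_index = {'/article/abstract': [1, 3, 4, 7, 9]}
inverted_list = {'xml': [(1, 0.9), (2, 0.8), (4, 0.7), (7, 0.2), (8, 0.1)]}

def evaluate(path, keyword, k):
    """Merge-intersect the two sorted lists, then keep the k documents
    with the highest keyword relevance."""
    docs, postings = structure_index[path], inverted_list[keyword]
    i = j = 0
    hits = []
    while i < len(docs) and j < len(postings):
        d, (p, score) = docs[i], postings[j]
        if d == p:
            hits.append((score, d))
            i += 1
            j += 1
        elif d < p:
            i += 1
        else:
            j += 1
    return [d for _, d in heapq.nlargest(k, hits)]

print(evaluate('/article/abstract', 'xml', 2))   # [1, 4]
```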
{"title":"On the integration of structure indexes and inverted lists","authors":"R. Kaushik, R. Krishnamurthy, J. Naughton, R. Ramakrishnan","doi":"10.1145/1007568.1007656","DOIUrl":"https://doi.org/10.1145/1007568.1007656","url":null,"abstract":"Recently, there has been a great deal of interest in the development of techniques to evaluate path expressions over collections of XML documents. In general, these path expressions contain both structural and keyword components. Several methods have been proposed for processing path expressions over graph/tree-structured XML data. These methods can be classified into two broad classes. The first involves graph traversal where the input query is evaluated by traversing the data graph or some compressed representation. The other class involves information-retrieval style processing using inverted lists. In this framework, structure indexes have been proposed to be used as a substitute for graph traversal. Here, we focus on a subclass of CAS queries consisting of simple path expressions. We study algorithmic issues in integrating structure indexes with inverted lists for the evaluation of these queries, where we rank all documents that match the query and return the top k documents in order of relevance.","PeriodicalId":358862,"journal":{"name":"Proceedings. 20th International Conference on Data Engineering","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121910639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}