
Proceedings 17th International Conference on Data Engineering: latest publications

High dimensional similarity search with space filling curves
Pub Date : 2001-04-02 DOI: 10.1109/ICDE.2001.914876
Swanwa Liao, M. Lopez, Scott T. Leutenegger
We present a new approach for approximate nearest neighbor queries for sets of high dimensional points under any L_t-metric, t = 1, ..., ∞. The proposed algorithm is efficient and simple to implement. The algorithm uses multiple shifted copies of the data points and stores them in up to (d+1) B-trees, where d is the dimensionality of the data, sorted according to their position along a space filling curve. This is done in a way that allows us to guarantee that a neighbor within an O(d^(1+1/t)) factor of the exact nearest can be returned with at most (d+1) log_p n page accesses, where p is the branching factor of the B-trees. In practice, for real data sets, our approximate technique finds the exact nearest neighbor between 87% and 99% of the time and a point no farther than the third nearest neighbor between 98% and 100% of the time. Our solution is dynamic, allowing insertion or deletion of points in O(d log_p n) page accesses, and generalizes easily to find approximate k-nearest neighbors.
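The shifted-curve lookup described in the abstract can be sketched in a few lines. The following is a simplified illustration under stated assumptions (Z-order keys, small integer coordinates, L1 distance, a naive probe of adjacent curve positions per shifted copy); it is not the paper's B-tree implementation, and the names `morton_key`, `build_index`, and `approx_nn` are our own.

```python
import bisect

def morton_key(point, bits=10):
    """Interleave the bits of each coordinate to get the point's
    position along a Z-order space filling curve."""
    key = 0
    for b in range(bits):
        for x in point:
            key = (key << 1) | ((x >> (bits - 1 - b)) & 1)
    return key

def build_index(points, d, bits=10):
    """One sorted list per shift: copy j translates every point by j
    in each coordinate before computing its curve position (the paper
    stores these shifted copies in B-trees)."""
    indexes = []
    for j in range(d + 1):
        indexes.append(sorted((morton_key([x + j for x in p], bits), p)
                              for p in points))
    return indexes

def approx_nn(query, indexes, bits=10):
    """Probe each shifted list around the query's curve position and
    return the closest candidate seen (L1 distance here)."""
    best, best_dist = None, float("inf")
    for j, idx in enumerate(indexes):
        qk = morton_key([x + j for x in query], bits)
        pos = bisect.bisect_left(idx, (qk,))
        for i in range(max(0, pos - 1), min(len(idx), pos + 2)):
            p = idx[i][1]
            dist = sum(abs(a - b) for a, b in zip(query, p))
            if dist < best_dist:
                best, best_dist = p, dist
    return best

points = [[1, 1], [8, 8], [3, 4]]
index = build_index(points, d=2)
approx_nn([2, 2], index)  # finds [1, 1], the exact nearest neighbor here
```

The shifts matter because two points close in space can land far apart on a single curve; each translated copy changes where the curve "cuts" space, so at least one copy tends to keep true neighbors adjacent.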
Citations: 100
A split operator for now-relative bitemporal databases
Pub Date : 2001-04-02 DOI: 10.1109/ICDE.2001.914812
Mikkel Agesen, Michael H. Böhlen, Lasse Poulsen, K. Torp
The timestamps of now-relative bitemporal databases are modeled as growing, shrinking or rectangular regions. The shape of these regions makes it a challenge to design bitemporal operators that (a) are consistent with the point-based interpretation of a temporal database, (b) preserve the identity of the argument timestamps, (c) ensure locality and (d) perform efficiently. We identify the bitemporal split operator as the basic primitive to implement a wide range of advanced now-relative bitemporal operations. The bitemporal split operator splits each tuple of a bitemporal argument relation, such that equality and standard nontemporal algorithms can be used to implement the bitemporal counterparts with the aforementioned properties. Both a native database algorithm and an SQL implementation are provided. Our performance results show that the bitemporal split operator outperforms related approaches by orders of magnitude and scales well.
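As a toy illustration of the underlying idea only (not the paper's operator, which handles growing and shrinking regions in full generality), splitting a now-relative tuple at a chosen reference time yields a fixed rectangle that standard nontemporal algorithms can process, plus a residual now-relative part. All names and the tuple layout here are our own assumptions.

```python
NOW = None  # valid-time end follows the current time
UC = None   # transaction-time end is "until changed"

def split(tup, ref):
    """Split a bitemporal tuple (tt_start, tt_end, vt_start, vt_end,
    data) at reference time `ref`. The fixed part is a constant
    rectangle; the residual carries the still-growing region, or is
    None if the tuple was already fully fixed."""
    tt_s, tt_e, vt_s, vt_e, data = tup
    fixed = (tt_s, ref if tt_e is UC else tt_e,
             vt_s, ref if vt_e is NOW else vt_e, data)
    residual = None
    if tt_e is UC or vt_e is NOW:
        residual = (ref, tt_e, vt_s, vt_e, data)
    return fixed, residual

fixed, residual = split((1, UC, 1, NOW, "x"), ref=5)
# fixed    -> (1, 5, 1, 5, "x")
# residual -> (5, None, 1, None, "x")
```

After such a split, region overlap and equality reduce to ordinary interval comparisons, which is what lets the paper reuse standard nontemporal algorithms.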
Citations: 10
The Skyline operator
Pub Date : 2001-04-02 DOI: 10.1109/ICDE.2001.914855
S. Börzsönyi, Donald Kossmann, K. Stocker
We propose to extend database systems by a Skyline operation. This operation filters out a set of interesting points from a potentially large set of data points. A point is interesting if it is not dominated by any other point. For example, a hotel might be interesting for somebody traveling to Nassau if no other hotel is both cheaper and closer to the beach. We show how SQL can be extended to pose Skyline queries, present and evaluate alternative algorithms to implement the Skyline operation, and show how this operation can be combined with other database operations, e.g., join.
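The dominance test and a block-nested-loop style evaluation (one of the algorithm families the paper evaluates) can be sketched as follows; this is a minimal in-memory illustration where smaller is better in every dimension, and `dominates`/`skyline` are illustrative names.

```python
def dominates(p, q):
    """p dominates q if p is at least as good in every dimension and
    strictly better in at least one (smaller is better here)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Block-nested-loop style skyline: keep a window of mutually
    incomparable points, discarding any point dominated by a window
    entry and evicting window entries the new point dominates."""
    window = []
    for p in points:
        if any(dominates(w, p) for w in window):
            continue
        window = [w for w in window if not dominates(p, w)]
        window.append(p)
    return window

# Hotels as (price, distance_to_beach): keep those beaten on neither.
hotels = [(50, 800), (60, 200), (80, 100), (90, 300), (55, 900)]
skyline(hotels)  # -> [(50, 800), (60, 200), (80, 100)]
```

(90, 300) drops out because (60, 200) is both cheaper and closer; (55, 900) drops out because (50, 800) beats it on both dimensions.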
Citations: 2557
Variable length queries for time series data
Pub Date : 2001-04-02 DOI: 10.1109/ICDE.2001.914838
Tamer Kahveci, Ambuj K. Singh
Finding similar patterns in a time sequence is a well-studied problem. Most of the current techniques work well for queries of a prespecified length, but not for variable length queries. We propose a new indexing technique that works well for variable length queries. The central idea is to store index structures at different resolutions for a given dataset. The resolutions are based on wavelets. For a given query, a number of subqueries at different resolutions are generated. The ranges of the subqueries are progressively refined based on results from previous subqueries. Our experiments show that the total cost for our method is 4 to 20 times less than the current techniques including linear scan. Because of the need to store information at multiple resolution levels, the storage requirement of our method could potentially be large. In the second part of the paper we show how the index information can be compressed with minimal information loss. According to our experimental results, even after compressing the size of the index to one fifth, the total cost of our method is 3 to 15 times less than the current techniques.
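The multi-resolution idea can be sketched with plain Haar averaging: a coarse resolution gives a cheap lower bound on the true Euclidean distance, so it can prune candidates before the full sequences are compared. This is a simplified illustration under our own naming (`halve`, `multi_res`, `lower_bound`), not the paper's index structure.

```python
import math

def halve(seq):
    """One level of Haar averaging: pairwise means (length must be even)."""
    return [(seq[i] + seq[i + 1]) / 2 for i in range(0, len(seq), 2)]

def multi_res(seq, levels):
    """Store the sequence at several resolutions, fine to coarse."""
    out = [list(seq)]
    for _ in range(levels):
        out.append(halve(out[-1]))
    return out

def lower_bound(a_coarse, b_coarse, level):
    """sqrt(2^level) times the distance between level-`level` averages
    never exceeds the true Euclidean distance, since for each pair
    d1^2 + d2^2 >= (d1 + d2)^2 / 2. Coarse resolutions therefore give
    a safe pruning bound for candidate sequences."""
    return math.sqrt(2 ** level) * math.dist(a_coarse, b_coarse)
```

A query of arbitrary length is served by picking the stored resolutions that tile it, refining the candidate set as the resolution increases, which mirrors the progressive subquery refinement described above.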
Citations: 145
Model-based mediation with domain maps
Pub Date : 2001-04-02 DOI: 10.1109/ICDE.2001.914816
Bertram Ludäscher, Amarnath Gupta, M. Martone
Proposes an extension to current view-based mediator systems called "model-based mediation", in which views are defined and executed at the level of conceptual models (CMs) rather than at the structural level. Structural integration and lifting of data to the conceptual level is "pushed down" from the mediator to wrappers which, in our system, export the classes, associations, constraints and query capabilities of a source. Another novel feature of our architecture is the use of domain maps - semantic nets of concepts and relationships that are used to mediate across sources from multiple worlds (i.e. whose data are related in indirect and often complex ways). As part of registering a source's CM with the mediator, the wrapper creates a "semantic index" of its data into the domain map. We show that these indexes not only semantically correlate the multiple-worlds data, and thereby support the definition of the integrated CM, but they are also useful during query processing, for example, to select relevant sources. A first prototype of the system has been implemented for a complex neuroscience mediation problem.
Citations: 100
Using EELs, a practical approach to outerjoin and antijoin reordering
Pub Date : 2001-04-02 DOI: 10.1109/ICDE.2001.914873
Jun Rao, B. Lindsay, G. Lohman, H. Pirahesh, David E. Simmen
Outerjoins and antijoins are two important classes of joins in database systems. Reordering outerjoins and antijoins with innerjoins is challenging because not all the join orders preserve the semantics of the original query. Previous work did not consider antijoins and was restricted to a limited class of queries. We consider using a conventional bottom-up optimizer to reorder different types of joins. We propose extending each join predicate's eligibility list, which contains all the tables referenced in the predicate. An extended eligibility list (EEL) includes all the tables needed by a predicate to preserve the semantics of the original query. We describe an algorithm that can set up the EELs properly in a bottom-up traversal of the original operator tree. A conventional join optimizer is then modified to check the EELs when generating sub-plans. Our approach handles antijoin and can resolve many practical issues. It is now being implemented in an upcoming release of IBM's Universal Database Server for Unix, Windows and OS/2.
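The check a bottom-up optimizer performs with EELs can be sketched as a simple set-containment test. This is our own toy illustration (the query, predicate names, and `applicable` are hypothetical), not the paper's EEL construction algorithm.

```python
def applicable(predicates, joined):
    """A bottom-up optimizer may apply a join predicate to a sub-plan
    only once every table in the predicate's extended eligibility
    list (EEL) is present in the sub-plan."""
    return [p for p, eel in predicates.items() if eel <= joined]

# Hypothetical query: R LEFT OUTER JOIN S ON p1, then inner join T ON p2.
# p2 references only R and T, but its EEL is extended with S because
# evaluating p2 before the outerjoin would change the query's meaning.
predicates = {"p1": {"R", "S"}, "p2": {"R", "S", "T"}}
applicable(predicates, {"R", "T"})       # [] : p2 must wait for S
applicable(predicates, {"R", "S", "T"})  # ['p1', 'p2']
```

The point of the EEL is exactly this asymmetry: the plain eligibility list of p2 ({R, T}) would wrongly admit the reordered plan, while the extended list rules it out.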
Citations: 41
High-level parallelisation in a database cluster: a feasibility study using document services
Pub Date : 2001-04-02 DOI: 10.1109/ICDE.2001.914820
T. Grabs, Klemens Böhm, H. Schek
Our concern is the design of a scalable infrastructure for complex application services. We want to find out if a cluster of commodity database systems is well-suited as such an infrastructure. To this end, we have carried out a feasibility study based on document services, e.g. document insertion and retrieval. We decompose a service request into short parallel database transactions. Our system, implemented as an extension of a transaction processing monitor, routes the short transactions to the appropriate database systems in the cluster. Routing depends on the data distribution that we have chosen. To avoid bottlenecks, we distribute document functionality, such as term extraction, over the cluster. Extensive experiments show the following. (1) A relatively small number of components - for example, eight components - already suffices to cope with high workloads of more than 100 concurrently active clients. (2) Speedup and throughput increase linearly for insertion operations when increasing the cluster size. These observations also hold when bundling service invocations into transactions at the semantic layer. A specialized coordinator component then implements semantic serializability and atomicity. Our experiments show that such a coordinator has minimal impact on CPU resource consumption and on response times.
Citations: 19
Microsoft server technology for mobile and wireless applications
Pub Date : 2001-04-02 DOI: 10.1109/ICDE.2001.914832
P. Seshadri
Summary form only given. Microsoft is building a number of server technologies that are targeted at mobile and wireless applications. These technologies cover a wide range of customer scenarios and application requirements. The article discusses some of these technologies in detail.
Citations: 0
CORBA Notification Service: design challenges and scalable solutions
Pub Date : 2001-04-02 DOI: 10.1109/ICDE.2001.914809
Robert E. Gruber, Balachander Krishnamurthy, E. Panagos
Presents READY, a multi-threaded implementation of the CORBA Notification Service. The main contribution of our work is the design and development of scalable solutions for the implementation of the CORBA Notification Service. In particular, we present the overall architecture of READY, discuss the key design challenges and choices we made with respect to filter evaluation and event dispatching, and present the current implementation status. Finally, we present preliminary experimental results from our current implementation.
Citations: 16
MAFIA: a maximal frequent itemset algorithm for transactional databases
Pub Date : 2001-04-02 DOI: 10.1109/ICDE.2001.914857
D. Burdick, Manuel Calimlim, J. Gehrke
We present a new algorithm for mining maximal frequent itemsets from a transactional database. Our algorithm is especially efficient when the itemsets in the database are very long. The search strategy of our algorithm integrates a depth-first traversal of the itemset lattice with effective pruning mechanisms. Our implementation of the search strategy combines a vertical bitmap representation of the database with an efficient relative bitmap compression schema. In a thorough experimental analysis of our algorithm on real data, we isolate the effect of the individual components of the algorithm. Our performance numbers show that our algorithm outperforms previous work by a factor of three to five.
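The two ingredients named above, a vertical bitmap representation and a depth-first traversal of the itemset lattice, can be sketched as follows. This is a simplified illustration using Python integers as bitmaps, without the paper's pruning optimizations or bitmap compression; `maximal_frequent` is our own name.

```python
def maximal_frequent(transactions, minsup):
    """Depth-first search over the itemset lattice with one vertical
    bitmap per item (bit i set = item occurs in transaction i); a set
    is reported only if it is frequent and no frequent superset of it
    exists."""
    items = sorted({i for t in transactions for i in t})
    bitmap = {i: 0 for i in items}
    for row, t in enumerate(transactions):
        for i in t:
            bitmap[i] |= 1 << row
    maximal = []

    def dfs(prefix, bits, candidates):
        extended = False
        for k, item in enumerate(candidates):
            nb = bits & bitmap[item]          # AND bitmaps = intersect tid-lists
            if bin(nb).count("1") >= minsup:  # popcount = support
                extended = True
                dfs(prefix + [item], nb, candidates[k + 1:])
        if not extended and prefix:
            s = set(prefix)
            # supersets with lexicographically earlier items were
            # explored first, so a subset check against them suffices
            if not any(s < m for m in maximal):
                maximal.append(s)

    dfs([], (1 << len(transactions)) - 1, items)
    return maximal

ts = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
maximal_frequent(ts, 2)  # -> [{'a', 'b'}, {'a', 'c'}, {'b', 'c'}]
```

With minsup 2, every pair is frequent but {a, b, c} occurs only once, so the three pairs are the maximal frequent itemsets; the singletons are frequent too but are suppressed as subsets.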
Citations: 841