
Proceedings 18th International Conference on Data Engineering: Latest Publications

The ATLaS system and its powerful database language based on simple extensions of SQL
Pub Date : 2002-08-07 DOI: 10.1109/ICDE.2002.994734
Haixun Wang, C. Zaniolo
A lack of power and extensibility in their query languages has seriously limited the generality of DBMSs and hampered their ability to support new application domains, such as data mining. In this paper, we solve this problem with stream-oriented aggregate functions and generalized table functions that are definable by users in the SQL language itself, rather than in an external programming language. These simple extensions turn SQL into a powerful database language, which can express a wide range of applications, including recursive queries, ROLAP (relational online analytical processing) aggregates, time-series queries, stream-oriented processing and data-mining functions. The SQL extensions are implemented in ATLaS (Aggregate and Table Language and System).
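ATLaS defines such aggregates directly in SQL, with initialize/iterate/terminate blocks; that syntax is not reproduced on this page, so the sketch below is only a Python analogue of the idea of a stream-oriented aggregate that can return results before end-of-input. The class and function names are ours, not ATLaS's.

```python
class OnlineAvg:
    """Illustrative stream-oriented aggregate: emits a running average
    every `period` tuples instead of waiting for end-of-input."""

    def __init__(self, period=100):
        self.period = period

    def initialize(self, value):          # analogue of an INITIALIZE block
        self.total, self.count = float(value), 1

    def iterate(self, value):             # analogue of ITERATE; may emit early results
        self.total += value
        self.count += 1
        if self.count % self.period == 0:
            yield self.total / self.count

    def terminate(self):                  # analogue of TERMINATE
        yield self.total / self.count


def run(agg, stream):
    """Drive an aggregate over an iterable of values, yielding its results."""
    values = iter(stream)
    agg.initialize(next(values))
    for v in values:
        yield from agg.iterate(v)
    yield from agg.terminate()


if __name__ == "__main__":
    print(list(run(OnlineAvg(period=3), [1, 2, 3, 4, 5, 6, 7])))  # [2.0, 3.5, 4.0]
```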
Citations: 0
Data mining meets performance evaluation: fast algorithms for modeling bursty traffic
Pub Date : 2002-08-07 DOI: 10.1109/ICDE.2002.994770
Mengzhi Wang, N. Chan, S. Papadimitriou, C. Faloutsos, T. Madhyastha
Network, Web, and disk I/O traffic are usually bursty and self-similar and therefore cannot be modeled adequately with Poisson arrivals. However, we wish to model these types of traffic and generate realistic traces, because of obvious applications for disk scheduling, network management, and Web server design. Previous models (such as fractional Brownian motion and FARIMA) tried to capture the 'burstiness'. However, the proposed models either require too many parameters to fit and/or require prohibitively large (quadratic) time to generate large traces. We propose a simple, parsimonious method, the b-model, which solves both problems: it requires just one parameter, and can easily generate large traces. In addition, it has many more attractive properties: (a) with our proposed estimation algorithm, it requires just a single pass over the actual trace to estimate b. For example, a one-day-long disk trace at millisecond resolution contains about 86 million data points and requires about 3 minutes for model fitting and 5 minutes for generation. (b) The resulting synthetic traces are very realistic: our experiments on real disk and Web traces show that our synthetic traces match the real ones very well in terms of queuing behavior.
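A minimal sketch of the generation half of the b-model as we read it from this abstract: the total volume is recursively split between the two halves of each interval in proportions b and 1-b, with a coin flip deciding which half gets the larger share. The parameter names and the estimation pass are not from the authors' code.

```python
import random

def b_model_trace(total_volume, levels, b=0.7, rng=None):
    """Generate 2**levels volume bins by recursively splitting each
    interval's volume into fractions b and 1-b (b-model sketch)."""
    rng = rng or random.Random()
    trace = [float(total_volume)]
    for _ in range(levels):
        nxt = []
        for volume in trace:
            share = b if rng.random() < 0.5 else 1.0 - b
            nxt.extend([volume * share, volume * (1.0 - share)])
        trace = nxt
    return trace

if __name__ == "__main__":
    trace = b_model_trace(total_volume=1_000_000, levels=12, b=0.7)
    print(len(trace), round(max(trace)), round(min(trace)))  # 4096 bursty bins
```

With b = 0.5 the generated trace is perfectly uniform; values of b farther from 0.5 produce burstier, more self-similar traffic, which is what makes the single parameter expressive.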
Citations: 203
A graphical XML query language
Pub Date : 2002-08-07 DOI: 10.1109/ICDE.2002.994718
S. Flesca, F. Furfaro, S. Greco
Informally presents the query language XGL (eXtensible Graphical Language). The main features of the language are described by means of two queries on a document named "bib.xml" (a document describing the bibliographic details of a book).
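XGL itself is graphical and cannot be shown in plain text; purely as a point of comparison, the same kind of query over a bib.xml-style document can be written with Python's standard ElementTree as below. The document structure is assumed for illustration only.

```python
import xml.etree.ElementTree as ET

# A tiny bib.xml-style fragment; the real document's structure may differ.
BIB = """
<bib>
  <book year="1999"><title>Data on the Web</title><author>Abiteboul</author></book>
  <book year="2000"><title>XML Query Languages</title><author>Smith</author></book>
</bib>
"""

root = ET.fromstring(BIB)

# Query 1: titles of books published in 2000.
titles = [t.text for t in root.findall(".//book[@year='2000']/title")]

# Query 2: (author, title) pairs for every book.
pairs = [(b.findtext("author"), b.findtext("title")) for b in root.findall("book")]

print(titles)
print(pairs)
```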
Citations: 2
Towards meaningful high-dimensional nearest neighbor search by human-computer interaction
Pub Date : 2002-08-07 DOI: 10.1109/ICDE.2002.994777
C. Aggarwal
Nearest neighbor search is an important and widely used operation in a number of application domains. In many of these domains, the dimensionality of the data representation is often very high. Recent theoretical results have shown that the concept of proximity or nearest neighbors may not be very meaningful for the high dimensional case. Therefore, it is often a complex problem to find good quality nearest neighbors in such data sets. Furthermore, it is also difficult to judge the value and relevance of the returned results. In fact, it is hard for any fully automated system to convince a user of the quality of the nearest neighbors found unless the user is directly involved in the process. This is especially the case for high dimensional data, in which the meaningfulness of the nearest neighbors found is questionable. We address the complex problem of high dimensional nearest neighbor search from the user perspective by designing a system which uses effective cooperation between the human and the computer. The system provides the user with visual representations of carefully chosen subspaces of the data in order to repeatedly elicit his preferences about the data patterns which are most closely related to the query point. These preferences are used to determine and quantify the meaningfulness of the nearest neighbors. Our system is not only able to find and quantify the meaningfulness of the nearest neighbors, but is also able to diagnose situations in which the nearest neighbors found are truly not meaningful.
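The interactive loop (choosing subspaces, visualizing them, and collecting preferences) is the paper's contribution and is not reproduced here. The sketch below only shows the subspace-restricted distance computation such a loop would rank candidates with, and illustrates why full-space and subspace neighbors can disagree in high dimensions; all names are ours.

```python
import numpy as np

def knn_in_subspace(data, query, dims, k=5):
    """Rank points by Euclidean distance to `query` restricted to the
    chosen dimensions `dims`; returns the k best indices and distances."""
    dist = np.linalg.norm(data[:, dims] - query[dims], axis=1)
    order = np.argsort(dist)[:k]
    return order, dist[order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(1000, 100))               # 100-dimensional points
    query = rng.normal(size=100)
    full, _ = knn_in_subspace(data, query, np.arange(100))
    sub, _ = knn_in_subspace(data, query, np.arange(5))
    # In high dimensions the two rankings typically share few points,
    # one symptom of the meaningfulness problem discussed above.
    print(sorted(set(full) & set(sub)))
```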
Citations: 44
Similarity search over time-series data using wavelets
Pub Date : 2002-08-07 DOI: 10.1109/ICDE.2002.994711
I. Popivanov, Renée J. Miller
Considers the use of wavelet transformations as a dimensionality reduction technique to permit efficient similarity searching over high-dimensional time-series data. While numerous transformations have been proposed and studied, the only wavelet that has been shown to be effective for this application is the Haar wavelet. In this work, we observe that a large class of wavelet transformations (not only orthonormal wavelets but also bi-orthonormal wavelets) can be used to support similarity searching. This class includes the most popular and most effective wavelets being used in image compression. We present a detailed performance study of the effects of using different wavelets on the performance of similarity searching for time-series data. We include several wavelets that outperform both the Haar wavelet and the best-known non-wavelet transformations for this application. To ensure our results are usable by an application engineer, we also show how to configure an indexing strategy for the best-performing transformations. Finally, we identify classes of data that can be indexed efficiently using these wavelet transformations.
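As background for the pipeline the abstract describes, here is a minimal numpy sketch of wavelet-based reduction with the orthonormal Haar transform (the paper's baseline; the other wavelets it studies would be dropped in at the transform step). Because an orthonormal transform preserves Euclidean distance, the distance on a prefix of coefficients never exceeds the true distance, so index-level filtering causes no false dismissals.

```python
import numpy as np

def haar_dwt(x):
    """Orthonormal Haar transform of a length-2**m series,
    with coefficients ordered coarsest-first."""
    x = np.asarray(x, dtype=float)
    details = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        details.append(det)
        x = avg
    details.append(x)                     # final overall average
    return np.concatenate(details[::-1])

def reduce_series(x, k=8):
    """Keep only the first k (coarsest) coefficients as the index key."""
    return haar_dwt(x)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a, b = rng.normal(size=256), rng.normal(size=256)
    true_dist = np.linalg.norm(a - b)
    reduced_dist = np.linalg.norm(reduce_series(a) - reduce_series(b))
    print(reduced_dist <= true_dist + 1e-9, reduced_dist, true_dist)
```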
Citations: 317
OntoWebber: a novel approach for managing data on the Web
Pub Date : 2002-08-07 DOI: 10.1109/ICDE.2002.994763
Yuhui Jin, Sichun Xu, S. Decker, G. Wiederhold
OntoWebber is a system for managing data on the Web with formally encoded semantics. It aims at solving the problems current technologies are confronted with, namely, the reusability of software components, flexibility in personalization, and ease of maintenance for data intensive Web sites. Based on a domain ontology and a site modeling ontology, site views on the underlying data can be constructed as site models. Instantiation of these models will create a browsable Web site, and manipulation of the site models helps to reduce the high effort of personalizing and maintaining the Web site. In this paper we present the architecture and demonstrate the major components of the system.
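The abstract's key point is that the site is generated from a declarative site model over the data, so personalization and maintenance become edits to the model rather than to pages. The toy sketch below is only our illustration of that separation; none of the structures are OntoWebber's actual ontologies.

```python
# Underlying data (stand-in for the domain ontology's instances).
DATA = {"papers": [
    {"title": "A graphical XML query language", "topic": "XML"},
    {"title": "Exploiting punctuation semantics in data streams", "topic": "streams"},
]}

# Declarative site model: which pages exist and what each one shows.
SITE_MODEL = {
    "index": {"lists": "papers", "group_by": "topic"},
    "paper": {"shows": ["title", "topic"]},
}

def instantiate(model, data):
    """Turn the site model plus the data into a set of (url, content) pages."""
    pages, groups = {}, {}
    for p in data[model["index"]["lists"]]:
        groups.setdefault(p[model["index"]["group_by"]], []).append(p["title"])
    pages["/index"] = groups
    for p in data["papers"]:
        pages["/paper/" + p["title"]] = {k: p[k] for k in model["paper"]["shows"]}
    return pages

print(instantiate(SITE_MODEL, DATA))
```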
Citations: 15
Exploiting punctuation semantics in data streams
Pub Date : 2002-08-07 DOI: 10.1109/ICDE.2002.994733
Peter A. Tucker, D. Maier
Applications that process data streams are becoming common. These applications are often queries over streams, so it seems natural to use a database management system instead of a custom application. However, some traditional relational operators are not conducive to stream processing. We propose embedding punctuations into data streams. A punctuation is a predicate that describes a subset of tuples; it informs a stream processor that no tuples satisfying that predicate will appear after the punctuation.
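A hedged sketch of how an operator can exploit punctuations: a streaming SUM ... GROUP BY keeps per-group state and, when a punctuation arrives saying that no further tuples fall into certain groups, emits those groups and discards their state. The stream encoding below is ours, not the paper's.

```python
from collections import defaultdict

# Stream items: ("tuple", hour, amount), or ("punct", h) meaning
# "no tuple with hour <= h will follow".
def grouped_sums(stream):
    """Streaming SUM(amount) GROUP BY hour that uses punctuations to
    emit finished groups early and free their state."""
    state = defaultdict(float)
    for item in stream:
        if item[0] == "tuple":
            _, hour, amount = item
            state[hour] += amount
        else:
            _, bound = item
            for hour in sorted(h for h in state if h <= bound):
                yield hour, state.pop(hour)   # safe: no more tuples for this group

if __name__ == "__main__":
    stream = [("tuple", 1, 5.0), ("tuple", 1, 2.5), ("tuple", 2, 1.0),
              ("punct", 1), ("tuple", 2, 4.0), ("punct", 2)]
    print(list(grouped_sums(stream)))         # [(1, 7.5), (2, 5.0)]
```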
Citations: 10
The BINGO! focused crawler: from bookmarks to archetypes
Pub Date : 2002-08-07 DOI: 10.1109/ICDE.2002.994746
Sergej Sizov, Stefan Siersdorfer, M. Theobald, G. Weikum
The BINGO! system implements an approach to focused crawling that aims to overcome the limitations of the initial training data. To this end, BINGO! identifies, among the crawled and positively classified documents of a topic, characteristic "archetypes" and uses them for periodically re-training the classifier; this way the crawler is dynamically adapted based on the most significant documents seen so far. Two kinds of archetypes are considered: good authorities as determined by employing Kleinberg's link analysis algorithm, and documents that have been automatically classified with high confidence using a linear SVM classifier.
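A sketch of the archetype-selection step only, under our own simplifications: authority scores are taken as given (the paper computes them with Kleinberg's link analysis), SVM confidence is read from the linear classifier's margin, and the union of the top documents by either criterion feeds the periodic re-training. Thresholds, features, and negative-example handling are not the authors'.

```python
import numpy as np
from sklearn.svm import LinearSVC

def select_archetypes(X, authority, clf, top_k=10):
    """Among positively classified documents, pick the top authorities and
    the documents classified with the largest SVM margin."""
    margin = clf.decision_function(X)                 # signed confidence
    positive = np.where(margin > 0)[0]
    by_conf = positive[np.argsort(-margin[positive])][:top_k]
    by_auth = positive[np.argsort(-authority[positive])][:top_k]
    return np.union1d(by_conf, by_auth)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))                    # toy document features
    y = (X[:, 0] > 0).astype(int)                     # toy topic labels
    authority = rng.random(200)                       # stand-in for link-analysis scores
    clf = LinearSVC().fit(X, y)
    arch = select_archetypes(X, authority, clf)
    keep = np.concatenate([arch, np.where(y == 0)[0]])
    clf = LinearSVC().fit(X[keep], y[keep])           # periodic re-training
    print(len(arch))
```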
Citations: 17
NAPA: Nearest Available Parking lot Application
Pub Date : 2002-08-07 DOI: 10.1109/ICDE.2002.994767
Hae Don Chon, D. Agrawal, A. E. Abbadi
With the advances in wireless communications and mobile device technologies, location-based applications or services will become an essential part of future applications. We have developed a location-based application called NAPA (Nearest Available Parking lot Application) that assists users in finding the nearest parking space on campus. NAPA is an example of an application that combines a number of new features, such as location awareness, wireless communication, and a directory service, LDAP (Lightweight Directory Access Protocol).
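The core query behind such a service ("closest lot with free spaces to my position") is simple enough to sketch. Everything below, including the coordinates and field names, is illustrative; NAPA's wireless and LDAP plumbing is out of scope.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def nearest_available(user, lots):
    """Return the closest lot that still has free spaces, or None."""
    open_lots = [l for l in lots if l["free"] > 0]
    return min(open_lots,
               key=lambda l: haversine_km(user[0], user[1], l["lat"], l["lon"]),
               default=None)

if __name__ == "__main__":
    lots = [{"name": "Lot 10", "lat": 34.414, "lon": -119.843, "free": 0},
            {"name": "Lot 22", "lat": 34.412, "lon": -119.848, "free": 12}]
    print(nearest_available((34.4138, -119.8489), lots))   # Lot 22
```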
Citations: 22
Using Unity to semi-automatically integrate relational schema
Pub Date : 2002-08-07 DOI: 10.1109/ICDE.2002.994742
R. Lawrence, K. Barker
Unity is an architecture for integrating relational databases which performs three processes: meta-data capture, semantic integration, and query formulation and execution. The foundation of the architecture is a naming methodology that allows concepts to be integrated across systems. Semantic naming of schema constructs increases automation during integration and provides users with physical and logical access transparency during query formulation.
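A toy illustration of integration by shared semantic names, which is our own simplification: each local column is annotated with a global concept name, and columns that share a concept are merged into one integrated view. Unity's actual naming methodology and query processor are far richer than this.

```python
# Local schemas annotated with global concept names (all names hypothetical).
SCHEMA_A = {"cust_id": "Customer.Id", "cust_nm": "Customer.Name"}
SCHEMA_B = {"client_no": "Customer.Id", "client_name": "Customer.Name",
            "region": "Customer.Region"}

def integrate(*schemas):
    """Map each global concept to the local (database, column) pairs
    that implement it."""
    concepts = {}
    for i, schema in enumerate(schemas):
        for column, concept in schema.items():
            concepts.setdefault(concept, []).append((f"db{i}", column))
    return concepts

print(integrate(SCHEMA_A, SCHEMA_B))
```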
Citations: 4