Coarse indices for a tape-based data warehouse
T. Johnson
Pub Date: 1998-02-23 | DOI: 10.1109/ICDE.1998.655781
Data warehouses allow users to make sense of large quantities of detail data. While most queries can be answered through summary data, some queries can only be answered by accessing the detail data. It is usually not cost-effective to store terabytes of detail data online; instead, the detail data is stored on tape. The problem we address in this paper is how to index tape-based detail data. Conventional indices on tens of terabytes of data can require terabytes of storage themselves. We propose the use of coarse indices for tape-based detail data. Instead of specifying all locations of a record containing a particular key, the coarse index specifies whether or not a region of tape contains at least one record with a particular key value. Our proposal is based on the observation that while long tape seeks are fast, short tape seeks are slow. Therefore, indices that point to the exact record location on tape do not provide performance benefits that justify the cost of their storage. A few bits pointing to an appropriate location are enough. In this paper, we present the design of such a coarse index, and provide fast algorithms for updating and querying it. Our experiments on a large data set taken from an existing data warehouse show that using compressed bitmap indices offers an order-of-magnitude reduction in index size, permitting the online storage of the coarse indices. Analytical and simulation models of the time to fetch selected records from tape show that using coarse indices almost always reduces the total loading time compared to using dense tape-based indices or no index at all.
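To make the coarse-index idea concrete, here is a minimal sketch: one bit per (key value, tape region) pair instead of one pointer per record. The class name, the fixed region size, and the plain integer bit set are illustrative assumptions; the paper's actual compressed-bitmap encoding and update algorithms are not reproduced here.

```python
# Sketch of a coarse index: for each key value, keep one bit per tape region
# indicating whether that region holds at least one matching record.
from collections import defaultdict

class CoarseIndex:
    def __init__(self, num_regions):
        self.num_regions = num_regions
        # key value -> bitmap over tape regions (a plain int used as a bit set;
        # the paper compresses these bitmaps to keep the index small)
        self.bitmaps = defaultdict(int)

    def add_record(self, key, tape_offset, region_size):
        """Set the bit for the region containing this record."""
        region = tape_offset // region_size
        self.bitmaps[key] |= 1 << region

    def regions_for(self, key):
        """Return the tape regions that may contain records with this key."""
        bm = self.bitmaps.get(key, 0)
        return [r for r in range(self.num_regions) if bm & (1 << r)]

# Usage: a query for key 42 fetches only the regions whose bit is set,
# seeking region by region instead of record by record.
idx = CoarseIndex(num_regions=8)
idx.add_record(42, tape_offset=3_500, region_size=1_000)   # region 3
idx.add_record(42, tape_offset=7_200, region_size=1_000)   # region 7
print(idx.regions_for(42))   # [3, 7]
```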
{"title":"Coarse indices for a tape-based data warehouse","authors":"T. Johnson","doi":"10.1109/ICDE.1998.655781","DOIUrl":"https://doi.org/10.1109/ICDE.1998.655781","url":null,"abstract":"Data warehouses allow users to make sense of large quantities of detail data. While most queries can be answered through summary data, some queries can only be answered by accessing the detail data. It is usually not cost-effective to store terabytes of detail data online; instead, the detail data is stored on tape. The problem we address in this paper is how to index tape-based detail data. Conventional indices on tens of terabytes of data can require terabytes of storage themselves. We propose the use of coarse indices for tape-based detail data. Instead of specifying all locations of a record containing a particular key, the coarse index specifies whether or not a region of tape contains at least one record with a particular key value. Our proposal is based on the observation that while long tape seeks are fast, short tape seeks are slow. Therefore, indices that point to the exact record location on tape do not provide performance benefits to justify the cost of their storage. A few bits pointing to an appropriate location are enough. In this paper, we present the design of such a coarse index, and provide fast algorithms for its updating and querying. Our experiments on a large data set taken from an existing data warehouse show that using compressed bitmap indices offer an order-of-magnitude reduction in index size, permitting the online storage of the coarse indices. Analytical and simulation models of the time to fetch selected records from tape show that using coarse indices almost always improves reduces the total loading time as compared to using dense tape-based indices or to using no index at all.","PeriodicalId":264926,"journal":{"name":"Proceedings 14th International Conference on Data Engineering","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130948120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cyclic allocation of two-dimensional data
Sunil Prabhakar, K. Abdel-Ghaffar, D. Agrawal, A. E. Abbadi
Pub Date: 1998-02-23 | DOI: 10.1109/ICDE.1998.655763
Various proposals have been made for declustering 2D tiled data on multiple I/O devices. Strictly optimal solutions exist only under very restrictive conditions on the tiling of the 2D space or for very few I/O devices. In this paper, we explore allocation methods for the case where no strictly optimal solution exists. We propose a general class of allocation methods, referred to as cyclic allocation methods, and show that many existing methods are instances of this class. As a result, various seemingly ad hoc and unrelated methods are presented in a single framework. Furthermore, the framework is used to develop new allocation methods that give better performance than any previous method and that approach the best feasible performance.
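As a rough illustration of the cyclic class, the sketch below assigns tile (i, j) to device (i * skip + j) mod M; different skip values give different members of the class (skip = 1 corresponds to the classic Disk Modulo allocation). The function names and the specific form are assumptions made for illustration, not the paper's notation or its method for choosing the skip.

```python
# Cyclic declustering of a 2D tiled grid over M devices.
def cyclic_device(i, j, num_devices, skip=1):
    return (i * skip + j) % num_devices

# Devices touched by a range query over tiles [r0, r1] x [c0, c1];
# good declustering spreads these tiles over as many devices as possible.
def devices_for_range(r0, r1, c0, c1, num_devices, skip=1):
    return {cyclic_device(i, j, num_devices, skip)
            for i in range(r0, r1 + 1)
            for j in range(c0, c1 + 1)}

# A 2x3 query region hits all 5 devices with this skip choice.
print(sorted(devices_for_range(0, 1, 0, 2, num_devices=5, skip=2)))
```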
{"title":"Cyclic allocation of two-dimensional data","authors":"Sunil Prabhakar, K. Abdel-Ghaffar, D. Agrawal, A. E. Abbadi","doi":"10.1109/ICDE.1998.655763","DOIUrl":"https://doi.org/10.1109/ICDE.1998.655763","url":null,"abstract":"Various proposals have been made for declustering 2D tiled data on multiple I/O devices. Strictly optimal solutions only exist under very restrictive conditions on the tiling of the 2D space or for very few I/O devices. In this paper, we explore allocation methods where no strictly optimal solution exists. We propose a general class of allocation methods, referred to as cyclic allocation methods, and show that many existing methods are instances of this class. As a result, various seemingly ad hoc and unrelated methods are presented in a single framework. Furthermore, the framework is used to develop new allocation methods that give better performance than any previous method and that approach the best feasible performance.","PeriodicalId":264926,"journal":{"name":"Proceedings 14th International Conference on Data Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131068593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic granular locking approach to phantom protection in R-trees
K. Chakrabarti, S. Mehrotra
Pub Date: 1998-02-23 | DOI: 10.1109/ICDE.1998.655807
Over the last decade (1988-98), the R-tree has emerged as one of the most robust multidimensional access methods. However, before the R-tree can be integrated as an access method into a commercial-strength database management system, efficient techniques to provide transactional access to data via R-trees need to be developed. Concurrent access to data through a multidimensional data structure introduces the problem of protecting ranges specified in the retrieval from phantom insertions and deletions (the phantom problem). Existing approaches to phantom protection in B-trees (namely, key-range locking) cannot be applied to multidimensional data structures, since they rely on a total order over the key space on which the B-tree is designed. The paper presents a dynamic granular locking approach to phantom protection in R-trees. To the best of our knowledge, the paper provides the first solution to the phantom problem in multidimensional access methods based on granular locking.
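The sketch below illustrates the general idea of granular locking over a spatial key space, with exclusive locks only: a range scan locks every granule it overlaps, so a later insert into the scanned region blocks, which prevents phantoms without relying on a total key order. It is a deliberately simplified illustration under assumed names, not the paper's R-tree protocol or its lock modes.

```python
class Granule:
    def __init__(self, rect):
        self.rect = rect       # (xmin, ymin, xmax, ymax)
        self.holders = set()   # transaction ids holding the lock

def overlaps(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

class GranularLockManager:
    def __init__(self, granule_rects):
        self.granules = [Granule(r) for r in granule_rects]

    def lock_region(self, txn, rect):
        """Lock all granules overlapping rect; fail if another txn holds one."""
        needed = [g for g in self.granules if overlaps(g.rect, rect)]
        if any(g.holders - {txn} for g in needed):
            return False       # conflict: caller must wait or abort
        for g in needed:
            g.holders.add(txn)
        return True

# T1 scans a region; T2's insert into that region is blocked until T1 releases.
lm = GranularLockManager([(0, 0, 5, 5), (5, 0, 10, 5)])
assert lm.lock_region("T1", (0, 0, 4, 4))          # range scan
assert not lm.lock_region("T2", (2, 2, 2, 2))      # phantom insert blocked
```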
{"title":"Dynamic granular locking approach to phantom protection in R-trees","authors":"K. Chakrabarti, S. Mehrotra","doi":"10.1109/ICDE.1998.655807","DOIUrl":"https://doi.org/10.1109/ICDE.1998.655807","url":null,"abstract":"Over the last decade (1988-98), the R tree has emerged as one of the most robust multidimensional access methods. However, before the R tree can be integrated as an access method to a commercial strength database management system, efficient techniques to provide transactional access to data via R trees need to be developed. Concurrent access to data through a multidimensional data structure introduces the problem of protecting ranges specified in the retrieval from phantom insertions and deletions (the phantom problem). Existing approaches to phantom protection in B trees (namely, key range locking) cannot be applied to multidimensional data structures since they rely on a total order over the key space on which the B tree is designed. The paper presents a dynamic granular locking approach to phantom protection in R trees. To the best of our knowledge, the paper provides the first solution to the phantom problem in multidimensional access methods based on granular locking.","PeriodicalId":264926,"journal":{"name":"Proceedings 14th International Conference on Data Engineering","volume":"49 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132871503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ECA rule support for distributed heterogeneous environments
Sharma Chakravarthy, R. Le
Pub Date: 1998-02-23 | DOI: 10.1109/ICDE.1998.655825
The utility and functionality of active capability (event-condition-action, or ECA, rules) has been well established in the context of databases. Today, most commercial relational database management systems (RDBMSs) offer some form of ECA rule capability. In addition, several research prototypes have extended the ECA rule capability to object-oriented database management systems (OODBMSs). Sentinel, developed at the University of Florida, is one such prototype; it supports an expressive composite event specification language (Snoop), efficient event detection (using generated wrappers), conditions and actions (as a combination of OQL and C++), multiple and cascaded rule processing (using a rule scheduler and nested transactions), a visualization tool, and an editor for dynamic creation and management of rules. For active capability to be useful to a large class of advanced applications, it is necessary to go beyond what has been proposed and developed in the literature. Specifically, the extensions needed beyond the current state of the art are: (i) support active capability for non-database applications as well, (ii) support active capability for distributed environments, that is, allow ECA rules that span applications, and (iii) support active capability for heterogeneous sources of events (whether they are databases or not). The authors describe how they plan to address some of these extensions using a combination of existing components (COTS) and new functionality and services drawn from their experience in designing and implementing Sentinel.
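For readers unfamiliar with the ECA model, the following sketch shows the bare event-condition-action structure such rules take. It is a generic illustration with made-up names, not Sentinel's Snoop event language or its C++/OQL rule interface.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ECARule:
    event: str                          # event name the rule subscribes to
    condition: Callable[[dict], bool]   # evaluated when the event is raised
    action: Callable[[dict], None]      # executed only if the condition holds

class RuleEngine:
    def __init__(self):
        self.rules = []

    def register(self, rule):
        self.rules.append(rule)

    def raise_event(self, name, params):
        for rule in self.rules:
            if rule.event == name and rule.condition(params):
                rule.action(params)

# Example: when a "deposit" event arrives with a large amount, flag it.
engine = RuleEngine()
engine.register(ECARule(
    event="deposit",
    condition=lambda p: p["amount"] > 10_000,
    action=lambda p: print(f"flag account {p['account']} for review"),
))
engine.raise_event("deposit", {"account": "A-17", "amount": 25_000})
```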
{"title":"ECA rule support for distributed heterogeneous environments","authors":"Sharma Chakravarthy, R. Le","doi":"10.1109/ICDE.1998.655825","DOIUrl":"https://doi.org/10.1109/ICDE.1998.655825","url":null,"abstract":"The utility and functionality of active capability (event-condition-action or ECA rules) has been well established in the context of databases. Today, most of the commercial relational database management systems (RDBMSs) offer some form of ECA rule capability. In addition, there are several research prototypes that have extended the ECA rule capability to object-oriented database management systems (OODBMSs). Sentinel, developed at the University of Florida is one such prototype that supports an expressive composite event specification language (Snoop), efficient event detection (using generated wrappers), conditions and actions (as a combination of OQL and C++), multiple and cascaded rule processing (using a rule scheduler and nested transactions), a visualization tool, and an editor for dynamic creation and management of rules. In order for the active capability to be useful for a large class of advanced applications, it is necessary to go beyond what has been proposed/developed in the literature. Specifically, the extensions needed beyond the current state-of-the-art active capability are: (i) support active capability for non-database applications as well, (ii) support active capability for distributed environments; that is, allow ECA across applications, and (iii) support active capability for heterogeneous sources of events (whether they are databases or not). The authors address how they are planning on addressing some of the above extensions using a combination of existing components (COTS) and new functionality/services that are culled from their experience in designing and implementing Sentinel.","PeriodicalId":264926,"journal":{"name":"Proceedings 14th International Conference on Data Engineering","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133684044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ending the ROLAP/MOLAP debate: usage based aggregation and flexible HOLAP
C. Salka
Pub Date: 1998-02-23 | DOI: 10.1109/ICDE.1998.655775
Summary form only given, as follows. Over the past few years, OLAP vendors have engaged in a debate regarding relational versus multidimensional data stores. This debate has obscured the more significant problems facing today's OLAP customers: managing the exponential growth generated by multidimensional pre-aggregations, and architectural support for a wide array of OLAP data models. Microsoft discusses several aspects of its upcoming OLAP Server product, placing special emphasis on these areas. Solutions for managing voluminous pre-aggregates are discussed in the context of understanding the dynamics of the data explosion problem, together with a partial aggregation scheme that is adjusted according to user query needs. Flexible Hybrid OLAP is discussed as a compelling solution to a wide array of user needs and data requirements, with a focus on understanding the many different meanings associated with Hybrid OLAP and the strengths and weaknesses of each.
{"title":"Ending the ROLAP/MOLAP debate: usage based aggregation and flexible HOLAP","authors":"C. Salka","doi":"10.1109/ICDE.1998.655775","DOIUrl":"https://doi.org/10.1109/ICDE.1998.655775","url":null,"abstract":"Summary form only given, as follows. Over the past few years, OLAP vendors have engaged in a debate regarding relational versus multidimensional data stores. This debate has obscured the more significant problems facing today's OLAP customers: managing the exponential growth generated by multidimensional pre-aggregations, and architectural support for a wide array of OLAP data models. Microsoft discusses several aspects of its upcoming OLAP Server product, placing special emphasis on these areas. Solutions for managing voluminous pre-aggregates are discussed in the context of understanding of the dynamics of the data explosion problem, and a partial aggregation scheme that is adjusted according to user query needs. Flexible Hybrid OLAP is discussed as a compelling solution to a wide array of user needs and data requirements, with a focus on understanding the many different meanings associated with Hybrid OLAP and the strengths and weaknesses of each.","PeriodicalId":264926,"journal":{"name":"Proceedings 14th International Conference on Data Engineering","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115741784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote load-sensitive caching for multi-server database systems
S. Venkataraman, J. Naughton, M. Livny
Pub Date: 1998-02-23 | DOI: 10.1109/ICDE.1998.655814
The recent dramatic improvements in the performance of commodity hardware have made clusters of workstations or PCs an attractive and economical platform upon which to build scalable database servers. These clusters have large aggregate memory capacities; however, since this global memory is distributed, good memory management algorithms are necessary, or the large aggregate memory will go underutilized. The goal of this study is to develop and evaluate buffer management algorithms for database clusters. We propose a new buffer management algorithm, remote load-sensitive caching (RLS caching), that uses novel techniques to combine data placement with a simple modification of standard client-server page replacement algorithms to approximate a global LRU page replacement policy. Through an implementation in the SHORE database system, we evaluate the performance of RLS caching against other buffer management algorithms. Our study demonstrates that RLS caching indeed effectively manages the distributed memory of a server cluster.
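The target policy named above can be stated compactly: treat the cluster's aggregate buffer memory as one pool and evict the globally least-recently-used page. The sketch below shows that target policy in centralized form so the goal is concrete; it illustrates what is being approximated, not the distributed RLS algorithm itself, and all names are illustrative.

```python
from collections import OrderedDict

class GlobalLRU:
    def __init__(self, frames_per_node, num_nodes):
        self.capacity = frames_per_node * num_nodes   # aggregate memory
        self.pages = OrderedDict()                     # page id -> node holding it

    def access(self, page, node):
        if page in self.pages:
            self.pages.move_to_end(page)               # refresh recency on a hit
            return None
        evicted = None
        if len(self.pages) >= self.capacity:
            evicted, _ = self.pages.popitem(last=False)  # globally oldest page
        self.pages[page] = node
        return evicted

# Two nodes with two frames each behave like one four-frame LRU cache.
cache = GlobalLRU(frames_per_node=2, num_nodes=2)
for p in ["a", "b", "c", "d", "a", "e"]:
    victim = cache.access(p, node=0)
    if victim:
        print(f"evict {victim}")   # evicts "b" when "e" arrives
```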
{"title":"Remote load-sensitive caching for multi-server database systems","authors":"S. Venkataraman, J. Naughton, M. Livny","doi":"10.1109/ICDE.1998.655814","DOIUrl":"https://doi.org/10.1109/ICDE.1998.655814","url":null,"abstract":"The recent dramatic improvements in the performance of commodity hardware has made clusters of workstations or PCs an attractive and economical platform upon which to build scalable database servers. These clusters have large aggregate memory capacities, however, since this global memory is distributed, good algorithms are necessary for memory management, or this large aggregate memory will go underutilized. The goal of the study is to develop and evaluate buffer management algorithms for database clusters. We propose a new buffer management algorithm, remote load sensitive caching (RLS caching), that uses novel techniques to combine data placement with a simple modification of standard client server page replacement algorithms to approximate a global LRU page replacement policy. Through an implementation in the SHORE database system, we evaluate the performance of RLS caching against other buffer management algorithms. Our study demonstrates that RLS caching indeed effectively manages the distributed memory of a server cluster.","PeriodicalId":264926,"journal":{"name":"Proceedings 14th International Conference on Data Engineering","volume":"892 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116177766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and performance of an assertional concurrency control system
A. Bernstein, D. S. Gerstl, Wai-Hong Leung, P. M. Lewis
Pub Date: 1998-02-23 | DOI: 10.1109/ICDE.1998.655806
Serializability has been widely accepted as the correctness criterion for databases subject to concurrent access. Serializable execution is generally implemented using a two-phase locking algorithm that locks items in the database to delay transactions that are in danger of performing in a nonserializable fashion. Such delays are unacceptable in high-performance database systems and in systems supporting long-running transactions. A number of models have been proposed in which transactions are decomposed into smaller, atomic, interleavable steps. A shortcoming of much of this work is that little guidance is provided as to how transactions should be decomposed and what interleavings preserve correct execution. We previously proposed a new correctness criterion, weaker than serializability, that guarantees that each transaction satisfies its specification (A. Bernstein and P. Lewis, 1996). Based on that correctness criterion, we have designed and implemented a new concurrency control. Experiments using the new concurrency control demonstrate significant improvement in performance when lock contention is high.
{"title":"Design and performance of an assertional concurrency control system","authors":"A. Bernstein, D. S. Gerstl, Wai-Hong Leung, P. M. Lewis","doi":"10.1109/ICDE.1998.655806","DOIUrl":"https://doi.org/10.1109/ICDE.1998.655806","url":null,"abstract":"Serializability has been widely accepted as the correctness criterion for databases subject to concurrent access. Serializable execution is generally implemented using a two phase locking algorithm that locks items in the database to delay transactions that care in danger of performing in a nonserializable fashion. Such delays are unacceptable in high performance database systems and in systems supporting long running transactions. A number of models have been proposed in which transactions are decomposed into smaller, atomic, interleavable steps. A shortcoming of much of this work is that little guidance is provided as to how transactions should be decomposed and what interleavings preserve correct execution. We previously proposed a new correctness criterion, weaker than serializability, that guarantees that each transaction satisfies its specification (A. Bernstein and P. Lewis, 1996). Based on that correctness criterion, we have designed and implemented a new concurrency control. Experiments using the new concurrency control demonstrate significant improvement in performance when lock contention is high.","PeriodicalId":264926,"journal":{"name":"Proceedings 14th International Conference on Data Engineering","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115880605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SEMCOG: a hybrid object-based image database system and its modeling, language, and query processing
Wen-Syan Li, K. Candan
Pub Date: 1998-02-23 | DOI: 10.1109/ICDE.1998.655788
Image data is structurally more complex than traditional types of data. An image can be viewed as a compound object containing many sub-objects, each corresponding to an image region that is visually and semantically meaningful (e.g. a car or a man). We introduce a hierarchical structure for image modeling that supports image retrieval, at both the whole-image and object levels, using combinations of semantic expressions and visual examples. We introduce an image database system called SEMCOG (SEMantics and COGnition-based image retrieval). SEMCOG aims at integrating semantics- and cognition-based approaches and allows queries based on object-level information. We present a formal definition of a multimedia query language, give details of the database's implementation and query processing, and discuss our methods for merging similarities from different types of query criteria.
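One way to picture the hierarchical model is as images containing labeled sub-objects, each carrying a visual feature vector, so that a query can combine a semantic predicate with similarity to a visual example. The sketch below is an illustrative data layout and matching function under assumed names; it is not SEMCOG's actual schema, feature representation, or query language.

```python
from dataclasses import dataclass, field

@dataclass
class SubObject:
    label: str        # semantic label, e.g. "car", "man"
    features: list    # visual feature vector (illustrative)

@dataclass
class Image:
    image_id: str
    objects: list = field(default_factory=list)

def visual_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def query(images, label, example_features, max_distance):
    """Images containing a sub-object with this label and a similar appearance."""
    hits = []
    for img in images:
        for obj in img.objects:
            if obj.label == label and visual_distance(obj.features, example_features) <= max_distance:
                hits.append(img.image_id)
                break
    return hits

db = [Image("img-1", [SubObject("car", [0.9, 0.1]), SubObject("man", [0.2, 0.7])]),
      Image("img-2", [SubObject("car", [0.1, 0.9])])]
print(query(db, "car", [0.85, 0.15], max_distance=0.2))   # ['img-1']
```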
{"title":"SEMCOG: a hybrid object-based image database system and its modeling, language, and query processing","authors":"Wen-Syan Li, K. Candan","doi":"10.1109/ICDE.1998.655788","DOIUrl":"https://doi.org/10.1109/ICDE.1998.655788","url":null,"abstract":"Image data is structurally more complex than traditional types of data. An image can be viewed as a compound object containing many sub-objects. Each sub-object corresponds to image regions that are visually and semantically meaningful (e.g. car, man, etc.). We introduce a hierarchical structure for image modeling that supports image retrieval, at both the whole-image and object levels, using combinations of semantic expressions and visual examples. We introduce an image database system called SEMCOG (SEMantics and COGnition-based image retrieval). SEMCOG aims at integrating semantics- and cognition-based approaches and allows queries based on object-level information. We present a formal definition of a multimedia query language, we give details of the database's implementation and query processing, and we discuss our methods for merging similarities from different types of query criteria.","PeriodicalId":264926,"journal":{"name":"Proceedings 14th International Conference on Data Engineering","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127656978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grouping techniques for update propagation in intermittently connected databases
Sameer Mahajan, M. Donahoo, S. Navathe, M. Ammar, Sanjoy Malik
Pub Date: 1998-02-23 | DOI: 10.1109/ICDE.1998.655756
We consider an environment where one or more servers carry databases that are of interest to a community of clients. The clients are only intermittently connected to the server, for brief periods of time. Clients carry a part of the database for their own processing and accumulate local updates while disconnected. We call this the Intermittently Connected Database (ICDB) environment. ICDBs have a wide variety of applications, including sales force automation, insurance claim processing, and mobile workforces. Our focus is on the problem of update propagation at the server in ICDBs and the associated processing at the clients. The typical client-centric approach involves the communication and processing of updates and transactions on a per-client basis, ignoring the overlap of data between clients. The complexity of this approach is on the order of the number of connecting clients, thereby limiting the scalability of the server. We propose a data-centric approach which clusters data into groups and assigns to each client one or more of these groups. The proposed scheme results in server processing complexity on the order of the number of groups, which we control. We propose various techniques for grouping and discuss the processing required at the clients to enable the grouping approach. While the client-centric approach is expected to degrade significantly as the number of clients increases, we expect that a properly designed grouping scheme will sustain a significantly larger number of clients. A prototype has been developed and performance studies are in progress.
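A minimal sketch of the data-centric idea, under assumed names: the server appends each update once to the log of the affected group, and a reconnecting client pulls only the logs of its assigned groups, so propagation work at the server scales with the number of groups rather than the number of clients. This illustrates the general scheme only, not the paper's specific grouping techniques or client-side processing.

```python
from collections import defaultdict

class GroupedUpdateServer:
    def __init__(self, item_to_group):
        self.item_to_group = item_to_group     # data item -> group id
        self.group_logs = defaultdict(list)    # group id -> pending updates
        self.client_groups = {}                # client id -> set of group ids

    def register_client(self, client, groups):
        self.client_groups[client] = set(groups)

    def apply_update(self, item, value):
        # One log append per update, independent of how many clients hold the item.
        self.group_logs[self.item_to_group[item]].append((item, value))

    def updates_for(self, client):
        # On reconnect, ship the logs for this client's groups.
        return {g: self.group_logs[g] for g in self.client_groups[client]}

server = GroupedUpdateServer({"cust:1": "east", "cust:2": "west"})
server.register_client("rep-7", ["east"])
server.apply_update("cust:1", {"status": "gold"})
print(server.updates_for("rep-7"))   # {'east': [('cust:1', {'status': 'gold'})]}
```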
{"title":"Grouping techniques for update propagation in intermittently connected databases","authors":"Sameer Mahajan, M. Donahoo, S. Navathe, M. Ammar, Sanjoy Malik","doi":"10.1109/ICDE.1998.655756","DOIUrl":"https://doi.org/10.1109/ICDE.1998.655756","url":null,"abstract":"We consider an environment where one or more servers carry databases that are of interest to a community of clients. The clients are only intermittently connected to the server for brief periods of time. Clients carry a part of the database for their own processing and accumulate local updates while disconnected. We call this the Intermittently Connected Database (ICDB) environment. ICDBs have a wide variety of applications including sales force automation, insurance claim processing, and mobile workforces. Our focus is on the problem of update propagation at the server in ICDBs and the associated processing at the clients. The typical client-centric approach involves the communication and processing of updates and transactions on a per-client basis, ignoring the overlap of data between clients. The complexity of this approach is in the order of the number of connecting clients, thereby limiting the scalability of the server. We propose a data-centric approach which clusters data into groups and assigns to each client one or more of these groups. The proposed scheme results in server processing complexity on the order of the number of groups, which we control. We propose various techniques for grouping and discuss the processing required at the clients to enable the grouping approach. While the client-centric approach is expected to significantly degrade with the increasing number of clients, we expect that a properly designed grouping scheme will sustain a number of clients that is significantly larger. A prototype has been developed and performance studies are in progress.","PeriodicalId":264926,"journal":{"name":"Proceedings 14th International Conference on Data Engineering","volume":"169 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132468987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Content-based multimedia information management
R. Jain
Pub Date: 1998-02-23 | DOI: 10.1109/ICDE.1998.655783
Summary form only given. All image search engines provide mechanisms to search based on keywords and to perform content-based searching via query by pictorial example. In this paper, we first present some results from current image databases. We then present a new approach to exploring image databases. Most of our current results are drawn from image and video asset management systems designed at Virage. The new approach is based on a navigational paradigm being developed at the University of California, San Diego.
{"title":"Content-based multimedia information management","authors":"R. Jain","doi":"10.1109/ICDE.1998.655783","DOIUrl":"https://doi.org/10.1109/ICDE.1998.655783","url":null,"abstract":"Summary form only given. All image search engines provide mechanisms to search based on keywords and provide the ability to do content-based searching using querying by pictorial example. In this paper, first we present some results from current image databases. Then we present a new approach to exploring image databases. Most of our current results are drawn from image and video asset management systems designed at Virage. The new approach is based on a navigational paradigm being developed at the University of California, San Diego.","PeriodicalId":264926,"journal":{"name":"Proceedings 14th International Conference on Data Engineering","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126462949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}